arxiv-format/0403114v1.md
# Vetoes for Inspiral Triggers in LIGO Data

Nelson Christensen\\({}^{1}\\), Peter Shawhan\\({}^{2}\\) and Gabriela Gonzalez\\({}^{3}\\), for the LIGO Scientific Collaboration

\\({}^{1}\\)Physics and Astronomy, Carleton College, Northfield, MN 55057, USA
\\({}^{2}\\)LIGO Laboratory, California Institute of Technology, Pasadena, CA 91125, USA
\\({}^{3}\\)Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803, USA

[email protected]

## 1 Introduction

The Laser Interferometer Gravitational Wave Observatory (LIGO) is now operating and collecting meaningful scientific data [1]. The LIGO Scientific Collaboration (LSC) is conducting searches for several types of gravitational-wave signals. To date, analysis of data from LIGO's first science data run has led to the publication of searches for continuous waves from pulsars [2], the "inspiral" (orbital decay) of compact binary systems [3], short bursts [4], and an isotropic stochastic background [5].

The waveform emitted by an inspiraling compact binary system can be modeled accurately (at least if the component masses are fairly low), allowing the use of matched filtering techniques when searching for this class of signals. The data is filtered using a large number of "template" waveforms in order to search for signals with a range of physical parameters. For any given template, the search algorithm generates a "trigger" each time the output of the matched filter exceeds a pre-determined threshold in signal-to-noise ratio (SNR), provided that the frequency distribution of the signal power is consistent with the expected waveform, checked quantitatively using a \\(\\chi^{2}\\) test.

While this search algorithm is optimal in the case of stationary Gaussian noise, the actual noise in the LIGO interferometers is strongly influenced by optical alignment, servo control settings, and environmental conditions. Large amplitude _glitch_ events, or short stretches of increased broadband noise, will excite the inspiral filter for many templates, thereby leading to false triggers in the search. An example of this can be seen in Figure 1, where a large-amplitude glitch causes numerous inspiral templates to respond over a time span as long as \\(\\sim\\)16 s. This time scale is related to the treatment of sharp features in the power spectral density of the detector noise, which is used as an inverse weighting factor in the matched filter. Figure 2 shows the output of the matched filter in the vicinity of this glitch, illustrating how these inaccurate inspiral coalescence times can arise from the ringing of the template filter: although the main SNR peak is easily rejected by the \\(\\chi^{2}\\) test, there are a few nearby times for which the SNR exceeds the trigger threshold while \\(\\chi^{2}\\) is below the rejection threshold.

The goal of the studies described in this paper is to eliminate demonstrably bad stretches of data and to identify environmental or instrumental causes of glitches when possible, allowing us to "veto" (reject) any inspiral triggers occurring at nearby times. In addition to the main data channel in which a gravitational wave signal would appear (called "LSC-AS_Q" because it is the Length Sensing and Control signal extracted from the "Anti-Symmetric port" photodiode using Quadrature demodulation phase), numerous additional channels are recorded to monitor auxiliary optical signals and servo control points in the interferometer, as well as environmental conditions. In some cases, we are able to significantly reduce the rate of false triggers by using these additional channels as indicators of instrumental or environmental disturbances.

Figure 1: An example of how a large amplitude _glitch_ can cause numerous templates to report significant SNR triggers. The top trace shows a glitch observed in the time series of the LIGO Livingston gravitational wave channel, denoted L1:LSC-AS_Q. The bottom plot shows the inspiral triggers with SNR\\(>8\\) which were reported (based on filtering with many template waveforms) in the vicinity of this glitch. Each trigger is represented by a horizontal bar which extends from the time at which the template waveform passes 100 Hz to the inferred coalescence time. The vertical position of the bar indicates the maximum SNR observed in that template. The inferred coalescence times extend over a span of \\(\\sim\\)16 s.

LIGO's first science data run, called S1, spanned 17 days from August 23 to September 9, 2002. The second science data run, called S2, spanned two months from February 14 to April 14, 2003. The average noise in the LIGO interferometers was roughly an order of magnitude better during S2 than during S1. Building on the analysis of the S1 data [3], a search for binary neutron star (BNS) inspiral events is being conducted with the S2 data; an upper limit will be placed on the coalescence rate in the Milky Way and nearby galaxies [6]. The specifics of how the vetoes were determined are presented in the remainder of the paper. Section 2 outlines the concepts used in veto studies and summarizes the S1 veto analysis; a more complete description can be found in [3]. A comprehensive description of the S2 inspiral veto analysis is presented in Section 3. A summary of our conclusions, and thoughts on possible future analysis plans, is contained in Section 4. In the course of this paper we refer to the 4 km interferometer at Livingston, Louisiana, as L1, and the 4 km and 2 km interferometers at Hanford, Washington, as H1 and H2 respectively.

Figure 2: Time series displays of the output of the matched filter, for one particular template, in the vicinity of the large glitch shown in Figure 1. The top plot shows the SNR and the threshold used to identify triggers, while the bottom plot shows the \\(\\chi^{2}\\) variable which is required to be below a threshold. The circles note the times when the signal exceeded the SNR threshold (top), yet passed the \\(\\chi^{2}\\) test (bottom).

## 2 Vetoes for LIGO Science Data Run S1

A description of the vetoes implemented for the BNS inspiral analysis of data from LIGO science data run S1 [3] is presented here. In order to avoid the possibility of statistical bias, potential veto conditions were studied using only a "playground" data set comprising about 10 % of the collected data, selected by hand to give a sampling of different degrees of non-stationarity observed in the detector noise at different times. This playground data was not used in the calculation of the inspiral rate limit. Only the L1 and H1 interferometers were used for the S1 inspiral analysis. For either interferometer, sections of data were excluded from examination if there were problems with calibration signals. This resulted in 5 % of L1 data being excluded, and 7 % of the H1 data. In addition, periods of time when the noise level of an interferometer was abnormally large were excluded from analysis.
This determination was made through the monitoring of the band-limited root-mean-square noise that occurred in four frequency bands [3, 4]. This veto eliminated 8 % of L1 data and 18 % of H1 data.

Numerous interferometer control and environmental monitoring channels were examined at times when the inspiral templates reported triggers during the playground section, in order to look for correlations. The subset of channels which showed a possible correlation were processed using a glitch-finding program which generated "veto triggers". These veto triggers were compared to the list of inspiral triggers, with an adjustable time window to account for instrumental delays as well as the different trigger generation algorithms. The effectiveness of a channel as a veto, using a given time window, was measured by calculating the veto efficiency (fraction of inspiral triggers rejected by veto triggers), usage fraction (fraction of veto triggers coincident with at least one inspiral trigger), and dead-time (fraction of total run time during which inspiral triggers would be rejected according to the set of veto triggers and the time window).

The H1 channel H1:LSC-REFL_I, a photodiode signal at the interferometer's reflected port, was found to contain large glitches which correlated well with large glitches seen in the gravitational-wave channel. A program called _glitchMon_1 was used to filter the H1:LSC-REFL_I channel and record large excursions as veto triggers. A time window of \\(\\pm 1\\) s around these veto trigger times yielded a veto efficiency of over 60 % for inspiral triggers with SNR \\(>10\\), with a dead-time of only 0.2 %. A prospective veto condition for the L1 interferometer, using a channel called L1:LSC-AS_I which is derived from the same photodiode as the gravitational-wave channel, was abandoned due to concerns that a gravitational wave could appear in this channel with non-negligible amplitude.

Footnote 1: _glitchMon_, written by M. Ito (University of Oregon), is a program which looks for transient signals in selected LIGO data channels. It is based on the LIGO Data Monitoring Tool (DMT) library.

Once these data quality and veto conditions had been developed using the playground data, they were subsequently implemented as part of the S1 analysis pipeline [3]. Inspiral triggers that passed the SNR threshold, \\(\\chi^{2}\\) test, and veto condition were reported as event candidates and were used to calculate an upper limit on the rate of binary inspirals in the Galaxy. A "_post-mortem_" examination of these events provided illuminating information. For example, the "loudest" event detected in the L1 data was the result of a saturation of the interferometer's antisymmetric port photodiode, probably caused by a misalignment in the optical system. These results, and the experience from the S1 veto analysis, served as a starting point for the examination of the S2 data.

## 3 Vetoes for LIGO Science Data Run S2

The character of the S2 data was very different from that of S1. The stability of all of the LIGO interferometers had improved significantly, and the quality of the data was dramatically improved. The interferometer sensitivities had also improved, and consequently new noise sources became visible. The experience derived from the S1 analysis was brought forward, but due to the different behavior of the interferometers it was necessary to reinspect all of the interferometer control and environmental monitoring channels in detail again. Numerous tools were used for the task.
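The bookkeeping used throughout these studies to rank candidate vetoes (efficiency, usage fraction and dead-time, as defined in Section 2) can be illustrated with a short sketch. The function below is only a minimal illustration of those definitions, assuming the triggers are available as simple lists of times; the toy numbers are invented and this is not LSC pipeline code.

```python
# Minimal sketch of the veto bookkeeping: given inspiral trigger times and veto
# trigger times, compute veto efficiency, usage fraction and dead-time for a
# symmetric time window. All names and toy data are illustrative assumptions.

def veto_statistics(inspiral_times, veto_times, window, run_duration):
    """Return (efficiency, usage_fraction, dead_time_fraction)."""
    # An inspiral trigger is vetoed if it falls within +/- window of any veto trigger.
    def is_vetoed(t):
        return any(abs(t - v) <= window for v in veto_times)

    vetoed = [t for t in inspiral_times if is_vetoed(t)]
    efficiency = len(vetoed) / len(inspiral_times) if inspiral_times else 0.0

    # A veto trigger is "used" if at least one inspiral trigger lies in its window.
    used = [v for v in veto_times
            if any(abs(t - v) <= window for t in inspiral_times)]
    usage_fraction = len(used) / len(veto_times) if veto_times else 0.0

    # Dead-time: total time covered by the union of the veto windows.
    intervals = sorted((v - window, v + window) for v in veto_times)
    dead, cur_start, cur_end = 0.0, None, None
    for start, end in intervals:
        if cur_end is None or start > cur_end:   # disjoint interval: close previous
            if cur_end is not None:
                dead += cur_end - cur_start
            cur_start, cur_end = start, end
        else:                                    # overlapping interval: merge
            cur_end = max(cur_end, end)
    if cur_end is not None:
        dead += cur_end - cur_start
    return efficiency, usage_fraction, dead / run_duration

# Toy example: scan several window lengths, as in the S1/S2 tuning studies.
inspiral = [10.2, 55.0, 55.3, 120.7, 300.1]
veto = [10.0, 120.0, 250.0]
for w in (0.1, 0.5, 1.0, 4.0):
    eff, use, dt = veto_statistics(inspiral, veto, w, run_duration=600.0)
    print(f"window +/-{w:>4} s  efficiency={eff:.2f}  usage={use:.2f}  dead-time={dt:.4f}")
```

As the window widens, efficiency and dead-time both grow, which is exactly the trade-off explored for the S2 veto tuning described below.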
What was initially helpful was to use the inspiral template triggers, found in playground data, and to inspect candidate channels at these times. Data quality examinations (more comprehensive than those done for the S1 analysis) provided the means to exclude sections of data where there were obvious problems. A number of problems caused data to be excluded: data outside of the official S2 run times, missing data, missing or unreliable calibration, non-standard servo control settings (in a few L1 segments), and input/output controller timing problems at L1. The playground data was then used to judge the relevance of other potential data quality flags, leading to two additional data quality vetoes. One concerned the H1 interferometer, which suffered from occasional episodes of elevated non-stationary broadband noise. We eliminated data in which the noise level in the upper part of the sensitive frequency band was high for consecutive periods of at least three minutes; this requirement ensured that a real gravitational wave inspiral signal would not invoke this veto condition, even if it had an exceptionally large amplitude. The other data quality veto used pertained to the saturation of the photodiode at the antisymmetric port at any of the LIGO interferometers, as was observed during S1. This effect correlated with a small but significant number of L1 inspiral triggers.

As in the S1 veto study, numerous channels, with various filters and thresholds, were processed with _glitchMon_ to produce veto triggers. The efficiency and dead-time for each possible veto condition was evaluated using a playground data set, which for the S2 run consisted of 600 seconds out of every 6370 seconds of data. This definition of the playground ensured that it was representative of the entire run; for instance, it included some data from all times of the day. The "safety" of several potential veto channels was evaluated by injecting simulated gravitational-wave signals into the interferometer arm lengths and checking for the signals to appear in various auxiliary channels. The signals were found to appear in just one tested channel, L1:LSC-AS_I, with measurable amplitude, so that channel was deemed to be unsafe for use as a veto.

No good candidate veto channels were identified for H1 and H2; however, there were a few candidates for L1. Non-stationary noise in the low-frequency part of the sensitivity range used for the inspiral search, initially 50-2048 Hz, appeared to be the dominant cause of deleterious glitches in the data. In particular, the non-stationary noise in L1 had dominant frequency content around 70 Hz. A key auxiliary channel, L1:LSC-POB_I, also had highly variable noise at 70 Hz. There are understandable physical mechanisms for this: the power recycling servo loop (for which L1:LSC-POB_I is the error signal) has a known instability around 70 Hz when the gain is too high; independently, when the gain of the differential arm length servo loop goes too low (due to low optical gain), glitches around 70 Hz tend to appear. Sometimes these glitches in L1:LSC-POB_I couple into the differential arm length signal sufficiently strongly to produce inspiral triggers. To avoid these excess triggers, we decided to increase the lower bound of the frequency band used for the BNS inspiral search to 100 Hz. This reduced the number of inspiral triggers, and simulations indicated the loss of sensitivity for the target population of binary neutron star systems was acceptably small.
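The veto triggers themselves are produced by filtering an auxiliary channel and flagging large excursions, as described above for _glitchMon_. The sketch below illustrates that idea only and is not the _glitchMon_ implementation: the fourth-order Chebyshev high-pass filter and a \\(7\\sigma\\) threshold echo the choices quoted in the next paragraph for L1:LSC-POB_I, but the passband ripple, the toy data and the detection logic are assumptions.

```python
# Rough sketch of glitch-style veto-trigger generation: high-pass filter an
# auxiliary channel, then flag samples exceeding n_sigma times the filtered RMS.
import numpy as np
from scipy.signal import cheby1, filtfilt

def veto_triggers_from_channel(x, fs, corner_hz=70.0, order=4, n_sigma=7.0):
    """Return times (s) where the filtered channel exceeds n_sigma."""
    # Fourth-order Chebyshev type-I high-pass filter (1 dB passband ripple assumed).
    b, a = cheby1(order, 1.0, corner_hz, btype="highpass", fs=fs)
    y = filtfilt(b, a, x)
    sigma = np.std(y)
    above = np.flatnonzero(np.abs(y) > n_sigma * sigma)
    return above / fs

# Toy data: Gaussian noise plus one short, loud transient injected at t = 8 s.
fs = 2048.0
t = np.arange(int(60 * fs)) / fs
x = np.random.default_rng(0).normal(size=t.size)
x += 50.0 * np.exp(-0.5 * ((t - 8.0) / 0.001) ** 2)
print("veto trigger times (s):",
      np.unique(np.round(veto_triggers_from_channel(x, fs), 2)))
```

Comparing lists produced this way against inspiral triggers, for a range of time windows, is what generates efficiency-versus-dead-time curves like the one discussed next.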
The lists of veto triggers produced by _glitchMon_ were compared to the output of the inspiral template search using data in the S2 playground. Figure 3 shows an example of the veto efficiency versus deadtime for the channel L1:LSC-POB_I, using symmetric time windows of \\(0.0,\\pm 0.05,\\pm 0.1,\\pm 0.15,\\pm 0.2,\\pm 0.25,\\pm 0.3,\\pm 0.4,\\pm 0.5,\\pm 0.75, \\pm 1.0,\\pm 2.0,\\pm 4.0\\) and \\(\\pm 6.0\\) s. In this case, the L1:LSC-POB_I data was filtered with a fourth-order Chebyshev high-pass filter with a corner frequency of 70 Hz, and _glitchMon_ triggers with a significance of more than \\(7\\sigma\\) were taken to be veto triggers. Note that the veto efficiency rises significantly as the time window is increased.

Figure 3: An example of the veto efficiency (for BNS inspiral triggers in L1) versus deadtime for the veto channel L1:LSC-POB_I, using symmetric windows of \\(0.0,\\pm 0.05,\\pm 0.1,\\pm 0.15,\\pm 0.2,\\pm 0.25,\\pm 0.3,\\pm 0.4,\\pm 0.5,\\pm 0.75, \\pm 1.0,\\pm 2.0,\\pm 4.0\\), and \\(\\pm 6.0\\) s. The data from L1:LSC-POB_I was filtered with a fourth order Chebyshev 70 Hz high-pass filter, and excursions found by _glitchMon_ with significance of 7\\(\\sigma\\) or greater were taken to be veto triggers. These results are from the S2 playground data.

As was illustrated in Figure 1, a large-amplitude glitch can cause the inspiral search algorithm to generate triggers with inferred coalescence times rather far from the time of the glitch. For this reason, we found that we had to use rather long veto time windows to achieve good veto efficiency. After a long series of studies, we settled on using the L1:LSC-POB_I channel, with the filtering and threshold given above, with a very wide and asymmetric window, \\(-4\\) s to \\(+8\\) s. In the playground data, this veto condition vetoed 27 % of the BNS inspiral triggers with SNR \\(>8\\), and 35 % of the inspiral triggers with SNR \\(>10\\), with a dead-time of 2.5 %. The usage fraction of the veto was 25 % for SNR \\(>8\\) and 7 % for SNR \\(>10\\), while the expected random use would be 4.6 % and 0.5 % respectively. The final analysis of the full S2 data set (excluding the playground) was done using a more stringent \\(\\chi^{2}\\) threshold to reduce the number of false triggers, so the final veto efficiencies and usage fractions are somewhat lower than the numbers given above: the efficiency is 13 % for inspiral triggers with SNR \\(>8\\) and \\(30\\pm 10\\) % for inspiral triggers with SNR \\(>10\\), with dead-time of 3.0 %.

Figure 4 demonstrates the appropriateness of this veto channel in a different way, using data from an epoch in the S2 run during which the L1 detector noise was extremely non-stationary. Presented is a sample time-trace (from the S2 playground data) of the interferometer's gravitational wave signal channel, L1:LSC-AS_Q, after high-pass filtering, along with the signal from L1:LSC-POB_I. Also displayed in Figure 4 are the template waveform starting/ending times and the SNR for the BNS inspiral triggers, and the time intervals of the L1:LSC-POB_I veto triggers as reported by _glitchMon_.

In addition to the S2 search for binary neutron star inspiral signals, a search is underway for binary black hole (BBH) signals. These signals have shorter duration and are restricted to a lower frequency range than in the BNS case, so it is possible that different channels could provide the best veto conditions.
We have repeated the veto study using a preliminary list of BBH inspiral triggers in the S2 playground data. L1:LSC-POB_I again appears as a good candidate veto, with efficiency roughly comparable to what was measured for the BNS case. However, the channel L1:LSC-MICH_CTRL (the control signal for the servo loop which controls the differential distance between the beamsplitter and the input mirrors of the long Fabry-Perot arm cavities) appears to yield comparable veto efficiency with slightly less dead-time. Figures 5 and 6 show the veto efficiency versus dead-time for L1:LSC-MICH_CTRL and L1:LSC-POB_I, respectively, using veto time windows up to \\(\\pm 1\\) s. Combining the two channels only increases the veto efficiency by 1 %, indicating that the two channels appear to be glitching concurrently. The final choice of veto condition for the BBH inspiral search will be made after refinement of the inspiral search algorithm and parameters.

Figure 4: Correlation between glitches in the gravitational wave channel L1:LSC-AS_Q (abbreviated “ASQ” in the figure) and the prospective veto channel L1:LSC-POB_I (“POBI”). The first and third plots show time series of these channels after filtering with a fourth order Chebyshev 100 Hz high-pass filter. The second and fourth plots show the time intervals of the triggers reported by the software, represented as horizontal bars. In the case of L1:LSC-AS_Q, the data was filtered using many template waveforms, and the SNR for various templates is indicated by the vertical positions of the bars. In the case of L1:LSC-POB_I, the vertical position of the bar indicates the glitch “size” reported by _glitchMon_. The data shown here is from a time in the S2 playground data for which L1:LSC-AS_Q is especially glitchy and the efficiency of the L1:LSC-POB_I veto is especially good, and is not typical of the entire S2 run.

## 4 Discussion and Conclusions

LIGO is now acquiring data, and astrophysically interesting analyses are being conducted [2, 3, 4, 5]. From the S1 and S2 data it has been seen that spurious events, or glitches, can exceed the SNR threshold and occasionally pass the \\(\\chi^{2}\\) test in the BNS inspiral search. As the interferometers' sensitivities continue to improve, the character of the data changes. The investigations into possible vetoes for the inspiral analyses will continue to evolve as the interferometers' performance changes. For the S2 inspiral trigger studies we have eliminated problematic data using data quality checks and a coincident glitch veto. Data quality cuts eliminate high-noise data in H1 as well as photodiode saturations in all three LIGO interferometers. Based on preliminary investigations, the low-frequency cutoff for the BNS inspiral search was elevated to 100 Hz in order to avoid problematic non-stationary noise around 70 Hz. The L1:LSC-POB_I channel provided a moderately efficient veto for the L1 interferometer, with a dead-time of 3 %. No suitable veto conditions were identified for the H1 or H2 interferometers. The BBH inspiral search is still being developed and tuned. Based on preliminary studies, either L1:LSC-POB_I or L1:LSC-MICH_CTRL appears to provide a useful veto, comparable in efficiency to the BNS case.

For future LIGO science runs we hope to gain a better understanding of the root causes of glitches. As the interferometers' noise decreases it is hoped that environmental causes of triggers will be clearly identified.
It is likely that low-frequency environmental noise can cause higher frequency noise in the interferometer output through non-linear coupling. We intend to use higher-order statistical measures, such as the _bicoherence_, as a means of monitoring the non-linear up-conversion. Also, we hope to implement further inspiral waveform consistency tests [7] in order to eliminate false triggers that manage to pass the SNR threshold and current \\(\\chi^{2}\\) test.

Thanks to Laura Cadonati for providing _glitchMon_ veto trigger files, and to other members of the LSC Burst Analysis Group for discussions. We are pleased to acknowledge Peter Saulson for carefully reading the manuscript and providing helpful comments. This work was supported by grants from the National Science Foundation, including grants PHY-0071327, PHY-0107417, PHY-0135389, and PHY-0244357.

Figure 5: An example of the veto efficiency (for BBH inspiral triggers in L1) versus dead-time for the veto channel L1:LSC-MICH_CTRL, using symmetric windows of \\(0.0,\\pm 0.05,\\pm 0.1,\\pm 0.15,\\pm 0.2,\\pm 0.25,\\pm 0.3,\\pm 0.4,\\pm 0.5,\\pm 0.75\\), and \\(\\pm 1.0\\) s. The data from L1:LSC-MICH_CTRL was filtered with a fourth order Chebyshev 100 Hz high-pass filter, and resulting transients with amplitudes exceeding \\(16\\sigma\\) were declared veto triggers. These results are from the S2 playground data.

Figure 6: An example of the veto efficiency (for BBH inspiral triggers in L1) versus dead-time for the veto channel L1:LSC-POB_I, using symmetric windows of \\(0.0,\\pm 0.05,\\pm 0.1,\\pm 0.15,\\pm 0.2,\\pm 0.25,\\pm 0.3,\\pm 0.4,\\pm 0.5,\\pm 0.75\\), and \\(\\pm 1.0\\) s. The data from L1:LSC-POB_I was filtered with a fourth order Chebyshev 70 Hz high-pass filter, and resulting transients with amplitudes exceeding \\(7\\sigma\\) were declared veto triggers. These results are from the S2 playground data.

## References

* [1] Abbott B _et al_ 2004 _Nuclear Instruments and Methods in Physics Research A_ **517** 154
* [2] Abbott B _et al_ 2003 _Phys. Rev. D_, _Setting upper limits on the strength of periodic gravitational waves from PSR J1939+2134 using the first science data from the GEO 600 and LIGO detectors_, in press, _gr-qc/0308050_
* [3] Abbott B _et al_ 2003 _Phys. Rev. D_, _Analysis of LIGO data for gravitational waves from binary neutron stars_, in press, _gr-qc/0308069_
* [4] Abbott B _et al_ 2003 _Phys. Rev. D_, _First upper limits from LIGO on gravitational wave bursts_, in press, _gr-qc/0312056_
* [5] Abbott B _et al_ 2003, _Analysis of first LIGO science data for stochastic gravitational waves_, _gr-qc/0312088_
* [6] Abbott B _et al_ 2004, _Upper limit on the coalescence rate of Galactic and extragalactic binary neutron stars established from LIGO observations_, pre-print
* [7] Shawhan P and Ochsner E 2004, _Inspiral Waveform Consistency Tests_, submitted to this issue of _Classical and Quantum Gravity_
Presented is a summary of studies by the LIGO Scientific Collaboration's Inspiral Analysis Group on the development of possible vetoes to be used in evaluation of data from the first two LIGO science data runs. Numerous environmental monitor signals and interferometer control channels have been analyzed in order to characterize the interferometers' performance. The results of studies on selected data segments are provided in this paper. The vetoes used in the compact binary inspiral analyses of LIGO's S1 and S2 science data runs are presented and discussed.
arxiv-format/0404041v2.md
## Quantum Reality, Complex Numbers and the Meteorological Butterfly Effect

**by** **T.N.Palmer**

European Centre for Medium-Range Weather Forecasts, Shinfield Park, RG2 9AX, Reading, UK

[email protected]

Submitted to the "Bulletin of the American Meteorological Society", April 2004. Revised January 2005.

## 1 Introduction

Weather and climate affect the lives of virtually everyone on the planet. It is not surprising, therefore, how interdisciplinary is the science of meteorology, with clear quantitative links to many applied sciences (eg Palmer et al, 2004). But is it possible that interdisciplinary links might also exist "back" towards more fundamental physics? More specifically, could nonlinear thinking about predictability of weather help reformulate quantum theory in such a way as to help solve the conceptual and foundational difficulties which made the theory so difficult for Einstein to accept, and which continue to plague the theory to this day (Penrose, 2004)? On the face of it, this seems an utterly preposterous idea - what could the largely classical and familiar world of meteorology have to say about counter-intuitive notions like wave-particle duality, quantum non-locality, parallel universes and the like?

Notwithstanding such entirely reasonable pre-conceptions, the purpose of this article is to suggest that application of nonlinear meteorological thinking may indeed provide fresh insights on the foundational problems of quantum theory, as summarised in section 2. On this, the 100th anniversary of the publication of Einstein's seminal work on quantum theory and the photoelectric effect, we will focus on the two main concerns which ultimately led him to reject this theory: indeterminacy ("I cannot believe that God plays dice with the cosmos") and, more importantly, the notion of non-local causality, which Einstein referred to as "spooky action at a distance". In section 3 the validity of a key assumption required to demonstrate non-local causality in quantum theory is called into doubt from the perspective of a toy universe governed by the prototypical Lorenz (1963) model of low-order chaos. On the other hand, it is also shown in section 3 that low-order chaos cannot itself provide a solution to these quantum foundational problems. In section 4 is discussed the apparent finite-time predictability horizon associated with 3D inviscid fluid motion, referred to as the "meteorological butterfly effect". The existence of this horizon is associated with a self-similar upscale cascade of uncertainty and is quite different from that associated with low-order chaos. In section 5, an idealised representation of the upscale cascade is formulated using permutation operators that have the same multiplicative properties as the unit complex numbers. Complex numbers play an essential role in describing the evolution of the quantum wave-function, and in section 6, this reinterpretation of complex numbers leads to a reformulation of the quantum wave-function as a set of intricately-encoded binary sequences ("quantum DNA"). This reinterpretation allows the arguments in section 3 to be used to reject both quantum indeterminacy and quantum non-local causality.

The technical work on which this paper is based has been published (Palmer, 2004) in a journal not widely read in meteorological circles.
However, as the ideas put forward were motivated by idealised problems of predictability in meteorology, they might be of interest to philosophically-minded members of the American Meteorological Society. This paper assumes no prior knowledge of quantum theory. A glossary of the key terms used in the paper is given in the appendix.

## 2 Some Quantum Background

Quantum theory is the most successfully tested, yet least well understood, of all physical theories (see for example, Penrose, 2004). Einstein's dissatisfaction with quantum theory is well known; this section briefly summarises the two key reasons for such dissatisfaction: indeterminacy and non-local causality.

According to quantum theory, a quantum system (eg a photon) is described by a quantity called the wave-function \\(\\left|\\psi\\right\\rangle\\). When the quantum system is not being "observed", eg interrogated by some laboratory apparatus, \\(\\left|\\psi\\right\\rangle\\)'s evolution in time is described by a deterministic linear differential equation

\\[i\\hbar\\frac{\\partial\\left|\\psi\\right\\rangle}{\\partial t}=H\\left|\\psi\\right\\rangle \\tag{1}\\]

known as the Schrodinger equation. Here \\(\\hbar\\) is a form of Planck's constant and \\(H\\) is a linear operator known as the Hamiltonian, whose classical (ie non-quantum) form is well known to mathematical meteorologists as an expression for total energy. For the purposes of this paper, an absolutely central point about the Schrodinger equation is that it explicitly contains reference to the complex number \\(i=\\sqrt{-1}\\).

On the other hand, when we try to use quantum theory to predict the outcome of some possible measurement (eg to determine which path a photon takes through an interferometer, see below), quantum theory says that \\(\\left|\\psi\\right\\rangle\\) evolves non-deterministically; all that can be predicted is the probability of one of a number of possible outcomes. Stochastic generalisations of the Schrodinger equation can be formulated to account for this indeterminacy (Percival, 1998). In this sense, the indeterminism of quantum theory appears to arise from some external interaction of \\(\\left|\\psi\\right\\rangle\\) with the classical world (which may or may not include laboratory apparatuses). However, notwithstanding the philosophical difficulties associated with the notion of randomness (Stewart, 2004), surely it is unsatisfactory for a fundamental theory of physics to have to assume, _ab initio_, such a pre-existing classical world? This problem is brought into focus in cosmology. Cosmologists who wish to investigate \\(\\left|\\psi\\right\\rangle\\) for the whole universe do not have the luxury of invoking an external classical world. Because of this, some prominent quantum theorists believe that these probabilities arise because the universe splits into multiple universes every time a quantum interaction occurs (Deutsch, 1988). For example, the number of universes where outcome O is observed is in proportion to the probability of occurrence of O, as given by the "normal rules" of quantum theory. Many people find this so-called "many-worlds" interpretation too bizarre to accept, but equally hard to reject objectively.

One of the most famous experiments which illustrates quantum strangeness is Young's two-slit experiment (Fig 1a). This clearly demonstrates the wave nature of light, diffraction at the two slits causing an interference pattern on the back screen as the coherent beams of light emanating from the two slits combine.
If the intensity of the light source is reduced so that the source only emits one photon at a time, a photon is never observed to split in two, ie travel through the two slits at the same time; rather, individual photons are observed to go through one slit or the other. Despite this, when no attempt is made to find out which slit a given photon goes through, photons are never observed at a minimum of the classical interference pattern. The mystery is this: how do these single photons "know" never to travel to a position of destructive interference? Standard quantum theory accounts for this mathematically by making use of the linearity of the Schrodinger equation. If \\(\\left|\\psi_{{}_{top}}\\right\\rangle\\): "photon travels through top slit" and \\(\\left|\\psi_{{}_{bottom}}\\right\\rangle\\): "photon travels through bottom slit" are solutions of the Schrodinger equation, then so are complex linear superpositions \\(\\left(\\left|\\psi_{{}_{top}}\\right\\rangle+e^{i\\lambda}\\left|\\psi_{{}_{bottom}} \\right\\rangle\\right)/\\sqrt{2}\\). If we try to observe the existence of a photon near one of the slits, standard quantum theory says that this superposed wave-function reduces indeterminately to either \\(\\left|\\psi_{{}_{top}}\\right\\rangle\\) or \\(\\left|\\psi_{{}_{bottom}}\\right\\rangle\\) with equal probability. However, if we do not try to observe the photons travelling through the slits, then the superposed state remains a coherent entity until the photon (or lack of it) is observed at the back screen. The interference pattern at the back screen arises from variations in the complex phase factor \\(\\lambda\\).

Schrodinger himself realised the ludicrousness of the notion of "superposition". To illustrate this, he considered a cat in a closed box containing a phial of deadly cyanide gas (Fig 1b). The phial breaks if a radio-active atom decays within a period of time. According to quantum theory, the state of the atom within this period is again in a linearly superposed state (of decay and non-decay). On this basis, the cat too is somehow in a linear superposition of aliveness and deadness! Only by us observers opening the box and looking inside, does the cat non-deterministically evolve (by a supposed shake of God's dice) from this "undead" state, to one of definite aliveness or deadness. If we are to believe the standard rules of quantum theory, it is our curiosity that kills the cat! Why this should be, remains mysterious. Why we do not know about the cat, that is, why we do not observe superposed states, remains a very controversial topic, even amongst quantum experts.

Figure 1: _Some conceptual problems in quantum theory. a) Young’s two-slit experiment when the intensity of the light source is so low that only one photon is emitted at a time. Since the photons are never observed to split into two, how does any one photon know never to travel to a minimum of the interference pattern? b) Schrödinger’s cat in a closed box containing a phial of cyanide. If a radioactive atom decays within a certain period, the phial breaks and kills the cat. According to quantum theory, within this period of time, the cat is in a linearly superposed state of aliveness and deadness. c) A spin-0 source emits two “entangled” spin-1/2 particles in a suitably superposed spin state. According to quantum theory, measuring the spin of the left-hand particle instantaneously causes the spin state of the right-hand particle to become a definite “up” or “down”, no matter how far apart the two measuring devices are, contrary to the spirit of Einstein’s theory of relativity._

Putting these two problems together creates the apparent effect of non-local causality: what Einstein referred to as "spooky action at a distance". This was his most profound objection to quantum theory. Two spin-1/2 particles (eg electrons) are emitted from a zero angular-momentum source in a type of correlated superposed state known as an "entangled" state (Fig 1c). Let's say that the spin of the left-hand particle is then measured to be "up", relative to some direction \\(\\mathbf{n}\\). According to standard quantum theory, this instantaneously causes the entangled spin state to reduce to something definite, so that the spin of the distant particle is necessarily "down" with respect to \\(\\mathbf{n}\\). The phrase "spooky action at a distance" refers to the notion that a spin measurement on the left-hand particle has apparently instantaneously caused the spin of the right-hand particle to change from having an indeterminate value, to having some definite value, no matter how far apart these two particles are. No wonder Einstein was upset with this notion; invoking instantaneous action at a distance undermines his most valuable contribution to physics: the theory of relativity (which says, broadly speaking, that the effects of some localised cause can propagate no faster than the speed of light).

Einstein believed that there must be some underlying theory, deeper than conventional quantum theory, which was both realistic and local; realistic in the sense that quantum states can always be assigned definite values (ie up or down and not some strange combination of both), and local in the sense that distant measurements cannot instantaneously cause these spin values to change, ie no "spooky action at a distance". However, a celebrated mathematical result known as Bell's theorem (Bell, 1993) is usually interpreted as saying that if Einstein was right, the correlations between spin measurements of such entangled particle pairs must satisfy a certain inequality. Experimentally, this inequality is known to be violated. Hence, conventional wisdom has it that Einstein was wrong not to believe in spooky actions at a distance. (A readable version of Bell's theorem for the non-specialist can be found in Rae, 1986.) Whilst Bell's theorem appears to imply some form of non-local causality in quantum theory, there is something profoundly slippery going on here, because it can also be shown that quantum systems cannot be used to send information faster than the speed of light. What on earth is going on?

## 3 Counterfactual Definiteness and the Prototype Model of Weather Chaos

In this section the prototype model of weather chaos is used to call into question an implicit assumption in the proof of Bell's theorem - one rarely mentioned explicitly in text-book proofs. This assumption is rather metaphysical (consistent with the slipperiness mentioned at the end of the previous section) and is called "counterfactual definiteness". The notion of "counterfactual definiteness" is used to define what is meant by non-local causality: that some remote "cause" can lead to an instantaneous "effect" locally.
By questioning the notion of counterfactual definiteness, we are in turn calling into doubt the meaningfulness of the notion of non-local causality. Fig 1c showed the situation when both the left and right hand particles of an entangled particle pair were measured with magnets oriented in the same direction **n**. However, in order to establish Bell's theorem, we need to consider correlations between pairs of measurements when the magnets have different orientations, let's say **n** for the left-hand magnets and **n'** for the right-hand magnets. It is also necessary to assume that it is meaningful to ask: what would the spin of a left-hand particle have been had we actually measured it with magnets oriented in the **n'** direction (or, conversely, what would the spin of the right-hand particle have been had we actually measured it with magnets oriented in the **n** direction)? Note that by definition this question could never be actually answered experimentally. In fact it is an example of a counterfactual question, a question about things that didn't happen, but our intuition suggests might have happened.

It is easy to think of other examples of counterfactual questions. Would the world have gone to war in the last century if the assassin's bullet had missed Archduke Franz Ferdinand in Sarajevo in 1914? Would the weather in London have been sunny today if some Amazonian butterfly had not flapped its wings exactly one month earlier? Although we can't know the answer to these questions for sure, we nevertheless feel intuitively that each should in principle have an answer. But is intuition a reasonable guide in these matters?

If we analyse the operational meaning of such counterfactual questions, we are required to imagine, at some instant in time, a hypothetical dynamically-unconstrained perturbation to the actual universe, affecting only one small part of the universe, keeping the rest unchanged. Hence we must imagine a localised perturbation to the path of a specific speeding bullet in Sarajevo, or to the wings of a specific butterfly in Amazonia. In the case of Bell's theorem, the corresponding imagined perturbation is one that changes the orientation of the magnets, keeping unchanged the rest of the universe, including in particular the particle whose spin is being measured. The counterfactual questions are then in principle answerable by somehow integrating forward in time the dynamical equations of motion of the universe from these hypothetical perturbed states. Now if we were imagining gigantic hypothetical perturbations, such as would move whole galaxies around willy-nilly, we might well worry whether our imagination was being consistent with the laws of physics! However, issues of physical consistency seem less important for the counterfactuals above, because the imagined perturbations can be considered in some sense to be arbitrarily small. (For example, if you think the flap of a butterfly's wing is too large a perturbation, imagine instead some even smaller perturbation to a neuron in the butterfly's brain that somehow causes it to not flap at the key moment.)

Chaos theory suggests that, in this respect, our intuition may indeed be an unreliable guide. Consider, for example, the prototypical Lorenz (1963) equations

\\[\\begin{split}\\dot{X}&=-\\sigma X+\\sigma Y\\\\ \\dot{Y}&=-XZ+rX-Y\\\\ \\dot{Z}&=XY-bZ\\end{split} \\tag{2}\\]

whose attractor is illustrated (schematically) in Figure 2. (Here, _X_, _Y_, _Z_ are the components of the state vector of the Lorenz model.
The parameters \\(r\\), \\(b\\) and \\(\\sigma\\) are considered fixed, and have values 28, 8/3 and 10 respectively in Figure 2.)

_Figure 2. An illustration of the Lorenz (1963) attractor._

Following this, consider a "Lorenzian universe" whose two "laws of physics" are i) equation (2) and ii) a "cosmic" initial state lying on the attractor of (2). Now, at some time \\(t\\), imagine some hypothetical dynamically-unconstrained perturbation which changes one component of the Lorenzian state vector, keeping the other components fixed. Specifically, let _X_(t)\\(\\rightarrow\\)_X_(t)+\\(\\delta\\)_X_ keeping _Y_(t) and _Z_(t) unchanged. Similar to the counterfactual perturbations above, \\(\\delta\\)_X_ has been posited without regard to the "laws of physics" of our Lorenzian universe. Now, because of the fractal nature of the attractor, no matter how small \\(\\delta\\)_X_ is, the resulting perturbed state is almost certainly (ie with probability one with respect to the continuum measure of phase space) off the attractor. Hence the imagined perturbed state (_X_(t)+\\(\\delta\\)_X_, _Y_(t), _Z_(t)) is inconsistent with the second Lorenzian "law of physics". That is to say, if we were to ask the counterfactual question: would \\(Y\\) have been positive at _t\\({}_{2}\\)_\\(>\\)_t\\({}_{1}\\) if X had differed from its actual value at _t\\({}_{1}\\)_ by some imagined tiny amount \\(\\delta\\)_X_, then, according to the laws of physics of the Lorenzian universe, this counterfactual question is almost certainly neither true nor false. This argument calls into doubt counterfactual reasoning, and hence the notion of causality based on counterfactual reasoning. As discussed below, without counterfactual definiteness we cannot use the experimental violation of the Bell inequalities to infer non-local causality, and hence "spooky action at a distance".

However, is there any evidence that low-order chaos somehow underlies the fundamentals of quantum theory? This question has been discussed by the experts. For example, David Deutsch, one of the founding fathers of the quantum computer (which one day, who knows, may be used to make weather forecasts) explains in his popular book "The Fabric of Reality" (Deutsch, 1998):

"Chaos theory is about limitations on predictability in classical physics, stemming from the fact that almost all classical systems are inherently unstable... It is about an extreme sensitivity to initial conditions... Thus it is said that in principle the flap of a butterfly's wing in one hemisphere of the planet could cause a hurricane in the other hemisphere. The infeasibility of weather forecasting and the like is then attributed to the impossibility of accounting for every butterfly on the planet. However, real hurricanes and real butterflies obey quantum theory, not classical mechanics. The instability that would rapidly amplify slight mis-specifications of an initial classical state is simply not a feature of quantum-mechanical systems. In quantum mechanics, small deviations from a specified initial state tend to cause only small deviations from the predicted final state. Instead, accurate prediction is made difficult by quite a different effect."

In the above paragraph, Deutsch (who believes strongly in the many-worlds interpretation) is essentially remarking that the Schrodinger equation is linear and stable, whilst chaos is nonlinear and unstable. We appear to have a fundamental incompatibility.
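To make the Lorenzian-universe thought experiment concrete, here is a small numerical sketch. It integrates equation (2) and applies a tiny perturbation \\(\\delta X\\) to \\(X\\) alone; the integration scheme, step sizes and the size of \\(\\delta X\\) are arbitrary illustrative choices. The code can only illustrate sensitive dependence on initial conditions; the measure-theoretic claim that the perturbed state lies off the fractal attractor is not something a finite numerical experiment can demonstrate.

```python
# Minimal sketch of the "Lorenzian universe": integrate the Lorenz (1963)
# equations, then apply a small, dynamically-unconstrained perturbation to X
# alone (Y and Z unchanged) and compare the two trajectories.
import numpy as np

def lorenz_rhs(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), -x * z + r * x - y, x * y - b * z])

def integrate(state, dt=0.001, steps=20000):
    # Simple fourth-order Runge-Kutta integration (illustrative choice).
    for _ in range(steps):
        k1 = lorenz_rhs(state)
        k2 = lorenz_rhs(state + 0.5 * dt * k1)
        k3 = lorenz_rhs(state + 0.5 * dt * k2)
        k4 = lorenz_rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

# Spin up from an arbitrary point so the state is numerically close to the
# attractor, then run factual and counterfactual (X perturbed) trajectories.
on_attractor = integrate(np.array([1.0, 1.0, 1.0]))
factual = integrate(on_attractor.copy(), steps=20000)
counterfactual = integrate(on_attractor + np.array([1e-6, 0.0, 0.0]), steps=20000)
print("sign of Y, factual vs counterfactual:",
      np.sign(factual[1]), np.sign(counterfactual[1]))
print("separation after 20 time units:",
      np.linalg.norm(factual - counterfactual))
```

After long enough integration the two trajectories decorrelate completely, so even the sign of \\(Y\\) in the counterfactual run carries no information about the factual one.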
Roger Penrose is one of the few of the leading scientists working in the field who follows Einstein in believing that standard quantum theory is incomplete. He maintains that a satisfactory quantum-compatible theory of gravity will never emerge from the normal rules of quantum theory (a view which would rule out string theory as the basis of a theory of everything). Moreover, Penrose also presents evidence that any underlying theory of the quantum world should be based on some notion of non-computability (Penrose, 1994). A class of (mathematically well-posed) propositions is referred to as non-computable if there is no algorithm that can be guaranteed to determine the truth or falsehood of each member of the class. As an example of non-computability, consider the famous Mandelbrot set (see Fig 3). The point \\(c\\) on the Argand plane is said to belong to the Mandelbrot set if, starting from \\(z{=}0\\), the sequence of iterates of \\(z\\) under \\(z{\\rightarrow}z^{2}{+}c\\) does not diverge to infinity. The notion of non-computability arises here because (Blum et al, 1998) no algorithm can decide whether any given point on the plane lies in the Mandelbrot set.

**Figure 3**: _The fractal Mandelbrot set. No algorithm can decide whether any given point in the plane belongs to the Mandelbrot set (Blum et al 1998). As such the Mandelbrot set is non-recursive, or non-computable. Penrose (1994) has argued that any realistic theory which underpins standard quantum theory must incorporate elements of non-computability._

If one applied our discussion about the "Lorenz universe", then one could say that it would be impossible to decide algorithmically whether points in the Mandelbrot set continue to so belong under dynamically-unconstrained perturbations. In this sense, non-computability seems also to illustrate the problems with counterfactual reasoning. However, Penrose (1994) asks: "Does the phenomenon of chaos provide the needed non-computable physical basis? An example that is often quoted in this connection is the detailed long-range prediction of the weather." but then answers: "I should make it clear that despite such profound difficulties for deterministic prediction, all the normal systems that are referred to as chaotic are to be included in what I call computational... The predicted weather may well not be the weather that actually occurs, but it is perfectly plausible as a weather! ..."

Deutsch and Penrose lie at different ends of the spectrum of informed opinion concerning the meaning of quantum theory. And yet both reject low-order chaos theory as a route to uncovering this "meaning". One problem is that true fractality is not a finite-time property of a chaotic system; you would have to wait an infinite time before the true fractal nature of these dynamics is manifest. The real world of physics cannot wait that long!

## 4 The Meteorological Butterfly Effect and the Upscale Cascade

Are there deterministic systems, studied in meteorology, where total loss of predictability occurs in finite time? Yes! In this section is discussed the notion of the upscale cascade of uncertainty in self-similar multi-scale systems, leading to a much more radical loss of predictability than can be found in low-order chaos. The term "butterfly effect" is attributed to Ed Lorenz, the father of modern chaos theory (see Lorenz, 1993).
However, what I describe as the "meteorological butterfly effect" is not discussed in Lorenz (1963), but in his paper on upscale propagation of small-scale uncertainty in infinite dimensional (eg turbulent) systems (Lorenz 1969). There is an important conceptual difference. As Deutsch correctly points out, at the heart of chaos lies the notion of linear instability associated with exponential growth in amplitudes. By contrast, the meteorological butterfly effect is concerned with the growth in scale of arbitrarily small-scale initial perturbations. Uncertainty in the flap of a butterfly's wings leads to uncertainty in some gust, which leads to uncertainty in some cumulus cloud, which leads to uncertainty in some cyclone. This transfer of uncertainty is profoundly nonlinear; on the scale of a butterfly, a flap of its wings is not necessarily a small-amplitude perturbation.

The arguments in section 3, questioning the notion of counterfactual reasoning, are brought into yet sharper focus for multi-scale systems with low-dimensional attractors and large (potentially infinite) numbers of degrees of freedom. Hence, as above, we can argue that it is virtually certain that a hypothetical dynamically-unconstrained perturbation to a small-scale variable (a flap of a butterfly's wings), leaving all other larger-scale variables fixed, would take the system off the attractor. However, there is a second reason for considering such multi-scale systems: uncertainty due to a finite-amplitude but arbitrarily small-scale perturbation can propagate nonlinearly upscale, and infect the uncertainty of some given large-scale component of the flow, _in finite time_. This paradigm appears to apply to 3D fluid turbulence (eg Vallis, 1985).

A scaling argument (which the uninterested reader can skip) goes as follows. In the so-called Kolmogorov inertial sub-range, fluid kinetic energy \\(E(k)\\) per horizontal wavenumber \\(k\\) scales as \\(E(k)\\sim k^{-5/3}\\), so that fluid velocity \\(u(k)\\sim k^{-1/3}\\). Let us assume that the time it takes for error at wavenumber \\(2k\\) to infect wavenumber \\(k\\) (ie to propagate upscale one "octave") is proportional to the "eddy turnover time" \\(\\tau(k)\\sim(ku(k))^{-1}\\sim k^{-2/3}\\). Then the time \\(\\Omega\\big{(}k_{{}_{N}}\\big{)}\\) it takes error to propagate \\(N_{{}_{0}}\\) octaves from wavenumber \\(k_{{}_{N}}\\) to some large scale \\(k_{{}_{L}}\\) of interest is given by the geometric series:

\\[\\Omega\\big{(}k_{{}_{N}}\\big{)}=\\sum_{n=0}^{N_{0}-1}\\tau\\left(2^{n}k_{{}_{L}}\\right) \\tag{3}\\]

Since \\(\\tau\\left(2^{n}k_{{}_{L}}\\right)\\sim 2^{-2n/3}\\) and \\(2^{-2/3}<1\\), \\(\\Omega\\big{(}k_{{}_{N}}\\big{)}\\) tends to a finite limit as \\(k_{{}_{N}}\\to\\infty\\). Mathematically, this can also be viewed as a statement about possible non-uniqueness of solutions of the 3D fluid equations (but not the 2D or quasi-geostrophic equations; Vallis 1985): two initial states, which are identical except for some arbitrarily small-scale differences (ie identical in some suitable functional-analytic sense), can diverge finitely on finite scales and in finite time. For the inviscid (yet deterministic) Euler equations of fluid mechanics, there are clear examples of such non-uniqueness (Shnirelman, 1997), but no generic proof exists. Oddly, no counterproof exists for the viscous Navier-Stokes equations.
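The convergence of the geometric series in equation (3) is easy to check numerically. In the sketch below the eddy turnover time of the largest scale is normalised to one, an arbitrary choice; the point is only that adding more and more octaves of ever-smaller scales changes the predictability horizon by less and less.

```python
# Minimal sketch of equation (3): with tau(k) ~ k^(-2/3), the time for error to
# propagate upscale over N0 octaves is a convergent geometric series, so the
# predictability horizon stays finite however small the initial error scale is.
# The normalisation tau(k_L) = 1 is an arbitrary illustrative choice.

def horizon(n_octaves, tau_large_scale=1.0):
    # Sum tau(2^n k_L) for n = 0 .. n_octaves-1, with tau(2^n k_L) ~ 2^(-2n/3).
    return sum(tau_large_scale * 2.0 ** (-2.0 * n / 3.0) for n in range(n_octaves))

for n in (1, 5, 10, 20, 40):
    print(f"{n:>3} octaves: horizon = {horizon(n):.4f}")

# The series converges to 1 / (1 - 2^(-2/3)) ~ 2.70 in units of tau(k_L),
# no matter how many octaves separate the butterfly from the large scale.
print("limit:", 1.0 / (1.0 - 2.0 ** (-2.0 / 3.0)))
```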
If you can prove or disprove the existence or otherwise of a finite-time predictability horizon for the latter equations, submit your solution to the Clay Mathematics Institute Millennium Prize committee, and win yourself a million dollars ([http://www.claymath.org/millennium](http://www.claymath.org/millennium)). Here is a problem on the predictability of weather which ranks on a par (financially at least) with the most famous unsolved problems in mathematics, such as the Riemann Hypothesis!

There is more of the flavour of Penrose's non-computability in this issue of non-uniqueness than with ordinary low-order chaos. For example, based on the scaling argument associated with equation (3), even the sign of the relative vorticity of some large-scale circulation element could not, even in principle, be computed beyond the finite-time predictability horizon. However, to make progress in this direction, we would still have to overcome the apparent incongruity (as highlighted in Deutsch's quote in section 3) between the complex linear dynamics of the Schrodinger equation, and the real nonlinear dynamics which describes loss of predictability in meteorology.

## 5 The Upscale Cascade and a New Perspective on Complex Numbers

In this section we try to reconcile the apparent incongruity between the complex linear dynamics of the Schrodinger equation, and the real nonlinear dynamics of the upscale cascade. A clue to a possible reconciliation lies in the power-law properties (eg \\(E(k)\\sim k^{-5/3}\\)) of the upscale cascade, indicative of self-similarity. We see self-similarity quite clearly when we zoom into the non-computable Mandelbrot set, obtained by a simple recursive mapping on the complex plane.

As mentioned in section 2, complex numbers play an essential role in quantum theory. Complex analysis is used in many meteorological studies (eg Charney, 1947). On the other hand, I recall the comment of a family friend who told me once that she gave up studying mathematics at school when \\(\\sqrt{-1}\\) was introduced. She felt that this was the last straw - corrupting schoolchildren's minds with such nonsensical ideas! We professionals have a tendency to scoff at such naivety! Given the mathematical consistency (and indeed beauty and power) of the algebra of complex numbers, who cares what \\(\\sqrt{-1}\\) "really" means - other than as a symbol with certain properties? And in any case, if we meteorologists really have a conceptual hang-up with complex numbers, we can always solve meteorological problems like the Charney problem (with less efficiency, perhaps) using real analysis. By contrast, complex numbers lie at the heart of quantum theory; \\(\\sqrt{-1}\\) appears explicitly in Schrodinger's equation (1) and, related to this, \\(\\left|\\psi\\right\\rangle\\) is technically an element of a complex Hilbert space. As discussed, the phenomenon of quantum interference is linked to the wavefunction as a complex state vector. Quantum theorists can't be as complacent as meteorologists in ignoring the "meaning" of \\(\\sqrt{-1}\\). And yet they are! Having accepted the idea that Schrodinger's cat might exist in a linear superposition of states, I have yet to meet a quantum expert who expresses any further concern that this superposition might indeed be complex!
But what is the physical reality of \\(\\left|\\text{Alive cat}\\right\\rangle+\\sqrt{-1}\\left|\\text{Dead cat}\\right\\rangle?\\) My family friend (not to mention the poor cat) would be in a state of apoplexy at the very thought! Is it possible that there might be some connection between the self-similar upscale cascade and complex numbers?

In the last section, we considered the representation of some fluid state by a sequence

\\[S=\\left\\{a_{1},a_{2},a_{3},a_{4},a_{5},a_{6},a_{7},a_{8},\\ldots,a_{2^{N}}\\right\\} \\tag{5}\\]

of coefficients, eg of a spherical harmonic basis. In the next section I want to consider representing the wavefunction \\(\\left|\\psi\\right\\rangle\\) in terms of sets of sequences such as (5), but where the elements of \\(S\\) are just "1"s or "-1"s, ie \\(a_{j}\\in\\left\\{1,-1\\right\\}\\). To fix ideas, imagine \\(S\\) to be associated with the first \\(2^{\\text{N}}\\) binary digits of a number like \\(\\pi\\) or \\(\\sqrt{2}\\). The plan in this section is to reinterpret complex numbers as operators on \\(S\\). To do this, let us first define the negation of \\(S\\) as

\\[-S=\\left\\{-a_{1},-a_{2},-a_{3},-a_{4},-a_{5},-a_{6},-a_{7},-a_{8},\\ldots\\right\\} \\tag{6}\\]

so that \\(-(-S)=S\\). Using this, we define the operator \\(i\\) to act identically on successive pairs of elements of \\(S\\), negating every second element, and then reversing the order of the elements, so that

\\[i(S)=\\left\\{-a_{2},a_{1},-a_{4},a_{3},-a_{6},a_{5},-a_{8},a_{7},\\ldots\\right\\} \\tag{7}\\]

Then it is easily shown that

\\[i(i(S))=-S \\tag{8}\\]

so that \\(i\\) can be interpreted as a "square root of minus one". Moreover, by putting

\\[\\begin{array}{l}i^{1/2}(S)=\\left\\{-a_{4},a_{3},a_{1},a_{2},-a_{8},a_{7},a_{5},a_{6},\\ldots\\right\\}\\\\ i^{1/4}(S)=\\left\\{-a_{8},a_{7},a_{5},a_{6},a_{1},a_{2},a_{3},a_{4},\\ldots\\right\\}\\end{array} \\tag{9}\\]

(where \\(i^{1/2}\\) operates identically on successive quadruplets of elements of \\(S\\), and \\(i^{1/4}\\) on successive octuplets of elements), the reader can easily verify that

\\[\\begin{array}{l}i^{1/2}(i^{1/2}(S))=i(S)\\\\ i^{1/4}(i^{1/4}(S))=i^{1/2}(S)\\end{array} \\tag{10}\\]

consistent with multiplication of complex numbers. The pattern of permutations associated with these operators is shown in Fig 4; it is indeed an idealisation of the self-similar upscale cascade of uncertainty described in section 4. Using self-similarity to extend this pattern, it is straightforward to define the permutation operator \\(i^{\\alpha}\\) where \\(\\alpha\\) is any number with a finite binary expansion, a so-called dyadic rational. As \\(\\alpha\\) gets smaller and smaller, \\(i^{\\alpha}\\) moves elements to the front of the sequence (to the "large scales") from further and further back in the sequence (from the "small scales"). Like the fluid equations in the inviscid limit, \\(i^{\\alpha}\\) has a singular limit as \\(\\alpha\\rightarrow 0\\). One way to see this is to imagine that \\(S\\) is the binary expansion of a real number, and consider \\(i^{\\alpha}\\) as a function on the reals. In fact \\(i^{\\alpha}\\) is singular for any \\(\\alpha\\). As a result, \\(S(\\lambda)=i^{2\\lambda/\\pi}(S)\\) is only definable on the circle \\(0\\leq\\lambda\\leq 2\\pi\\) when \\(\\lambda\\) is a dyadic rational multiple of \\(\\pi\\). More specifically, \\(S(\\lambda)\\) cannot be continued to the irrational angles like conventional continuous functions such as \\(e^{i\\lambda}\\). We will see the consequence of this shortly.
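Since the operators in equations (7)-(10) are just permutations with sign changes, their algebra can be checked directly. The sketch below implements \\(i\\) and \\(i^{1/2}\\) exactly as defined above and verifies equations (8) and (10) on a finite sequence of \\(\\pm 1\\)s; the random test sequence is an arbitrary choice.

```python
# Minimal sketch of the permutation operators of equations (7)-(10), acting on
# sequences whose elements are +1 or -1: i acts on pairs, i^(1/2) on quadruplets.
import numpy as np

def op_i(s):
    # On each successive pair: (a1, a2) -> (-a2, a1), as in eq. (7).
    out = np.empty_like(s)
    out[0::2] = -s[1::2]
    out[1::2] = s[0::2]
    return out

def op_i_half(s):
    # On each successive quadruplet: (a1, a2, a3, a4) -> (-a4, a3, a1, a2), eq. (9).
    out = np.empty_like(s)
    out[0::4] = -s[3::4]
    out[1::4] = s[2::4]
    out[2::4] = s[0::4]
    out[3::4] = s[1::4]
    return out

# A sample sequence of +/-1 (here random; the text imagines binary digits of pi).
rng = np.random.default_rng(0)
S = rng.choice([1, -1], size=16)

assert np.array_equal(op_i(op_i(S)), -S)                 # i(i(S)) = -S, eq. (8)
assert np.array_equal(op_i_half(op_i_half(S)), op_i(S))  # i^(1/2) twice = i, eq. (10)
print("i acts as a square root of -1 on S; i^(1/2) acts as a square root of i.")
```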
Consider now the expression \[S^{\prime}=\cos\lambda^{\prime}\;S+\sin\lambda^{\prime}\;i(S) \tag{11}\] which parallels the familiar additive formula \(\cos\lambda+i\sin\lambda\) for a general unit complex number. For simplicity, suppose \(0<\lambda^{\prime}<\pi\,/\,2\). Then we interpret this formula in the present context as follows. If \(\cos\lambda^{\prime}\) is dyadic rational, then \[S^{\prime}=\left\{a^{\prime}_{1},a^{\prime}_{2},a^{\prime}_{3},a^{\prime}_{4},a^{\prime}_{5},a^{\prime}_{6},a^{\prime}_{7},a^{\prime}_{8},\ldots,a^{\prime}_{2^{N}}\right\}, \tag{12}\] in equation (11) denotes a sequence whose correlation with \(S\) equals \(\cos\lambda^{\prime}\). Conversely\({}^{2}\), if \(\sin\lambda^{\prime}\) is dyadic rational, then \(S^{\prime}\) denotes a sequence whose correlation with \(i(S)\) equals \(\sin\lambda^{\prime}\). As such, \(S^{\prime}\) will not vary continuously with \(\lambda^{\prime}\), and, as before, we can operate on such \(S^{\prime}\) with \(i^{\alpha}\), and represent these on the circle with the formula \(S^{\prime}(\lambda)=i^{2(\lambda-\lambda^{\prime})/\pi}(S^{\prime})\).

Footnote 2: If \(0<\cos\lambda^{\prime}<1\) is dyadic rational, then \(\sin\lambda^{\prime}\) is not, and vice versa.

Consider now the following _key_ question: do there exist angles \(\lambda\) where both \(S(\lambda)\) and \(S^{\prime}(\lambda)\) are simultaneously defined? If \(S(\lambda)\) and \(S^{\prime}(\lambda)\) were familiar "classical" functions on the circle (eg \(e^{i\lambda}\) and \(e^{i(\lambda-\lambda^{\prime})}\)), then both would be simultaneously defined for all angles \(\lambda\). It might therefore be naively imagined that by making \(N\) in equations (5) and (12) sufficiently large, there would be arbitrarily many angles where both \(S(\lambda)\) and \(S^{\prime}(\lambda)\) are simultaneously well defined. However, this is not the case. (The cognoscenti may recognise a similarity between this result and the Heisenberg uncertainty principle in quantum theory, whereby potential measurements based on non-commuting observables cannot simultaneously have well-defined outcomes.) The proof of this result is the central core of this work and makes the sequence construction "non-classical" and plausibly quantum theoretic. We can prove there are no simultaneously-defined directions by _reductio ad absurdum_ using a fundamental number-theoretic property of the cosine function.

Figure 4: A schematic of the permutation-operator representation of the unit complex numbers, motivated by the upscale cascade of uncertainty associated with the meteorological butterfly effect.

Assume there are points on the circle at which \(S(\lambda)\) and \(S^{\prime}(\lambda)\) are both simultaneously defined; then by the discussion above, \(\lambda^{\prime}\) must be a dyadic rational multiple of \(\pi\). For all such \(\lambda^{\prime}\) in the interval \((0,\pi\,/\,2)\) it is known that \(\cos\lambda^{\prime}\) is irrational (Jahnel, 2005). However, this contradicts the fact that \(\cos\lambda^{\prime}\) is a correlation coefficient between two finite sequences, and hence must be a rational number. Hence there are no points on the circle at which \(S(\lambda)\) and \(S^{\prime}(\lambda)\) are both simultaneously defined.
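The arithmetic underpinning this _reductio ad absurdum_ can be checked directly: the correlation between two finite \(\pm 1\) sequences of length \(2^{N}\) is always an exact rational with denominator dividing \(2^{N}\), and so can never equal the irrational value of \(\cos\lambda^{\prime}\) for a dyadic-rational \(\lambda^{\prime}/\pi\) in \((0,\pi/2)\). The minimal sketch below is illustrative only; the particular sequence built there is a hypothetical example chosen for the demonstration, not the construction of equation (11):

```python
# Minimal sketch: the correlation of two finite +1/-1 sequences is an exact
# rational number with denominator dividing 2^N -- the fact used in the
# reductio ad absurdum above.
from fractions import Fraction
import random

def correlation(S1, S2):
    """Exact correlation of two equal-length +1/-1 sequences."""
    assert len(S1) == len(S2)
    return Fraction(sum(a * b for a, b in zip(S1, S2)), len(S1))

random.seed(1)
N = 4
S = [random.choice([1, -1]) for _ in range(2 ** N)]
# Flip the last quarter of the elements: the resulting sequence agrees with S
# on 12 of 16 positions, so its correlation with S is exactly 1/2, a dyadic
# rational of the kind required for cos(lambda') in equation (11).
S_flipped = S[:12] + [-a for a in S[12:]]
print(correlation(S, S_flipped))   # prints 1/2
```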
Using the idealisation of the upscale cascade, as embodied in this reformulation of the complex numbers, we have managed to develop a fundamentally granular (ie generically singular) mathematical structure, without requiring infinite-time integration of the equations of motion to give us the asymptotic notion of a fractal attractor. If quantum theory could be completely reformulated, replacing the standard Hilbert space with sequences of the type \(S(\lambda)\), then we might be able to conclude that the evolution of the real universe is not indeterminate, does not have weird superposed states, and is not non-locally causal. Is this possible?

## 6 Binary Sequences, Quantum DNA and the Implicate Order

The discussion so far is leading us to consider reinterpreting the quantum wave-function \(\left|\psi\right\rangle\) in terms of binary sequences. The individual elements of these sequences would correspond to elements of quantum reality - what John Bell called "be-ables" (Bell, 1993). The proposal is that the sequences define the sample space over which quantum probabilities and quantum correlations are obtained. In such a reinterpretation, the elements of quantum reality never correspond to superpositions of values, but always to definite values. Moreover, just as a gene's DNA encodes information about the organism as a whole, \(S(\lambda)\) encodes the linkage between \(\left|\psi\right\rangle\) and the environment to which \(\left|\psi\right\rangle\) belongs. This notion of an encoded linkage between the parts and the whole has been referred to by the great 20th century quantum theorist David Bohm as nature's "implicate order" (Bohm, 1980). Bohm's ideas were inspired in part by fluid dynamical phenomena. The notion of implicate order can be grasped by considering equation (2). A partial time series of the \(X\) component presents an apparently random (or, what Bohm would have called, "explicate") order. On the other hand, with long enough time series and sophisticated nonlinear algorithms, we may be able to reconstruct the encoded implicate order - in this case the underlying fractal attractor. With this in mind, let us return to the problem of the remote measurements of an ensemble of pairs of entangled spin-1/2 particles as described in sections 2 and 3. The particles themselves belong to a larger system comprising \(N\) other elementary spin-1/2 systems including, for example, the measuring apparatuses. The implicate order associated with this "belonging" is encoded in \(S(\lambda)\). One of the properties of \(S(\lambda)\) is that it is defined on a set of \(2^{N}\) angles, and not on the continuum. However, in reality \(N\) is a very large number, so that the set of allowed angles appears for all practical purposes to be dense on the circle, and such granularity does not, at first sight, appear significant. However, there is an ontological sense in which this granularity literally makes all the difference in the world. For example, we have seen that the set of angles where \(S(\lambda)\) is well defined is necessarily disjoint from the set of angles where \(S^{\prime}(\lambda)\) is well defined, even though either set is effectively dense on the circle, and even though \(S\) and \(S^{\prime}\) may be arbitrarily well correlated with one another.
This means that the sample space of directions (eg relative to the distant stars) at which the left-hand particle has well-defined states is effectively dense on the circle, but nevertheless disjoint from the sample space at which the right-hand particle has well-defined states (and vice versa). Hence if we ask what the state of the left-hand particle would be relative to one of the directions for which the right-hand particle states are well defined, the answer is "undefined". This is no more than a variant of the argument used in section 3 to rule out counterfactual reasoning (where an arbitrarily-small reorientation of the underlying attractor will take a given point in state space "off the attractor"). As discussed in section 3, if we cannot appeal to counterfactual reasoning, we cannot conclude that quantum phenomena necessarily exhibit non-local causality. The correlation structure associated with the entangled sequences can also account for quantum interference. For example, from equation (11) the correlation between \(S\) and \(S^{\prime}\) varies between -1 (destructive interference) and 1 (constructive interference) as \(\lambda^{\prime}\) is varied. I want to conclude this section by discussing briefly how this formalism can be extended to describe the wave-function of general multi-state quantum systems. In section 5 we represented the complex number \(i\) as a permutation operator acting on successive pairs of elements of a binary sequence \(S\). However, this is by no means the only way of representing \(\sqrt{-1}\) using permutation operators. If instead we consider possible representations of \(\sqrt{-1}\) based on permutations of successive quadruplets of elements of \(S\), then it is easy to find the following set: \[\begin{array}{l}I(S)=\{-a_{2},a_{1},a_{4},-a_{3},-a_{6},a_{5},a_{8},-a_{7},\ldots\}\\ J(S)=\{-a_{3},-a_{4},a_{1},a_{2},-a_{7},-a_{8},a_{5},a_{6},\ldots\}\\ K(S)=\{-a_{4},a_{3},-a_{2},a_{1},-a_{8},a_{7},-a_{6},a_{5},\ldots\}\end{array} \tag{15}\] It can be easily checked that \(I^{2}(S)=J^{2}(S)=K^{2}(S)=-S\) and that, in addition, \(KJI(S)=-S\). These are the quaternion relations (in operator form). The quaternions were discovered by William Rowan Hamilton, whose name also describes an operator in Schrodinger's equation (1). Similar to the construction leading to equation (11), we can also define families of quaternionic sequences, eg \(S^{\prime}=\cos\theta\,I(S)+\sin\theta\,J(S)\), which combine elements of the quaternionic sequences \(I(S)\) and \(J(S)\) such that if \(\cos\theta\) is dyadic rational, the correlation between \(S^{\prime}\) and \(I(S)\) is equal to \(\cos\theta\), whilst if \(\sin\theta\) is dyadic rational the correlation between \(S^{\prime}\) and \(J(S)\) is equal to \(\sin\theta\) (see the footnote above). Using (15) and the notion of self-similarity, seven further representations of \(\sqrt{-1}\) can be found from permutations of successive octuplets of \(S\). Continuing inductively, the notion of self-similarity allows us to generate a set of \(2^{N}-1\) further \(\sqrt{-1}\) permutation operators acting on successive \(2^{N}\)-tuplets of sequence elements.
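As with the operator \(i\) earlier, the quaternion relations quoted above can be verified directly on a random \(\pm 1\) sequence. The following minimal sketch (again hypothetical code; the quadruplet rules are read off equation (15)) checks \(I^{2}(S)=J^{2}(S)=K^{2}(S)=-S\) and \(KJI(S)=-S\):

```python
# Minimal sketch: the quadruplet permutation operators I, J, K of equation (15)
# and the quaternion relations they satisfy.
import random

def blockwise(S, rule):
    """Apply 'rule' to each successive quadruplet of the sequence S."""
    out = []
    for k in range(0, len(S), 4):
        out += rule(*S[k:k + 4])
    return out

def I(S): return blockwise(S, lambda a1, a2, a3, a4: [-a2, a1, a4, -a3])
def J(S): return blockwise(S, lambda a1, a2, a3, a4: [-a3, -a4, a1, a2])
def K(S): return blockwise(S, lambda a1, a2, a3, a4: [-a4, a3, -a2, a1])

random.seed(2)
S = [random.choice([1, -1]) for _ in range(16)]
minus_S = [-a for a in S]
assert I(I(S)) == J(J(S)) == K(K(S)) == minus_S   # I^2 = J^2 = K^2 = -1
assert K(J(I(S))) == minus_S                      # KJI = -1
print("quaternion relations verified on a random +/-1 sequence")
```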
Briefly, then, a composite system comprising \(N\) elemental spin-1/2 systems is represented by binary sequences \(S_{j}\), \(1\leq j\leq N\), comprising at least \(2^{N}\) elements; the degrees of freedom being associated with the \(2^{N}-1\) permutation representations of \(\sqrt{-1}\) which, together with the identity operator, correspond to those required by quantum theory. The correlation coefficient between pairs of sequences plays the role of the Hilbert space inner product. The suggestion being made here is that by reinterpreting the conventional continuum field of complex numbers in terms of deterministic granular permutation operators, the quantum wave-function of compound objects (eg you, me, or the cosmos as a whole) can be represented in terms of a set of intricately-encoded binary sequences (somewhat like the DNA of a gene, but in a 4-dimensional space-time setting). Taken one at a time, these sequences look as if they are random (the explicate order) and yet they encode the entanglement of elemental quantum objects as demanded by the conventional Hilbert space representation of quantum theory (the implicate order). The work described here is essentially a kinematic reformulation of quantum theory. I have not yet discussed how dynamical evolution associated with the Schrodinger equation itself is expressed in this reformulation. This is work for the future. Also, one may legitimately ask whether any of this has any relevance for experimental physics. In this respect it is possible that the present formulation may give a more complete description of entanglement for 4 (or more) entangled spin-1/2 particles than appears possible in conventional quantum theory. The reason for this relates to a theorem in algebraic topology, which states that the Hilbert state space for 4 or more entangled spin-1/2 particles cannot be decomposed ("fibrated" is the correct technical word) into smaller state spaces. The formulation described here, being neither topological nor algebraic, is not restricted by this theorem. Curiously, the graviton (the quantum of the still unknown quantum theory of gravity) can be considered as a composite of 4 spin-1/2 particles.

## 7 Conclusions

Meteorology is an extraordinarily interdisciplinary subject, with quantitative links to many of the applied sciences. However, in this, the \(100^{\rm th}\) anniversary of Einstein's _annus mirabilis_, the possible relevance of nonlinear thinking about the predictability of weather has been considered, in an attempt to resolve the two key issues that led Einstein ultimately to reject quantum theory as a fundamental theory of physics: indeterminacy and non-local causality. We started by discussing one of the implicit assumptions in the theorem that is conventionally interpreted to imply non-local causality. This implicit assumption is that of counterfactual definiteness. The attractor of the Lorenz (1963) model of low-order chaos was used to illustrate the notion that there may be profound constraints on the freedom to vary local variables in the manner suggested by what might appear intuitively to be sensible (counterfactual) reasoning. We discussed the self-similar upscale cascade ("meteorological butterfly effect") as a means of overcoming some of the objections to marrying chaos theory and quantum theory. A combinatoric idealisation of the self-similar upscale cascade was then constructed which mimicked complex numbers - the latter playing an essential role in quantum theory's Schrodinger equation.
This construction was used to suggest a reformulation of quantum theory where a universe of \(N\) elemental quantum systems was represented by a set of \(N\) sequences of binary values. Much as a gene's DNA can be thought of as encoding properties of the organism as a whole, it was suggested that self-similar families of these representations of \(\sqrt{-1}\) could encode the intricacy of quantum-theoretic entanglement relationships in compound objects. In conclusion, a case has been made, motivated by nonlinear meteorological thinking, that God does not (or at least need not) play dice, that complex-numbered undead cats do not stalk the earth unobserved, that the universe is not continually splitting into multiple copies, and that the world in which we live, whilst profoundly holistic, does not exhibit non-local causality, or "spooky action at a distance".

## Acknowledgements

The research that led to the work described in this paper has benefited from discussions both with philosophically-minded meteorological colleagues, and with quantum experts from mathematics, philosophy and theoretical physics departments in Europe and the United States. Some of these are listed in (Palmer, 2004). I am grateful to my meteorological colleagues for numerous helpful discussions. I am also grateful to the real experts for taking the ideas of a professional meteorologist seriously, and most especially for allowing me to present my results in departmental seminars, workshops and conferences. Without this exposure, with all the scrutiny that it implies, I would not have had the confidence to write this paper.

## Appendix A Glossary of Terms

#### Bell's Theorem

Based on correlations between measurements on ensembles of pairs of entangled quantum particles. For a large class of putative theories which are locally causal, Bell's theorem requires these correlations to satisfy a certain inequality. The inequality is violated experimentally. As a result, Bell's theorem is usually cited as the fundamental reason Einstein was wrong to believe that non-local causality was not a fundamental feature of quantum physics.

#### Complex numbers and quaternions

The continuum field of complex numbers is widely used in dynamical meteorology. Quaternions are a generalisation of complex numbers and can be related to rotations in 3D physical space. In this paper, both complex numbers and quaternions are represented in terms of self-similar permutation operators acting on the elements of binary sequences.

#### Counterfactual Reasoning

Reasoning about the consequences of something which did not happen, but which intuition suggests might have happened, eg would it have been sunny in London today if a certain butterfly in Amazonia, which in reality did flap its wings, had in fact not flapped its wings? An implicit assumption in the derivation of Bell's theorem is that counterfactual questions necessarily have definite answers. We use chaos theory to cast doubt on the validity of counterfactual reasoning.

#### Explicate/Implicate Order

A concept put forward by the 20th century quantum theorist David Bohm, to describe the holistic structure of the physical universe. The implicate order describes some implicit intertwining of the degrees of freedom of a system, not apparent in a partial set of observations. Chaos theory is a manifestation of these ideas - a partial sequence of values of a chaotic variable exhibits random explicate order.
With a large-enough sequence and sophisticated algorithms, the encoded implicate order - the underlying fractal attractor - can be revealed.

#### Finite-time predictability horizon

An apparent property of 3D fluid equations, at least in the inviscid limit, whereby after some finite time, the large-scale evolution of the system is sensitive to arbitrarily small-scale features in the initial state.

#### Indeterminacy/"Does God Play Dice?"

One of Einstein's concerns about quantum theory: that during the measurement process, the quantum state does not appear to evolve according to deterministic laws.

#### Low-Order Chaos/Fractal Attractor/Self-Similarity

A nonlinear deterministic dynamical system governed by a small number of differential (or difference) equations can give rise to apparent randomness. This apparent randomness is associated with the existence of a fractal attractor in the system's state space. An attractor is a set of points to which state-space trajectories of the underlying dynamical equations are attracted. The fact that the set is fractal means that it has fine-scale structure which persists under repeated magnification of the set. Such fractal structure is said to be self-similar. The phenomenon is known as chaos, about which volumes have been written.

#### Many-worlds interpretation

Some quantum experts (eg David Deutsch, who developed much of the seminal theory for quantum computing) maintain that each of the possible quantum measurement outcomes (such as the alive/dead cat alternatives in Fig 1) is realised in a different parallel universe.

#### Meteorological Butterfly Effect

A description of the loss of predictability in terms of the upscale cascade, rather than low-order chaos. Compared with the volumes written about low-order chaos, relatively little has been written about the meteorological butterfly effect.

#### Non-computability

A class of (mathematically well-posed) propositions is referred to as non-computable if there is no algorithm that can be guaranteed to determine the truth or falsehood of each member of the class. The fractal Mandelbrot set is non-computable. The mathematical physicist Roger Penrose has speculated that a more complete formulation of quantum theory must have non-computable elements.

#### Non-local causality/"Spooky Action at a Distance"

Einstein's principal concern about quantum theory: that a distant measurement could instantaneously cause a change in the state of a quantum system "here".

#### Schrodinger Equation and Hilbert Space

The Schrodinger equation is a linear dynamical equation which describes the evolution of the quantum wave-function, at least during periods where the wave-function is not being "measured"; the latter being described conventionally in terms of a random "reduction" process. The state space in which wave-function solutions of the Schrodinger equation reside is known as a Hilbert Space. Because of the existence of \(\sqrt{-1}\) in the Schrodinger equation, the Hilbert Space is necessarily complex. In quantum theory, the Hilbert Space dimension increases exponentially with the number of quantum particles considered. Any alternative theory must account for not only the complex nature of Hilbert space, but also this exponential "vastness".

#### Upscale cascade

The nonlinear mechanism whereby uncertainty propagates, eg in fluid-dynamical equations, from small scales to large scales.
The mathematical characteristics of the cascade have power-law structure, suggesting scale-invariant (ie self-similar) properties.

## References

* Bell, J.S., 1993: Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press.
* Blum, L., Cucker, F., Shub, M. and Smale, S., 1998: Complexity and Real Computation. Springer.
* Bohm, D., 1980: Wholeness and the Implicate Order. Routledge and Kegan Paul, London.
* Charney, J.G., 1947: The dynamics of long waves in a baroclinic westerly current. J. Meteorol., 4, 135-162.
* Deutsch, D., 1998: The Fabric of Reality. Penguin Books.
* Jahnel, J., 2005: When does the (co)sine of a rational angle give a rational number? [http://www.uni-math.gwdg.de/jahnel/linkstopaperse.html](http://www.uni-math.gwdg.de/jahnel/linkstopaperse.html)
* Lorenz, E.N., 1963: Deterministic non-periodic flow. J. Atmos. Sci., 20, 130-141.
* Lorenz, E.N., 1969: The predictability of a flow which possesses many scales of motion. Tellus, 21, 289-307.
* Lorenz, E.N., 1993: The Essence of Chaos. University of Washington Press.
* Palmer, T.N., 2004: A granular permutation-based representation of complex numbers and quaternions: elements of a possible realistic quantum theory. Proc. Roy. Soc. A, 460, 1039-1055.
* Palmer, T.N., A. Alessandri, U. Andersen, P. Canteloube, M. Davey, P. Delelecuse, M. Deque, E. Diez, F.J. Doblas-Reyes, H. Feddersen, R. Graham, S. Gualdi, J.-F. Gueremy, R. Hagedorn, M. Hoshen, N. Keenlyside, M. Latif, A. Lazar, E. Maisonnave, V. Marletto, A.P. Morse, B. Orfila, P. Rogel, J.-M. Terres and M.C. Thomson, 2004: Development of a European Multi-Model Ensemble System for Seasonal to Inter-Annual Prediction. Bull. Amer. Meteor. Soc., 85, 853-872.
* Penrose, R., 1989: The Emperor's New Mind. Oxford University Press.
* Penrose, R., 1994: The Road to Reality. Jonathan Cape, London.
* Percival, I., 1998: Quantum State Diffusion. Cambridge University Press.
* Rae, A., 1986: Quantum Physics: Illusion or Reality? Cambridge University Press.
* Shnirelman, A., 1997: On the nonuniqueness of weak solutions to the Euler equations. Commun. Pure Appl. Math., 50, 1260-1286.
* Stewart, I., 1997: Does God Play Dice? Penguin Books.
* Stewart, I., 2004: In the lap of the gods. New Scientist, 25 September 2004, 29-33.
* Vallis, G.K., 1985: Remarks on the predictability properties of two- and three-dimensional flow. Q. J. Meteorol. Soc., 111, 1039-1047.
Meteorology is a wonderfully interdisciplinary subject. But can nonlinear thinking about predictability of weather and climate contribute usefully to issues in fundamental physics? Although this might seem extremely unlikely at first sight, an attempt is made to answer the question positively. The long-standing conceptual problems of quantum theory are outlined, focussing on indeterminacy\({}^{1}\) and non-local causality; problems that led Einstein to reject quantum mechanics as a fundamental theory of physics. These conceptual problems are considered in the light of both low-order chaos and the more radical (and less well known) paradigm of the finite-time predictability horizon associated with the self-similar upscale cascade of uncertainty in a turbulent fluid. The analysis of these dynamical systems calls into doubt one of the key pieces of logic used in quantum non-locality theorems: that of counterfactual reasoning. By considering an idealisation of the upscale cascade (which provides a novel representation of complex numbers and quaternions), a case is made for reinterpreting the quantum wave-function as a set of intricately-encoded binary sequences. In this reinterpretation, it is argued that the quantum world has no need for dice-playing deities, undead cats, multiple universes, or "spooky action at a distance".

Footnote 1: A glossary of some of the key terms used in the paper is given in the appendix.
# Consistency of the Adiabatic Theorem M.S. Sarandy, L.-A. Wu, D.A. Lidar Chemical Physics Theory Group, Department of Chemistry, and Center for Quantum Information and Quantum Control, University of Toronto, 80 St. George Street, Toronto, Ontario, M5S 3H6, Canada ## I Introduction The adiabatic theorem [1; 2; 3; 4] is one of the oldest and most widely used general tools in quantum mechanics. The theorem concerns the evolution of systems subject to slowly varying Hamiltonians. Roughly, its content is that if a state is an instantaneous eigenstate of a sufficiently slowly varying \\(H\\) at one time then it will remain an eigenstate at later times, while its eigenenergy evolves continuously. When the slowness assumption is relaxed transitions become weakly allowed [5; 6; 7; 8]. The role of the adiabatic theorem in the study of slowly varying quantum mechanical systems spans a vast array of fields and applications, such as the Landau-Zener theory of energy level crossings in molecules [9; 10], quantum field theory [11], and Berry's phase [12]. In recent years geometric phases [13] have been proposed to perform quantum information processing [14; 15; 16], with adiabaticity assumed in a number of schemes for geometric quantum computation (e.g., [17; 18; 19; 20]). Additional interest in adiabatic processes has arisen in connection with the concept of adiabatic quantum computing, in which slowly varying Hamiltonians appear as a promising mechanism for the design of new quantum algorithms and even as an alternative to the conventional quantum circuit model of quantum computation [21; 22; 23]. More recently, in Ref. [24], the adiabatic theorem was generalized to the case of open quantum systems, i.e., quantum systems coupled to an external environment. Instead of making use of eigenstates of the Hamiltonian, adiabaticity is defined through the Jordan canonical form of the generator of the master equation governing the dynamics of the system. This new framework allowed for the derivation of an adiabatic approximation which includes the case of systems evolving in the presence of noise. This issue is particularly important in the context of quantum information processing, where environment induced decoherence is viewed as a fundamental obstacle on the path to the construction of quantum computers (e.g., [25]). The aim of this paper is to review the adiabatic approximation in quantum mechanics for both closed and open quantum systems as well as to point out how an incorrect manipulation of the adiabatic theorem can yield an inconsistent result. Indeed, in a recent paper entitled \"Inconsistency in the application of the adiabatic theorem\" [26] the authors argue that there may be an inconsistency in the adiabatic theorem for closed quantum systems. We show here how this inconsistency can be resolved. Related discussions can be found in Refs. [27; 28; 29]. ## II The quantum adiabatic approximation for closed systems ### Condition on the Hamiltonian Let us begin by reviewing the adiabatic approximation in closed quantum systems, which evolve unitarily through a time-dependent Schrodinger equation \\[H(t)\\left|\\psi(t)\\right\\rangle=i\\,|\\dot{\\psi}(t)\\rangle, \\tag{1}\\] where \\(H(t)\\) denotes the Hamiltonian and \\(\\left|\\psi(t)\\right\\rangle\\) is a quantum state in a \\(D\\)-dimensional Hilbert space. We use units where \\(\\hbar=1\\). For simplicity we assume that the spectrum of \\(H(t)\\) is entirely discrete and nondegenerate. 
Thus we can define an instantaneous basis of eigenenergies by \[H(t)\left|n(t)\right\rangle=E_{n}(t)\left|n(t)\right\rangle, \tag{2}\] with the set of eigenvectors \(|n(t)\rangle\) chosen to be orthonormal. In this simplest case, where to each energy level there corresponds a unique eigenstate, we can _define adiabaticity as the regime associated with an independent evolution of the instantaneous eigenvectors of_ \(H(t)\). This means that instantaneous eigenstates at one time evolve continuously to the corresponding eigenstates at later times, and that their corresponding eigenenergies do not cross. In particular, if the system begins its evolution in a particular eigenstate \(|n(0)\rangle\) then it will evolve to the instantaneous eigenstate \(|n(t)\rangle\) at a later time \(t\), without any transition to other energy levels. It is conceptually useful to point out that the relationship between slowly varying Hamiltonians and adiabatic behavior can be demonstrated directly from a simple manipulation of the Schrodinger equation: recall that \(H(t)\) can be diagonalized by a unitary similarity transformation \[H_{d}(t)=U^{-1}(t)\,H(t)\,U(t), \tag{3}\] where \(H_{d}(t)\) denotes the diagonalized Hamiltonian and \(U(t)\) is a unitary transformation. Multiplying Eq. (1) by \(U^{-1}(t)\) and using Eq. (3) we obtain \[H_{d}\,|\psi\rangle_{d}=i\,|\dot{\psi}\rangle_{d}-i\,\dot{U}^{-1}|\psi\rangle, \tag{4}\] where \(|\psi\rangle_{d}\equiv U^{-1}|\psi\rangle\) is the state of the system in the basis of eigenvectors of \(H(t)\). Upon considering that \(H(t)\) changes slowly in time, i.e. \(dH(t)/dt\approx 0\), we may also assume that the unitary transformation \(U(t)\) and its inverse \(U^{-1}(t)\) are slowly varying operators, yielding \[H_{d}(t)\,|\psi(t)\rangle_{d}=i\,|\dot{\psi}(t)\rangle_{d}. \tag{5}\] Thus, since \(H_{d}(t)\) is diagonal, the system evolves separately in each energy sector, ensuring the validity of the adiabatic approximation. Now let \[g_{nk}(t)\equiv E_{n}(t)-E_{k}(t) \tag{6}\] be the energy gap between levels \(n\) and \(k\) and let \(T\) be the total evolution time. One may then state _a general validity condition for adiabatic behavior_ as follows: \[\max_{0\leq t\leq T}\left|\frac{\langle k|\dot{H}|n\rangle}{g_{nk}}\right|\,\ll\,\min_{0\leq t\leq T}|g_{nk}|\,. \tag{7}\] Note that the left-hand side of Eq. (7) has dimensions of frequency and hence must be compared to the relevant physical frequency scale, which can be proved to be given by the gap \(g_{nk}\) [4; 30]. [In fact, Eq. (7) will be seen to be a direct consequence of the adiabatic condition derived in Subsection II.2.] The interpretation of the adiabaticity condition (7) is that for all pairs of energy levels, the expectation value of the time-rate-of-change of the Hamiltonian, in units of the gap, must be small compared to the gap. For a discussion of the adiabatic regime when there is no gap in the energy spectrum see Ref. [31]. In order to obtain Eq. (7), let us expand \(|\psi(t)\rangle\) in terms of the basis of instantaneous eigenvectors of \(H(t)\): \[|\psi(t)\rangle=\sum_{n=1}^{D}a_{n}(t)\,e^{-i\int_{0}^{t}dt^{\prime}E_{n}(t^{\prime})}\,|n(t)\rangle, \tag{8}\] with \(a_{n}(t)\) being complex functions of time. Substitution of Eq. (8) into Eq.
(1) and multiplying the result by \(\langle k(t)|\), we have \[\dot{a}_{k}=-\sum_{n}a_{n}\langle k|\dot{n}\rangle\,e^{-i\int_{0}^{t}dt^{\prime}g_{nk}(t^{\prime})}. \tag{9}\] A useful expression for \(\langle k|\dot{n}\rangle\), for \(k\neq n\), can be found by taking a time derivative of Eq. (2) and multiplying the resulting expression by \(\langle k|\), which reads \[\langle k|\dot{n}\rangle=\frac{\langle k|\dot{H}|n\rangle}{g_{nk}}\quad(n\neq k). \tag{10}\] Therefore Eq. (9) can be written as \[\dot{a}_{k}=-a_{k}\langle k|\dot{k}\rangle-\sum_{n\neq k}a_{n}\frac{\langle k|\dot{H}|n\rangle}{g_{nk}}\,e^{-i\int_{0}^{t}dt^{\prime}g_{nk}(t^{\prime})}. \tag{11}\] Adiabatic evolution is ensured if the coefficients \(a_{k}(t)\) evolve independently from each other, i.e., if their dynamical equations do not couple. As is apparent from Eq. (11), this requirement is fulfilled when the condition (7) is imposed. In the case of a degenerate spectrum of \(H(t)\), Eq. (10) holds only for eigenstates \(|k\rangle\) and \(|n\rangle\) for which \(E_{n}\neq E_{k}\). Taking into account this modification in Eq. (11), it is not difficult to see that the adiabatic approximation generalizes to the statement that each degenerate eigenspace of \(H(t)\), instead of individual eigenvectors, has independent evolution, whose validity conditions, given by Eq. (7), are to be considered over eigenvectors with distinct energies. Thus, in general one can define adiabatic dynamics of closed quantum systems as follows:

**Definition II.1**: _A closed quantum system is said to undergo adiabatic dynamics if its Hilbert space can be decomposed into decoupled Schrodinger-eigenspaces with distinct, time-continuous, and non-crossing instantaneous eigenvalues of \(H(t)\)._

### Condition on the total evolution time

A very useful alternative is to express the adiabaticity condition in terms of the total evolution time \(T\). We shall consider for simplicity a nondegenerate \(H(t)\); the generalization to the degenerate case is also possible. Taking the initial state as the eigenvector \(|m(0)\rangle\), with \(a_{m}(0)=1\), the condition for adiabatic evolution can be stated as follows: \[T\gg\frac{\mathcal{F}}{\mathcal{G}^{2}}, \tag{12}\] where \[\mathcal{F}=\max_{0\leq s\leq 1}\left|\langle k(s)|\frac{dH(s)}{ds}|m(s)\rangle\right|,\quad\mathcal{G}=\min_{0\leq s\leq 1}\left|g_{mk}(s)\right|. \tag{13}\] Eq. (12) can be interpreted as stating that the total evolution time must be much larger than the norm of the time-derivative of the Hamiltonian divided by the square of the energy gap. It gives an important validity condition for the adiabatic approximation, which has been used, e.g., to determine the running time required by adiabatic quantum algorithms [21; 22; 23]. By using the time variable transformation (16), one can show that Eq. (12) is indeed equivalent to the adiabatic condition (7) on the Hamiltonian. To derive Eq. (12), let us rewrite Eq. (11) as follows [32]: \[e^{i\gamma_{k}(t)}\,\frac{\partial}{\partial t}\left(a_{k}(t)\,e^{-i\gamma_{k}(t)}\right)=-\sum_{n\neq k}a_{n}\frac{\langle k|\dot{H}|n\rangle}{g_{nk}}\,e^{-i\int_{0}^{t}dt^{\prime}g_{nk}(t^{\prime})}, \tag{14}\] where \(\gamma_{k}(t)\) denotes Berry's phase [12] associated with the state \(|k\rangle\): \[\gamma_{k}(t)=i\int_{0}^{t}dt^{\prime}\langle k(t^{\prime})|\dot{k}(t^{\prime})\rangle.
\\tag{15}\\] Now let us define a normalized time \\(s\\) through the variable transformation \\[t=sT,\\ \\ 0\\leq s\\leq 1. \\tag{16}\\] Then, by performing the change \\(t\\to s\\) in Eq. (14) and integrating we obtain \\[a_{k}(s)\\,e^{-i\\gamma_{k}(s)}=a_{k}(0)-\\sum_{n\ eq k}\\int_{0}^{s}ds^{\\prime} \\frac{F_{nk}(s^{\\prime})}{g_{nk}(s^{\\prime})}e^{-iT\\int_{0}^{s^{\\prime}}ds^{ \\prime\\prime}g_{nk}(s^{\\prime\\prime})}, \\tag{17}\\] where \\[F_{nk}(s)=a_{n}(s)\\,\\langle k(s)|\\frac{dH(s)}{ds}|n(s)\\rangle\\,e^{-i\\gamma_{k }(s)}. \\tag{18}\\]However, for an adiabatic evolution as defined above, the coefficients \\(a_{n}(s)\\) evolve without any mixing, which means that \\(a_{n}(s)\\approx a_{n}(0)\\,e^{i\\gamma_{n}(s)}\\). Therefore \\[F_{nk}(s)=a_{n}(0)\\,\\langle k(s)|\\frac{dH(s)}{ds}|n(s)\\rangle\\,e^{-i(\\gamma_{k} (s)-\\gamma_{n}(s))}. \\tag{19}\\] In order to arrive at a condition on \\(T\\) it is useful to separate out the fast oscillatory part from Eq. (17). Thus, the integrand in Eq. (17) can be rewritten as \\[\\frac{F_{nk}(s^{\\prime})}{g_{nk}(s^{\\prime})}e^{-iT\\int_{0}^{s^{\\prime}}ds^{ \\prime\\prime}g_{nk}(s^{\\prime\\prime})}=\\frac{i}{T}\\left[\\frac{d}{ds^{\\prime}} \\left(\\frac{F_{nk}(s^{\\prime})}{g_{nk}^{2}(s^{\\prime})}e^{-iT\\int_{0}^{s^{ \\prime}}ds^{\\prime\\prime}g_{nk}(s^{\\prime\\prime})}\\right)-\\,e^{-iT\\int_{0}^{s^ {\\prime}}ds^{\\prime\\prime}g_{nk}(s^{\\prime\\prime})}\\frac{d}{ds^{\\prime}}\\left( \\frac{F_{nk}(s^{\\prime})}{g_{nk}^{2}(s^{\\prime})}\\right)\\right]. \\tag{20}\\] Substitution of Eq. (20) into Eq. (17) results in \\[a_{k}(s)\\,e^{-i\\gamma_{k}(s)}=a_{k}(0)+\\frac{i}{T}\\sum_{n\ eq k}\\left(\\frac{F_ {nk}(0)}{g_{nk}^{2}(0)}-\\frac{F_{nk}(s)}{g_{nk}^{2}(s)}e^{-iT\\int_{0}^{s}ds^{ \\prime}g_{nk}(s^{\\prime})}+\\,\\int_{0}^{s}ds^{\\prime}\\,e^{-iT\\int_{0}^{s^{ \\prime}}ds^{\\prime\\prime}g_{nk}(s^{\\prime\\prime})}\\frac{d}{ds^{\\prime}}\\frac{ F_{nk}(s^{\\prime})}{g_{nk}^{2}(s^{\\prime})}\\right). \\tag{21}\\] A condition for the adiabatic regime can be obtained from Eq. (21) if the last integral vanishes for large \\(T\\). Let us assume that, as \\(T\\rightarrow\\infty\\), the energy difference remains nonvanishing. We further assume that \\(d\\{F_{nk}(s^{\\prime})/g_{nk}^{2}(s^{\\prime})\\}/ds^{\\prime}\\) is integrable on the interval \\([0,s]\\). Then it follows from the Riemann-Lebesgue lemma [33] that the last integral in Eq. (21) vanishes in the limit \\(T\\rightarrow\\infty\\) (due to the fast oscillation of the integrand) [34]. What is left are therefore only the first two terms in the sum over \\(n\ eq k\\) of Eq. (21). Thus, a general estimate of the time rate at which the adiabatic regime is approached can be expressed by \\[T\\gg\\frac{F}{g^{2}}, \\tag{22}\\] where \\[F=\\max_{0\\leq s\\leq 1}|a_{n}(0)\\,\\langle k(s)|\\frac{dH(s)}{ds}|n(s)\\rangle|, \\hskip 14.226378ptg=\\min_{0\\leq s\\leq 1}|g_{nk}(s)|\\,, \\tag{23}\\] with max and min taken over all \\(k\\) and \\(n\\). Eq. (12) is then obtained as the special case when the system starts its evolution in a particular eigenstate of \\(H(t)\\). ### Higher-order corrections to the adiabatic approximation When the Hamiltonian of a quantum system changes slowly, but not extremely slowly, the degenerate eigenspaces of \\(H(t)\\) (or individual eigenvectors in the case of nondegenerate spectrum) will not evolve completely independently from each other and, therefore, the dynamical equation (9) will weakly couple distinct eigenspaces of \\(H(t)\\). 
Then, for non-extremely slowly varying Hamiltonians, the adiabatic solution is actually a zeroth-order approximation and higher-order corrections must be considered. Some higher-order adiabatic approximation methods have been proposed [5; 6; 7; 8]. Here we shall review, for the non-degenerate case, the method proposed by Wu in Ref. [8]. Let us begin by expanding the state vector \\(|\\psi(t)\\rangle\\) in the instantaneous eigenbasis, as in Eq. (8). Then, by using the normalized time \\(s\\) introduced in Eq. (16) we obtain the following matrix form for the Schrodinger equation \\[\\frac{d\\psi(s)}{ds}=K(s)\\psi(s), \\tag{24}\\] where \\(K(s)\\) is an anti-Hermitian matrix with elements \\[K_{mn}(s)=-\\,\\langle m(s)|\\,\\frac{d}{ds}\\,|n(s)\\rangle\\exp\\left(iT\\int_{0}^{s} ds^{\\prime}\\,g_{mn}(s^{\\prime})\\right). \\tag{25}\\] The matrix \\(K(s)\\) can be separated into a diagonal matrix \\(D(s)\\) and an off-diagonal matrix \\(O(s)\\), yielding \\[K(s)=D(s)+O(s). \\tag{26}\\] The evolution operator \\(U(s)\\) for the system satisfies the equation \\[\\frac{dU(s)}{ds}=K(s)U(s),\\quad(\\mbox{with}\\,\\,\\,U(0)=1) \\tag{27}\\]which, after integration and use of Eq. (26), becomes \\[U(s)=1+\\int_{0}^{s}ds_{1}\\left[D(s_{1})+O(s_{1})\\right]+\\int_{0}^{s}ds_{1}\\int_{0} ^{s_{1}}ds_{2}\\left[D(s_{1})+O(s_{1})\\right][D(s_{2})+O(s_{2})]+\\,\\dots \\tag{28}\\] Now let us define \\[U^{(0)}(s)=1+\\int_{0}^{s}ds_{1}\\,D(s_{1})+\\int_{0}^{s}ds_{1}\\int_{0}^{s_{1}}ds_ {2}\\,D(s_{1})D(s_{2})+\\,\\dots, \\tag{29}\\] which involves only the diagonal parts. Moreover, for \\(n>0\\), we denote \\(U^{(n)}(s)\\) as the sum of all the integrals with \\(n\\) off-diagonal \\(O(s)\\) factors in the integrand. Therefore the evolution operator can be expanded in powers of the off-diagonal matrices \\(O(s)\\) as \\(U(s)=\\sum_{n=0}U^{(n)}(s)\\). It can be shown [8] that the \\(n^{\\rm th}\\) term \\(U^{(n)}(s)\\) can be expressed through the lower order term \\(U^{(n-1)}(s)\\) by means of the recurrence equation \\[U^{(n)}(s)=\\int_{0}^{s}ds^{\\prime}\\,U^{(0)}(s^{\\prime})\\,O(s^{\\prime})\\,U^{(n- 1)}(s^{\\prime}). \\tag{30}\\] The expression above means that, by knowing the zeroth-order evolution operator \\(U^{(0)}(s)\\), which exactly yields the adiabatic approximation, one can obtain \\(U^{(1)}(s)\\), and then \\(U^{(2)}(s)\\) and so on. The adiabatic case corresponds to no transitions, while a correction \\(U^{(n)}(s)\\) of order \\(n\\geq 1\\) implies the existence of \\(n\\) transitions between different energy levels [8]. From this perspective one can interpret the adiabatic approximation as the zeroth order term in a perturbation theory in the number of transitions between energy levels connected by the time-varying Hamiltonian. ## III The quantum adiabatic approximation for open quantum systems In this section we review our recently introduced generalization of the adiabatic theorem to the case of open quantum systems [24]. The motivations for considering such a generalization are many. The most fundamental is that the concept of a closed system is, of course, an idealization, and in reality all experimentally accessible systems are open. Thus applications of the adiabatic theorem for open systems include, among others, geometric phases (where open system effects have received considerable recent attention, e.g., Refs. [35]), quantum information processing, and molecular dynamics in condensed phases. 
In the following we first introduce notation for open systems, then discuss the generalized adiabatic theorem. ### The dynamics of open quantum system Consider a quantum system \\(S\\) coupled to an environment, or bath \\(B\\) (with respective Hilbert spaces \\({\\cal H}_{S},{\\cal H}_{B}\\)), evolving unitarily under the total system-bath Hamiltonian \\(H_{SB}\\). The exact system dynamics is given by tracing over the bath degrees of freedom [36] \\[\\rho(t)={\\rm Tr}_{B}[U(t)\\rho_{SB}(0)U^{\\dagger}(t)], \\tag{31}\\] where \\(\\rho(t)\\) is the system state, \\(\\rho_{SB}(0)=\\rho(0)\\otimes\\rho_{B}(0)\\) is the initially uncorrelated system-bath state, and \\(U(t)={\\cal T}{\\rm exp}(-i\\int_{0}^{t}H_{SB}(t^{\\prime})dt^{\\prime})\\) (\\({\\cal T}\\) denotes time-ordering). Such an evolution is completely positive and trace preserving [36; 37; 38]. Under certain approximations, it is possible to convert Eq. (31) into the convolutionless form \\[\\dot{\\rho}(t)\\ =\\ {\\cal L}(t)\\rho(t). \\tag{32}\\] An important example is \\[\\dot{\\rho}(t)\\ =\\ -i\\left[H(t),\\rho(t)\\right]+\\frac{1}{2}\\sum_{i=1}^{N}\\left([ \\Gamma_{i}(t),\\rho(t)\\Gamma_{i}^{\\dagger}(t)]+[\\Gamma_{i}(t)\\rho(t),\\Gamma_{i }^{\\dagger}(t)]\\right). \\tag{33}\\] Here \\(H(t)\\) is the time-dependent effective Hamiltonian of the open system and \\(\\Gamma_{i}(t)\\) are time-dependent operators describing the system-bath interaction. In the literature, Eq. (33) with time-_in_dependent operators \\(\\Gamma_{i}\\) is usually referred to as the Markovian dynamical semigroup, or Lindblad equation [36; 38; 39; 40] [see also Ref. [41] for a simple derivation of Eq. (33) from Eq. (31)]. However, the case with time-dependent coefficients is also permissible under certain restrictions [42]. The Lindblad equation requires the assumption of a Markovian bath with vanishing correlation time. Equation (32) can be more general; for example, it applies to the case of non-Markovian convolutionless master equations studied in Ref. [43]. Here we will consider the class of convolutionless master equations (32). In a slight abuse of nomenclature, we will henceforth refer to the time-dependent generator \\(\\mathcal{L}(t)\\) as the Lindblad superoperator, and the \\(\\Gamma_{i}(t)\\) as Lindblad operators. Conceptually, the difficulty in the transition of an adiabatic approximation from closed to open quantum systems is that the notion of Hamiltonian eigenstates is lost, since the Lindblad superoperator - the generalization of the Hamiltonian - cannot in general be diagonalized. It is then not a priori clear what should take the place of the adiabatic eigenstates. This difficulty was solved in Ref. [24] by introducing the idea that this role is played by _adiabatic Jordan blocks of the Lindblad superoperator_. The Jordan canonical form [44], with its associated left- and right-eigenvectors, is in this context the natural generalization of the diagonalization of the Hamiltonian. In this direction, it is convenient to work in the superoperator formalism, wherein the density matrix is represented by a \\(D^{2}\\)-dimensional \"coherence vector\" \\[\\left|\\rho\\right\\rangle\\rangle=\\left(\\begin{array}{cccc}\\rho_{1}&\\rho_{2}& \\cdots&\\rho_{D^{2}}\\end{array}\\right)^{t}, \\tag{34}\\] and the Lindblad superoperator \\(\\mathcal{L}\\) becomes a \\(D^{2}\\times D^{2}\\)-dimensional supermatrix [38]. We use the double bracket notation to indicate that we are not working in the standard Hilbert space of state vectors. 
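To illustrate how the supermatrix form of \(\mathcal{L}\) arises in practice, the sketch below builds a \(4\times 4\) matrix representation for a single qubit with effective Hamiltonian \(H=\frac{\omega}{2}\sigma_{z}\) and one Lindblad operator \(\Gamma=\sqrt{\gamma}\,\sigma_{-}\). Both the model and the column-stacking vectorization convention are illustrative assumptions of ours rather than the construction of Ref. [24]; any fixed vectorization is related to the coherence-vector representation by a similarity transformation, so the eigenvalues and the Jordan structure discussed below are unaffected.

```python
# Minimal sketch: matrix representation of the Lindblad superoperator for a
# single qubit, using column-stacking vectorization vec(A rho B) = (B^T kron A) vec(rho).
import numpy as np

def lindblad_supermatrix(H, lindblad_ops):
    """Return the D^2 x D^2 supermatrix generating d vec(rho)/dt."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))        # -i[H, rho]
    for G in lindblad_ops:
        GdG = G.conj().T @ G
        L += np.kron(G.conj(), G) \
             - 0.5 * (np.kron(I, GdG) + np.kron(GdG.T, I))   # dissipator
    return L

omega, gamma = 1.0, 0.2
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)          # qubit decay operator (illustrative)
L = lindblad_supermatrix(0.5 * omega * sz, [np.sqrt(gamma) * sm])

# The eigenvalues of L are the lambda_alpha entering the Jordan blocks below;
# expected here: 0, -gamma, and -gamma/2 +/- i*omega.
print(np.round(np.linalg.eigvals(L), 6))
```

For this illustrative model the four eigenvalues are distinct, so every Jordan block introduced in the next subsection is one-dimensional; degenerate or defective generators would produce higher-dimensional blocks.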
More generally, coherence vectors live in Hilbert-Schmidt space: a state space of linear operators endowed with an inner product that can be defined, for general vectors \(u\) and \(v\), as \[(u,v)\equiv\langle\langle u|v\rangle\rangle\equiv\frac{1}{\mathcal{N}}\text{Tr}\left(u^{\dagger}v\right), \tag{35}\] where \(\mathcal{N}\) is a normalization factor. Adjoint elements \(\langle\langle v|\) in the dual state space are given by row vectors defined as the transpose conjugate of \(\left|v\right\rangle\rangle\): \(\langle\langle v|=(v_{1}^{*},v_{2}^{*},\ldots,v_{D^{2}}^{*})\). A density matrix can then be expressed as a discrete superposition of states over a complete basis in this vector space, with appropriate constraints on the coefficients so that the requirements of Hermiticity, positive semi-definiteness and unit trace of \(\rho\) are observed. Thus, representing the density operator in general as a coherence vector, we can rewrite Eq. (32) in a superoperator language as \[\mathcal{L}(t)\left|\rho(t)\right\rangle\rangle=\left|\dot{\rho}(t)\right\rangle\rangle, \tag{36}\] where \(\mathcal{L}\) is now a supermatrix. This master equation generates non-unitary evolution, since \(\mathcal{L}(t)\) is non-Hermitian and hence generally non-diagonalizable. However, one can always transform \(\mathcal{L}\) into the Jordan canonical form [44], where it has a block-diagonal structure. This is achieved via the similarity transformation \[\mathcal{L}_{J}(t)=S^{-1}(t)\,\mathcal{L}(t)\,S(t), \tag{37}\] where \(\mathcal{L}_{J}(t)=\text{diag}(J_{1},\ldots,J_{m})\) denotes the Jordan form of \(\mathcal{L}(t)\). The Jordan blocks \(J_{\alpha}\), of dimension \(n_{\alpha}\), are always of the form: \[J_{\alpha}=\left(\begin{array}{ccccc}\lambda_{\alpha}&1&0&\cdots&0\\ 0&\lambda_{\alpha}&1&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&0&\lambda_{\alpha}&1\\ 0&\cdots&\cdots&0&\lambda_{\alpha}\end{array}\right). \tag{38}\] To each Jordan block are associated a left and a right eigenvector with eigenvalue \(\lambda_{\alpha}\), which can in general be complex. The number \(m\) of Jordan blocks is given by the number of linearly independent eigenstates of \(\mathcal{L}(t)\), with each eigenstate associated to a different block \(J_{\alpha}\). Since \(\mathcal{L}(t)\) is non-Hermitian, we generally do not have a basis of eigenstates, whence some care is required in order to find a basis for describing the density operator. It can be shown [24] that instantaneous right \(\left\{|\mathcal{D}_{\beta}^{(j)}(t)\rangle\rangle\right\}\) and left \(\left\{\langle\langle\mathcal{E}_{\alpha}^{(i)}(t)|\right\}\) bases in the state space of linear operators can always be systematically constructed, with the following suitable features: \(\bullet\) Orthonormality condition: \[\langle\langle\mathcal{E}_{\alpha}^{(i)}(t)|\mathcal{D}_{\beta}^{(j)}(t)\rangle\rangle=\,_{J}\langle\langle\mathcal{E}_{\alpha}^{(i)}|\mathcal{D}_{\beta}^{(j)}\rangle\rangle_{J}=\delta_{\alpha\beta}\delta^{ij}. \tag{39}\] \(\bullet\) Invariance of the Jordan blocks under the action of the Lindblad superoperator: \[\mathcal{L}(t)\,|\mathcal{D}_{\alpha}^{(j)}(t)\rangle\rangle=|\mathcal{D}_{\alpha}^{(j-1)}(t)\rangle\rangle+\lambda_{\alpha}(t)\,|\mathcal{D}_{\alpha}^{(j)}(t)\rangle\rangle.
\\tag{40}\\] \\[\\langle\\langle\\mathcal{E}_{\\alpha}^{(i)}(t)|\\,\\mathcal{L}(t)= \\langle\\langle\\mathcal{E}_{\\alpha}^{(i+1)}(t)|+\\langle\\langle\\mathcal{E}_{ \\alpha}^{(i)}(t)|\\,\\lambda_{\\alpha}(t). \\tag{41}\\] with \\(|\\mathcal{D}_{\\alpha}^{(-1)}\\rangle\\rangle\\equiv 0\\) and \\(\\langle\\langle\\mathcal{E}_{\\alpha}^{(n_{\\alpha})}|\\equiv 0\\). Here the subscripts enumerate Jordan blocks (\\(\\alpha\\in\\{1, ,m\\}\\)), while the superscripts enumerate basis states inside a given Jordan block (\\(i,j\\in\\{0, ,n_{\\alpha}-1\\}\\)). ### Adiabatic conditions for open quantum system Before stating explicitly the conditions for adiabatic evolution, we provide a formal definition of adiabaticity for the case of open systems: **Definition III.1**: _An open quantum system is said to undergo adiabatic dynamics if its Hilbert-Schmidt space can be decomposed into decoupled Lindblad-Jordan-eigenspaces with distinct, time-continuous, and non-crossing instantaneous eigenvalues of \\(\\mathcal{L}(t)\\)._ This definition is a natural extension for open systems of the idea of adiabatic behavior. Indeed, in this case the master equation (32) can be decomposed into sectors with different and separately evolving Lindblad-Jordan eigenvalues. The more familiar notion of closed-system adiabaticity is obtained as a special case when the Lindblad superoperator is Hermitian: in that case it can be diagonalized and the Jordan blocks all become one-dimensional, with corresponding real eigenvalues (that correspond to energy _differences_). The splitting into Jordan blocks of the Lindblad superoperator is achieved through the choice of a basis which preserves the Jordan block structure as, for example, the sets of right \\(\\left\\{|\\mathcal{D}_{\\beta}^{(j)}(t))\\rangle\\right\\}\\) and left \\(\\left\\{\\langle\\langle\\mathcal{E}_{\\alpha}^{(i)}(t)|\\right\\}\\) vectors introduced above. Such a basis generalizes the notion of Schrodinger-eigenvectors. Based on this concept of adiabaticity, we state below (without the proofs) several theorems which have been derived in Ref. [24]. **Theorem III.2**: _A sufficient condition for open quantum system adiabatic dynamics as given in Definition III.1 is:_ \\[\\max_{0\\leq s\\leq 1}\\left|\\sum_{p=1}^{(n_{\\alpha}-i)}\\left(\\prod_{q=1}^{p} \\sum_{k_{q}=0}^{(j-S_{q-1})}\\right)\\frac{\\langle\\langle\\mathcal{E}_{\\alpha}^{ (i+p-1)}|\\frac{d\\mathcal{L}}{ds}|\\mathcal{D}_{\\beta}^{(j-S_{p})}\\rangle\\rangle }{(-1)^{S_{p}}\\,\\omega_{\\beta\\alpha}^{p+S_{p}}}\\right|\\ll 1, \\tag{42}\\] _where \\(s=t/T\\) is the scaled time and_ \\[\\omega_{\\beta\\alpha}(t)=\\lambda_{\\beta}(t)-\\lambda_{\\alpha}(t),\\quad S_{q}= \\sum_{s=1}^{q}k_{s}\\quad\\text{(with }\\,S_{0}=0),\\;\\;\\left(\\prod_{q=1}^{p}\\sum_{k_{q}=0}^{(j-S_{q-1})}\\right)\\equiv \\sum_{k_{1}=0}^{j-S_{0}}\\cdots\\sum_{k_{p}=0}^{j-S_{p-1}}, \\tag{43}\\] _and where \\(\\lambda_{\\beta}\ eq\\lambda_{\\alpha}\\), with \\(i\\) and \\(j\\) denoting arbitrary indices associated to the Jordan blocks \\(\\alpha\\) and \\(\\beta\\), respectively._ The role of the energy differences that appear in the equations for the closed case is played here by the (in general complex-valued) difference between Jordan eigenvalues \\(\\omega_{\\beta\\alpha}\\), while the norm of the time-derivative of the Hamiltonian is replaced here by the norm of \\(\\frac{d\\mathcal{L}}{ds}\\), evaluated over and inside Jordan blocks. Eq. 
(42) gives a means to estimate the accuracy of the adiabatic approximation via the computation of the time derivative of the Lindblad superoperator acting on right and left vectors. The norm used in Eq. (42) can be simplified by considering only the term with maximum absolute value, which results in:

**Corollary III.3**: _A sufficient condition for open quantum system adiabatic dynamics is_ \[\mathcal{N}_{ij}^{n_{\alpha}n_{\beta}}\max_{0\leq s\leq 1}\left|\frac{\langle\langle\mathcal{E}_{\alpha}^{(i+p-1)}|\frac{d\mathcal{L}}{ds}|\mathcal{D}_{\beta}^{(j-S_{p})}\rangle\rangle}{\omega_{\beta\alpha}^{p+S_{p}}}\right|\ll 1, \tag{44}\] _where the \(\max\) is taken for any \(\alpha\neq\beta\), and over all possible values of \(i\in\{0,\ldots,n_{\alpha}-1\}\), \(j\in\{0,\ldots,n_{\beta}-1\}\), and \(p\), with_ \[\mathcal{N}_{ij}^{n_{\alpha}n_{\beta}}=\frac{(n_{\alpha}-i+1+j)!}{(1+j)!(n_{\alpha}-i)!}-1. \tag{45}\] The factor \(\mathcal{N}_{ij}^{n_{\alpha}n_{\beta}}\) given in Eq. (45) is just the number of terms of the sums in Eq. (42). We have included a superscript \(n_{\beta}\), even though there is no explicit dependence on \(n_{\beta}\), since \(j\in\{0,\ldots,n_{\beta}-1\}\). Furthermore, an adiabatic condition for a slowly varying Lindblad superoperator can directly be obtained from Eq. (42), yielding:

**Corollary III.4**: _A simple sufficient condition for open quantum system adiabatic dynamics is \(\dot{\mathcal{L}}\approx 0\)._
Then the adiabatic dynamics in the interval \(0\leq s\leq 1\) occurs if and only if the following time conditions, obtained for each coefficient \(p_{\alpha}^{(i)}(s)\), are satisfied:_ \[T\ \gg\ \max_{0\leq s\leq 1}\Bigg|\sum_{\beta\,|\,\lambda_{\beta}\neq\lambda_{\alpha}}\sum_{j,p}{(-1)^{S_{p}}\left[\frac{V_{\beta\alpha}^{(ijp)}(0)}{\omega_{\beta\alpha}^{p+S_{p}+1}(0)}-\frac{V_{\beta\alpha}^{(ijp)}(s)\,e^{T\,\Omega_{\beta\alpha}(s)}}{\omega_{\beta\alpha}^{p+S_{p}+1}(s)}+\int_{0}^{s}ds^{\prime}\,e^{T\,\Omega_{\beta\alpha}(s^{\prime})}\frac{d}{ds^{\prime}}\frac{V_{\beta\alpha}^{(ijp)}(s^{\prime})}{\omega_{\beta\alpha}^{p+S_{p}+1}(s^{\prime})}\right]}\Bigg|. \tag{49}\]

Theorem III.5 provides a very general condition for adiabaticity in open quantum systems. Equation (49) simplifies in a number of situations.

* Adiabaticity is guaranteed whenever \(V_{\beta\alpha}^{(ijp)}(s)\) vanishes for all \(\lambda_{\alpha}\neq\lambda_{\beta}\).
* Adiabaticity is similarly guaranteed whenever \(V_{\beta\alpha}^{(ijp)}(s)\), which can depend on \(T\) through \(p_{\beta}^{(j)}\), vanishes for all \(\lambda_{\alpha},\lambda_{\beta}\) such that \(\mathrm{Re}(\Omega_{\beta\alpha})>0\) and does not grow faster, as a function of \(T\), than \(\exp(T|\,\mathrm{Re}\Omega_{\beta\alpha}|)\) for all \(\lambda_{\alpha},\lambda_{\beta}\) such that \(\mathrm{Re}(\Omega_{\beta\alpha})<0\).
* When \(\mathrm{Re}(\Omega_{\beta\alpha})=0\) and \(\mathrm{Im}(\Omega_{\beta\alpha})\neq 0\) the integral in inequality (49) vanishes in the infinite time limit due to the Riemann-Lebesgue lemma [33], as in the closed case discussed before. In this case, again, adiabaticity is guaranteed provided \(p_{\beta}^{(j)}(s)\) [and hence \(V_{\beta\alpha}^{(ijp)}(s)\)] does not diverge as a function of \(T\) in the limit \(T\to\infty\).
* When \(\mathrm{Re}(\Omega_{\beta\alpha})>0\), the adiabatic regime can still be reached for large \(T\) provided that \(p_{\beta}^{(j)}(s)\) contains a decaying exponential which compensates for the growing exponential due to \(\mathrm{Re}(\Omega_{\beta\alpha})\).
* Even if there is an overall growing exponential in inequality (49), adiabaticity could take place over a finite time interval \([0,T_{*}]\) and, afterwards, disappear. In this case, which would be an exclusive feature of open systems, the crossover time \(T_{*}\) would be determined by an inequality of the type \(T\gg a+b\exp(cT)\), with \(c>0\). The coefficients \(a,b\) and \(c\) are functions of the system-bath interaction. Whether the latter inequality can be solved clearly depends on the values of \(a,b,c\), so that a conclusion about adiabaticity in this case is model dependent.

A simpler sufficient condition can be derived from Eq. (49) by considering the term with maximum absolute value in the sum.
This procedure leads to the following corollary: **Corollary III.6**: _A sufficient time condition for the adiabatic regime of an open quantum system governed by a Lindblad superoperator \\(\\mathcal{L}(t)\\) is_ \\[T\\ \\gg\\ \\mathcal{M}_{ij}^{n_{\\alpha}n_{\\beta}}\\max_{0\\leq s\\leq 1}\\Bigg{|}\\frac{V_{\\beta\\alpha}^{(ijp)}(0)}{\\omega_{\\beta\\alpha}^{p+S_{p}+1}(0)}-\\frac{V_{\\beta\\alpha}^{(ijp)}(s)\\,e^{T\\,\\Omega_{\\beta\\alpha}(s)}}{\\omega_{\\beta\\alpha}^{p+S_{p}+1}(s)}+\\int_{0}^{s}ds^{\\prime}\\,e^{T\\,\\Omega_{\\beta\\alpha}(s^{\\prime})}\\frac{d}{ds^{\\prime}}\\frac{V_{\\beta\\alpha}^{(ijp)}(s^{\\prime})}{\\omega_{\\beta\\alpha}^{p+S_{p}+1}(s^{\\prime})}\\Bigg{|}, \\tag{50}\\] _where \\(\\max\\) is taken over all possible values of the indices \\(\\lambda_{\\alpha}\\neq\\lambda_{\\beta}\\), \\(i\\), \\(j\\), and \\(p\\), with_ \\[\\mathcal{M}_{ij}^{n_{\\alpha}n_{\\beta}}=\\sum_{\\beta\\,|\\,\\lambda_{\\beta}\\neq\\lambda_{\\alpha}}\\sum_{j=0}^{(n_{\\beta}-1)}\\sum_{p=1}^{(n_{\\alpha}-i)}\\left(\\prod_{q=1}^{p}\\sum_{k_{q}=0}^{(j-S_{q-1})}\\right)1=\\Lambda_{\\beta\\alpha}\\left[\\frac{(n_{\\alpha}+n_{\\beta}-i+1)!}{(n_{\\alpha}-i+1)!n_{\\beta}!}-n_{\\beta}-1\\right], \\tag{51}\\] _where \\(\\Lambda_{\\beta\\alpha}\\) denotes the number of Jordan blocks such that \\(\\lambda_{\\alpha}\\neq\\lambda_{\\beta}\\)._ Further discussion of the physical significance of these adiabaticity conditions, as well as an illustrative example, can be found in Ref. [24]. The application of the adiabatic theorem for open quantum systems to problems in quantum information processing (e.g., in the context of adiabatic quantum computing [21; 22; 23]) and geometric phases [45; 46; 47] seems particularly appealing as a venue for future research. ## IV The Marzlin-Sanders inconsistency The adiabatic theorem can be deceptively simple when it is not carefully interpreted. In a recent paper, K.-P. Marzlin and B.C. Sanders argue that there may be an inconsistency in the adiabatic theorem for closed quantum systems [26], when the change in instantaneous adiabatic eigenstates is significant. Here we simplify their argument and show exactly where the fallacy lies that leads one to conclude that there is such an inconsistency. ### The condition for adiabaticity revisited Let us consider a quantum system evolving unitarily under the Schrodinger equation (1). At the initial time \\(t_{0}\\) the system is assumed to be in the particular instantaneous energy eigenstate \\(|E_{0}(t_{0})\\rangle\\). For a general time \\(t\\) the evolution of the system is described by \\[|\\psi(t)\\rangle=U(t,t_{0})|E_{0}(t_{0})\\rangle, \\tag{52}\\] where \\(U(t,t_{0})=\\mathcal{T}\\exp(-i\\int_{t_{0}}^{t}H(t^{\\prime})dt^{\\prime})\\) is the unitary evolution operator. Assuming that the Hamiltonian changes slowly in time and that \\(|E_{0}(t_{0})\\rangle\\) is non-degenerate, the adiabatic theorem implies that \\[|\\psi(t)\\rangle=e^{-i\\int^{t}E_{0}}e^{i\\beta_{0}}|E_{0}(t)\\rangle \\tag{53}\\] where \\(\\int^{t}E_{0}\\equiv\\int_{t_{0}}^{t}E_{0}(t^{\\prime})dt^{\\prime}\\) and the Berry's phase \\(\\beta_{0}\\) is given by \\(\\beta_{0}=i\\int^{t}\\langle E_{0}|\\dot{E}_{0}\\rangle\\). Therefore the substitution of Eq. 
(53) into the instantaneous eigenbasis of \\(H(t)\\), defined by \\(H(t)|E_{n}(t)\\rangle=E_{n}(t)|E_{n}(t)\\rangle\\), yields \\[H(t)|\\psi(t)\\rangle=E_{0}(t)|\\psi(t)\\rangle, \\tag{54}\\] which simply states that the wave function is an instantaneous eigenstate of the Hamiltonian in the adiabatic regime. Substituting Eq. (54) into the Schrodinger equation (1) one obtains \\[i|\\dot{\\psi}\\rangle=E_{0}(t)|\\psi(t)\\rangle. \\tag{55}\\] It is important to observe that the above equation has been derived by using the fact that _the adiabatic solution must be an instantaneous eigenstate of the Hamiltonian_. Moreover one can see that the adiabatic wave function is really a solution of the adiabatic Schrodinger equation by substituting Eq. (53) into Eq. (55), from which one obtains \\[\\left(1-|E_{0}(t)\\rangle\\langle E_{0}(t)|\\right)|\\dot{E_{0}}(t)\\rangle=0. \\tag{56}\\] In order to show that Eq. (56) is obeyed in the adiabatic regime we can project this equation by multiplying it by each instantaneous basis vector \\(\\langle E_{n}(t)|\\): \\[\\langle E_{n}(t)|\\left(1-|E_{0}(t)\\rangle\\langle E_{0}(t)|\\right)|\\dot{E_{0}}( t)\\rangle=0. \\tag{57}\\] Therefore we obtain that the above equation is satisfied, for each \\(n\\), if the adiabatic constraints [see Eq. (7)] are obeyed \\[\\left|\\frac{\\langle E_{n}(t)|\\dot{E_{0}}(t)\\rangle}{E_{0}(t)-E_{n}(t)}\\right| \\ll 1,\\ \\left(n\ eq 0\\right) \\tag{58}\\] ### The inconsistent step Now suppose that we wish to solve the adiabatic Schrodinger equation (55), but (incorrectly) ignore the fact that it has been derived by assuming that \\(|\\psi(t)\\rangle\\) is an instantaneous eigenstate of \\(H(t)\\). Then, imposing the initial condition \\(|\\psi(t_{0})\\rangle=|E_{0}(t_{0})\\rangle\\), one easily finds that Eq. (55) is satisfied by the following wave function: \\[|\\psi(t)\\rangle=e^{-i\\int^{t}E_{0}}|E_{0}(t_{0})\\rangle. \\tag{59}\\] However, this \\(|\\psi(t)\\rangle\\) is clearly inconsistent with Eq. (54) and therefore is an illegal solution, since it generally is not an instantaneous eigenstate of the Hamiltonian. Indeed, the general adiabatic solution is given by Eq. (53), which includes the Berry's phase and \\(|E_{0}(t)\\rangle\\), as opposed to \\(|E_{0}(t_{0})\\rangle\\). In fact, if we take Eq. (59) as the adiabatic wave function and substitute it into Eq. (53) we obtain \\[e^{i\\beta_{0}}|E_{0}(t)\\rangle=|E_{0}(t_{0})\\rangle. \\tag{60}\\] Then multiplying Eq. (60) by \\(\\langle E_{0}(t_{0})|\\) \\[e^{i\\beta_{0}}\\langle E_{0}(t_{0})|E_{0}(t)\\rangle=1. \\tag{61}\\] This inconsistency is precisely the one claimed by Marzlin and Sanders in Ref. [26], Eq. (6). Note that this result is obtained there in a somewhat more complicated manner, by considering the adiabatic solution of a time-reversed wave function \\(|\\bar{\\psi}(t)\\rangle=U^{\\dagger}(t,t_{0})|E_{0}(t_{0})\\rangle\\). We note that the solution for \\(|\\bar{\\psi}(t)\\rangle\\) [their Eq. (4)] is very similar to our Eq. (59). Hence, in the same way that we have been led to an inconsistent result due to a deliberately wrong adiabatic solution for \\(|\\psi(t)\\rangle\\), Ref. [26] has been led to an inconsistent solution for their \\(|\\bar{\\psi}(t)\\rangle\\). ## V Conclusion We have reviewed the adiabatic dynamics of both closed and open quantum systems. 
In the case of closed systems the adiabatic limit is the case where initial Schrodinger-eigenspaces evolve independently, without any transitions between eigenspaces; this limit can be relaxed and a perturbation theory can be developed in the number of transitions. In the case of open systems the notion of Schrodinger-eigenspaces is replaced by independently evolving Lindblad-Jordan blocks. A corresponding perturbation theory has not yet been developed. We have also shown that the inconsistency in the adiabatic theorem claimed in Ref. [26] is a consequence of an improper adiabatic solution for the wave function. One arrives at an inconsistent result by taking the _instantaneous_ adiabatic eigenstates and integrating them over _all time_ using the adiabatic Schrodinger equation. The adiabatic theorem remains a valuable and consistent tool for studying the dynamics of slowly evolving quantum systems. ###### Acknowledgements. The research of the authors is sponsored by CNPq-Brazil (to M.S.S.), and the Sloan Foundation, PREA and NSERC (to D.A.L.). ## References * (1) P. Ehrenfest, Ann. d. Phys. **51**, 327 (1916). * (2) M. Born and V. Fock, Zeit. f. Physik **51**, 165 (1928). * (3) T. Kato, J. Phys. Soc. Jap. **5**, 435 (1950). * (4) A. Messiah, _Quantum mechanics_ (North-Holland, Amsterdam, 1962), Vol. 2. * (5) M.V. Berry, Proc. R. Soc. London A **414**, 31 (1987). * (6) N. Nakagawa, Ann. Phys. **179**, 145 (1987). * (7) C.P. Sun, J. Phys. A **21**, 1595 (1988). * (8) Z. Wu, Phys. Rev. A **40**, 2184 (1989). * (9) L.D. Landau, Zeitschrift **2**, 46 (1932). * (10) C. Zener, Proc. Roy. Soc. London Ser. A **137**, 696 (1932). * (11) M. Gell-Mann and F. Low, Phys. Rev. **84**, 350 (1951). * (12) M.V. Berry, Proc. Roy. Soc. (Lond.) **392**, 45 (1989). * (13) F. Wilczek and A. Zee, Phys. Rev. Lett. **52**, 2111 (1984). * (14) P. Zanardi and M. Rasetti, Phys. Lett. A **264**, 94 (1999). * (15) J. Pachos, P. Zanardi, and M. Rasetti, Phys. Rev. A **61**, 010305 (2000). * (16) J.A. Jones, V. Vedral, A. Ekert, and G. Castagnoli, Nature **403**, 869 (2000). * (17) J. Pachos and S. Chountasis, Phys. Rev. A **62**, 052318 (2000). * (18) L.-M. Duan, J. I. Cirac, and P. Zoller, Science **292**, 1695 (2001). * (19) I. Fuentes-Guridi, J. Pachos, S. Bose, V. Vedral, and S. Choi, Phys. Rev. A **66**, 022102 (2002). * (20) L. Faoro, J. Siewert, and R. Fazio, Phys. Rev. Lett. **90**, 028301 (2003). * (21) E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser, LANL Preprint quant-ph/0001106. * (22) E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda, Science **292**, 472 (2001). * (23) D. Aharonov, W. v. Dam, J. Kempe, Z. Landau, S. Lloyd, O. Regev, LANL Preprint quant-ph/0405098. * (24) M.S. Sarandy and D.A. Lidar, LANL Preprint quant-ph/0404147, Phys. Rev. A (2004), in press. * (25) D.A. Lidar and K.B. Whaley, in _Irreversible Quantum Dynamics_, Vol. 622 of _Lecture Notes in Physics_, edited by F. Benatti and R. Floreanini (Springer, Berlin, 2003), p. 83, LANL Preprint quant-ph/0301032. * (26) K.-P. Marzlin and B.C. Sanders, Phys. Rev. Lett. **93**, 160408 (2004). * (27) D.M. Tong, K. Singh, L.C. Kwek, and C.H. Oh, LANL Preprint quant-ph/0406163 (2004). * (28) A.K. Pati and A.K. Rajagopal, LANL Preprint quant-ph/0405129 (2004). * (29) Z. Wu and H. Yang, LANL Preprint quant-ph/0410118 (2004). * (30) A. Mostafazadeh, _Dynamical Invariants, Adiabatic Approximation, and the Geometric Phase_ (Nova Science Publishers, New York, 2001). * (31) J.E. Avron and A. Elgart, Phys. Rev. 
A **58**, 4300 (1998); Commun. Math. Phys. **203**, 445 (1999). * (32) K. Gottfried and T.-M. Yan, _Quantum Mechanics: Fundamentals_ (Springer, New York, 2003). * (33) J.W. Brown and R.V. Churchill, _Fourier series and boundary value problems_ (McGraw-Hill, New York, 1993). * (34) The Riemann-Lebesgue lemma can be stated through the proposition: Let \\(f:[a,b]\\rightarrow{\\bf C}\\) be an integrable function on the interval \\([a,b]\\). Then \\(\\int_{a}^{b}\\,dx\\,e^{inx}f(x)\\to 0\\) as \\(n\\rightarrow\\pm\\infty\\). * (35) L.-B. Fu, J.-L. Chen, J. Phys. A **37**, 3699 (2004); E. Sjoqvist, LANL Preprint quant-ph/0404174; K.-P. Marzlin, S. Ghose, B.C. Sanders, LANL Preprint quant-ph/0405052; D.M. Tong, E. Sjoqvist, L.C. Kwek, C.H. Oh, LANL Preprint quant-ph/0405092; R.S. Whitney, Y. Makhlin, A. Shnirman, Y. Gefen, LANL Preprint cond-mat/0405267; I. Kamleitner, J.D. Cresser, B.C. Sanders, LANL Preprint quant-ph/0406018. * (36) H.-P. Breuer and F. Petruccione, _The Theory of Open Quantum Systems_ (Oxford University Press, Oxford, 2002). * (37) K. Kraus, Ann. of Phys. **64**, 311 (1971). * (38) R. Alicki and K. Lendi, _Quantum Dynamical Semigroups and Applications_, No. 286 in _Lecture Notes in Physics_ (Springer-Verlag, Berlin, 1987). * (39) V. Gorini, A. Kossakowski, and E.C.G Sudarshan, J. Math. Phys. **17**, 821 (1976). * (40) G. Lindblad, Commun. Math. Phys. **48**, 119 (1976). * (41) D.A. Lidar, Z. Bihary, and K.B. Whaley, Chem. Phys. **268**, 35 (2001). * (42) K. Lendi, Phys. Rev. A **33**, 3358 (1986). * (43) H.-P. Breuer, Phys. Rev. A **70**, 012106 (2004). * (44) R.A. Horn and C.R. Johnson, _Matrix Analysis_ (Cambridge University Press, Cambridge, UK, 1999). * (45) A.C.A. Pinto and M.T. Thomaz, J. Phys. A **36**, 7461 (2003). * (46) A. Carollo, I. Fuentes-Guridi, M.F. Santos, and V. Vedral, Phys. Rev. Lett. **92**, 020402 (2004). * (47) I. Kamleitner, J.D. Cresser, and B.C. Sanders, e-print quant-ph/0406018 (2004).
We review the quantum adiabatic approximation for closed systems, and its recently introduced generalization to open systems (M.S. Sarandy and D.A. Lidar, e-print quant-ph/0404147). We also critically examine a recent argument claiming that there is an inconsistency in the adiabatic theorem for closed quantum systems [K.P. Marzlin and B.C. Sanders, Phys. Rev. Lett. 93, 160408 (2004)] and point out how an incorrect manipulation of the adiabatic theorem may lead one to obtain such an inconsistent result. Adiabatic Theorem; Berry's Phases; Open Quantum Systems; Quantum Computation. pacs: 03.65.Ta, 03.65.Yz, 03.67.-a, 03.65.Vf
# Renormalization flow of QED Holger Gies Institute for Theoretical Physics, Heidelberg University Philosophenweg 16, 69120 Heidelberg, Germany Joerg Jaeckel Institute for Theoretical Physics, Heidelberg University Philosophenweg 16, 69120 Heidelberg, Germany ###### pacs: 12.20.-m, 11.15.Tk, 11.10.Hi + Footnote †: preprint: HD-THEP-04-21 Though quantum field theory celebrates its greatest triumph with quantum electrodynamics (QED), the high-energy behavior of QED remains a sore spot, since it is inaccessible to the otherwise successful perturbative concepts. For instance, keeping the renormalized coupling \\(e_{\\rm R}\\) fixed, small-coupling perturbation theory predicts its own failure in the ultraviolet (UV) in the form of the Landau pole singularity: \\[\\frac{1}{e_{\\rm R}^{2}}-\\frac{1}{e_{\\Lambda}^{2}}=\\beta_{0}\\,\\ln\\frac{\\Lambda }{m_{\\rm R}},\\quad\\beta_{0}=\\frac{N_{\\rm f}}{6\\pi^{2}}. \\tag{1}\\] The coupling \\(e_{\\Lambda}\\) at the UV cutoff \\(\\Lambda\\) diverges for \\(\\Lambda\\to\\Lambda_{\\rm L}=m_{\\rm R}\\exp(1/(\\beta_{0}e_{\\rm R}^{2}))\\). It was early realized [1] that this behavior can signal the failure of QED as a fundamental quantum field theory which should be valid on all length scales. From a different viewpoint, keeping the initial UV coupling \\(e_{\\Lambda}\\) fixed, the renormalized coupling \\(e_{\\rm R}\\) vanishes in the limit \\(\\Lambda\\to\\infty\\), resulting in a free, or \"trivial\", theory with complete charge screening. Already in the drawing of the renormalization group (RG), a possible alternative scenario was discussed [2] in which an interacting UV-stable fixed point of the RG transformation, \\(e_{\\Lambda}^{2}\\to e_{*}^{2}\\in(0,\\infty)\\) for \\(\\Lambda\\to\\infty\\), facilitates a finite UV completion of QED (\"asymptotic safety\" [3]). However, no sign of such a fixed point has been found so far. On the contrary, nonperturbative lattice simulations have provided evidence for triviality [4; 5]. Moreover, careful extrapolation of raw lattice data shows that the Landau pole singularity is outside the physical parameter space owing to the onset of spontaneous chiral symmetry breaking (\\(\\chi\\)SB) [4]. This strong-coupling phenomenon of \\(\\chi\\)SB has also been observed in analytical studies using truncated Dyson-Schwinger equations (DSE) in a quenched approximation [6]. In addition to the fundamental character of this problem as a matter of principle, the high-energy fate of QED or other standard-model building blocks and its further extensions can give us direct bounds on the scale where new physical phenomena may be expected. In particular Landau pole singularities of the type of Eq. (1) are used to constrain properties of hypothetical particles, such as the Higgs scalar in the standard model [7]. Our work is moreover motivated by the recent observation that a hypothetically nontrivial U(1) sector of the standard model with a UV-stable fixed point has the potential to solve the hierarchy problem of the Higgs sector [8]. In this letter, we report on nonperturbative results obtained from the RG flow equation for the effective average action \\(\\Gamma_{k}\\)[9]. We work in Euclidean spacetime continuum where our methods can easily bridge a wide range of scales, allow for the full implementation of chiral symmetry as well as a simple inclusion of bare masses (explicit \\(\\chi\\)SB terms), and furnish unquenched calculations. 
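As a purely perturbative point of orientation (an illustration, not part of the nonperturbative analysis below), the Landau pole scale implied by Eq. (1) is easily evaluated numerically; the sketch assumes \\(N_{\\rm f}=1\\), the physical electron mass, and the standard convention \\(e_{\\rm R}^{2}=4\\pi\\alpha\\) with \\(\\alpha\\simeq 1/137\\).

```python
# One-loop estimate of the Landau pole scale from Eq. (1):
#   Lambda_L = m_R * exp(1 / (beta_0 * e_R^2)),  beta_0 = N_f / (6 pi^2).
# Illustrative numbers only: N_f = 1, m_R = electron mass, e_R^2 = 4*pi*alpha.
import math

N_f = 1
alpha = 1.0 / 137.036            # fine-structure constant
e_R2 = 4.0 * math.pi * alpha     # renormalized coupling e_R^2
m_R = 0.511e-3                   # electron mass in GeV
beta0 = N_f / (6.0 * math.pi ** 2)

log10_Lambda_L = math.log10(m_R) + 1.0 / (beta0 * e_R2) / math.log(10.0)
print(f"beta_0 = {beta0:.5f}")
print(f"Landau pole scale: Lambda_L ~ 10^{log10_Lambda_L:.0f} GeV")

# The same one-loop relation gives the running coupling at a cutoff Lambda:
def e2_at_cutoff(Lambda_GeV):
    return 1.0 / (1.0 / e_R2 - beta0 * math.log(Lambda_GeV / m_R))

print(f"e^2 at 10^19 GeV: {e2_at_cutoff(1e19):.4f}")
```

With these inputs the one-loop pole sits roughly at \\(10^{277}\\) GeV, the same ballpark as the maximal UV extension quoted further below.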
The effective average action is a free-energy functional that interpolates between the initial UV action \\(\\Gamma_{k=\\Lambda}\\) and the full quantum action \\(\\Gamma_{k\\to 0}\\). The infrared (IR) regulator scale \\(k\\) separates the fluctuations with momenta \\(p^{2}\\gtrsim k^{2}\\), the effect of which has already been included in \\(\\Gamma_{k}\\), from those with smaller momenta which have not yet been integrated out. The full RG trajectory is given by the solution to the flow equation (\\(t=\\ln(k/\\Lambda)\\)), \\[\\partial_{t}\\Gamma_{k}[\\phi]=\\frac{1}{2}\\,{\\rm STr}\\,\\partial_{t}R_{k}\\,( \\Gamma_{k}^{(2)}[\\phi]+R_{k})^{-1}, \\tag{2}\\] where \\(\\Gamma_{k}^{(2)}\\) denotes the second functional derivative with respect to the fields \\(\\phi=(A_{\\mu},\\bar{\\psi},\\psi)\\), and the regulator function \\(R_{k}\\) implements the infrared regularization at \\(p^{2}\\simeq k^{2}\\). Effectively, Eq. (2) is a smooth realization of the Wilsonian momentum-shell integration, being dominated by momenta \\(p^{2}\\simeq k^{2}\\). On microscopic scales, QED is defined by the action \\[S_{\\Lambda}=\\int_{x}\\left(\\frac{1}{4}F_{\\mu\ u}F_{\\mu\ u}+\\bar{\\psi}i\ ot{D}[A ]\\psi+\\bar{\\psi}\\gamma_{5}m_{\\Lambda}\\psi\\right), \\tag{3}\\] which involves the microscopic UV parameters \\(e_{\\Lambda}\\) and \\(m_{\\Lambda}\\), and \\(D_{\\mu}[A]=\\partial_{\\mu}-{\\rm i}e_{\\Lambda}A_{\\mu}\\). Further possible gauge - invariant interactions are RG irrelevant by power counting. Invoking the universality hypothesis, the IR physics should only depend on the parameters occurring inEq. (3). This hypothesis can nevertheless be questioned: since the coupling increases towards the UV, higher operators can acquire large anomalous dimensions that spoil naive power counting and enlarge the set of RG relevant operators, offering new routes to UV completion. These operators may in turn exert a strong influence on the running gauge coupling and potentially induce an interacting fixed point. In order to test this scenario quantitatively, we study how the running of the gauge coupling can be affected by photonic self-interactions of the type \\[\\Gamma_{k,A}=\\!\\int_{x}\\!\\!W(\\theta)=\\!\\!\\int_{x}\\!\\left(W_{1} \\theta+\\frac{W_{2}}{2}\\theta^{2}+\\frac{W_{3}}{3!}\\theta^{3}+\\ldots\\right)\\!, \\tag{4}\\] where \\(\\theta=(1/4)F_{\\mu\ u}F_{\\mu\ u}\\). Thus, we include infinitely many fluctuation-induced photon operators in our truncation of \\(\\Gamma_{k}\\) (\\(W_{1}\\equiv Z_{F}\\) denotes the wave function renormalization of the photon). Of course, there are further tensor structures involving, e.g., the dual field strength that can contribute to the flow, but we do not expect their influence on the running coupling to be qualitatively different from those of Eq. (4). Moreover, our truncation neglects the momentum dependence of the couplings \\(W_{i}\\). Since it is natural to assume that their strength will drop off with increasing external momenta, we expect that momentum dependencies imply a weaker influence on the gauge coupling than is estimated by Eq. (4). Note that this argument could be invalidated by the occurrence of yet unknown photonic bound states giving rise to momentum poles in the couplings \\(W_{i}\\). 
Fluctuations induce not only photonic but also fermionic self-interactions, the lowest order of which we include in the fermionic part of the truncation, \\[\\Gamma_{k,\\psi} = \\int_{x}\\bigg{(}\\bar{\\psi}(\\mathrm{i}Z_{\\psi}\\partial\\!\\!\\!/+Z_{1 }e_{\\mathrm{A}}\\!\\!\\!/+Z_{\\psi}m\\gamma_{5})\\psi\\] \\[+\\frac{1}{2}\\big{[}Z\\!-\\!\\bar{\\lambda}_{-}(\\mathrm{V}\\!\\!\\!/- \\mathrm{A})+Z_{+}\\bar{\\lambda}_{+}(\\mathrm{V}\\!\\!+\\!\\!\\mathrm{A})\\big{]}\\bigg{)},\\] where \\((\\mathrm{V}\\pm\\mathrm{A}):=(\\bar{\\psi}\\gamma_{\\mu}\\psi)^{2}\\mp(\\bar{\\psi} \\gamma_{\\mu}\\gamma_{5}\\psi)^{2}\\). These fermionic interactions do not only influence the running of the gauge coupling but are also essential for the approach to \\(\\chi\\)SB in reminiscence of the Nambu-Jona-Lasinio (NJL) model. The \\(k\\)-dependent dimensionless running couplings are related to the bare couplings \\(e_{\\mathrm{A}},\\bar{\\lambda}_{\\pm}\\) by \\[e=\\frac{e_{\\mathrm{A}}Z_{1}}{Z_{F}^{1/2}Z_{\\psi}},\\quad\\lambda_{ \\pm}=\\frac{Z_{\\pm}k^{2}\\bar{\\lambda}_{\\pm}}{Z_{\\psi}^{2}}. \\tag{6}\\] QED initial conditions for the flow are defined by \\(Z_{F},Z_{\\psi},Z_{1}\\big{|}_{\\mathrm{A}}=1\\) and \\(W_{i>1},\\bar{\\lambda}_{\\pm}\\big{|}_{\\mathrm{A}}=0\\). Inserting this truncation into Eq. (2), we obtain the \\(\\beta\\) functions for \\(e\\), \\(\\lambda_{\\pm}\\), \\(m\\), \\(Z_{F}\\), \\(W_{i}\\) and \\(Z_{\\psi}\\), once the regulator \\(R_{k}\\) is specified. Of central interest is the photon anomalous dimension \\(\\eta_{F}=-\\partial_{t}\\ln Z_{F}\\) which contains the photon self-interaction contributions to the \\(\\beta_{e^{2}}\\) function, \\(\\beta_{e^{2}}=\\eta_{F}e^{2}+\\ldots\\) (cf. Eq. (9) below). In order to deal with the photon sector of Eq. (4), we use techniques developed in [10] that employ background-field-dependent and chiral-symmetry-preserving regulators of the form \\[R_{k}^{\\psi}(\\mathrm{i}\ ot{D}) = Z_{\\psi}\\mathrm{i}\ ot{D}\\ \\,\\,r_{\\mathrm{F}}\\big{(}(\\mathrm{i}\ ot{D})^{2}/k^{2}\\big{)},\\] \\[R_{k}^{A}(\\bar{\\Gamma}_{k,A}^{(2)}) = \\bar{\\Gamma}_{k,A}^{(2)}\\ \\,\\,r\\big{(}\\bar{\\Gamma}_{k,A}^{(2)}/(Z_{F}k^{2}) \\big{)}, \\tag{7}\\] where the bar indicates background-field dependence, and \\(r(y),r_{\\mathrm{F}}(y)\\) denote dimensionless regulator shape functions. As a result, we arrive at an asymptotic series for \\(\\eta_{F}\\) to all orders of the coupling, \\[\\eta_{F}=\\sum_{n=1}^{\\infty}a_{n}(r;m,\\eta_{\\psi})\\,\\left(\\frac{e ^{2}}{16\\pi^{2}}\\right)^{n}=\\frac{N_{\\mathrm{f}}}{6\\pi^{2}}e^{2}+\\mathcal{O}( e^{4},m^{2}e^{2}),\\] where the coefficients \\(a_{n}\\) depend functionally on the regulator shape functions \\(r,r_{\\mathrm{F}}\\). Here, the structure of the all-order result arises from the feedback of the flow of the \\(W_{i}\\)'s on \\(\\eta_{F}\\), whereas the global shape of the function \\(W(\\theta)\\) has been neglected [10]. To one loop, we obtain the correct universal \\(\\beta_{e^{2}}\\) function coefficient, since \\(\\beta_{e^{2}}=\\eta_{F}e^{2}+\\ldots\\). To higher order, the result is explicitly regulator dependent as it should be, since only the existence of zeroes of the \\(\\beta_{e^{2}}\\) function and their critical exponents are universal.1 Now, QED could evade triviality if a UV-stable fixed point in \\(\\beta_{e^{2}}\\) and \\(\\eta_{F}\\) existed for all regulators. 
By contrast, our results show that \\(\\eta_{F}(e_{\\star}^{2})=0\\) has only the solution \\(e_{\\star}^{2}=0\\) for _all_ regulator shape functions \\(r,r_{\\mathrm{F}}\\). In fact, for all physically admissible regulators a lower bound \\(0<\\eta_{F}^{\\text{1-loop}}/2\\leq\\eta_{F}[r]\\) exists for all values \\(e^{2}>0\\). In the strong-coupling regime, this lower bound is satisfied by Litim's optimized regulator [11], Footnote 1: Already the two-loop coefficient is regulator dependent, since we are using a mass-dependent regularization scheme. \\[r_{\\mathrm{F}}(y)=\\frac{1}{\\sqrt{y}}(1-\\sqrt{y})\\theta(1-y), \\quad r(y)=\\frac{1}{y}(1-y)\\theta(1-y),\\] for which the all-order anomalous dimension yields a simple integral representation, \\[\\eta_{F} = \\frac{e^{2}N_{\\mathrm{f}}}{6\\pi^{2}}\\frac{1-\\eta_{\\psi}}{1+m^{2} /k^{2}}\\big{[}1-I(e^{2})],\\] \\[I(e^{2})=\\frac{1}{\\pi^{2}}\\int_{0}^{\\infty}ds\\,s^{2}\\,K_{2}(2 \\sqrt{s})\\,\\mathrm{Li}_{2}\\left(e^{-\\frac{4\\pi^{2}}{\\pi}\\sqrt{\\frac{s}{s}}} \\right),\\] involving a modified Bessel function \\(K_{2}\\) and the polylogarithm \\(\\mathrm{Li}\\). In the strong-coupling limit, \\(e^{2}\\to\\infty\\), the integral goes to \\(I(e^{2})\\to 1/2\\), such that the strong-coupling limit finally approaches half the one-loop result.2 Moreover, the explicit electron mass dependence illustrates the threshold behavior: once the IR scale \\(k\\) drops below the electron mass scale, fluctuations become strongly suppressed and the flow essentially stops. The fermionic self-interactions also contribute directly to the \\(\\beta_{e^{2}}\\) function. The detailed form can be read off from a Ward-Takahashi identity as demonstrated in [8], \\[\\partial_{t}e^{2}\\equiv\\beta_{e^{2}}=\\eta_{F}e^{2}+2e^{2}\\frac{\\sum_{i=\\pm}c_{ i}\\partial_{t}\\lambda_{i}}{1+\\sum_{i=\\pm}c_{i}\\lambda_{i}}, \\tag{9}\\] where \\(c_{+}=N_{\\rm f}/(4\\pi)^{2}\\), \\(c_{-}=(N_{\\rm f}+1)/(4\\pi)^{2}\\) for the optimized regulator. From this representation, it is apparent that if \\(\\eta_{F}\\) does not induce a UV-stable interacting fixed point, no such fixed point can be induced at all, since \\(\\partial_{t}\\lambda_{\\pm}\\!\\to\\!0\\) at a global fixed point. (Explicit representations of the \\(\\lambda_{\\pm}\\) flows can be found in [8].) As one of our main results, we therefore exclude such a fixed point for the resolution of the triviality problem. We have confirmed that even higher-order fermionic and fermion-gauge field interactions cannot modify the qualitative structure of Eq. (9). The gauge coupling hence is generally not bounded from above for increasing UV cutoff. In order to deal with the phenomenon of \\(\\chi\\)SB that we expect for strong-coupling, we use partial bosonization techniques as developed in [12] in order to study the formation of the chiral condensate and a dynamical fermion mass. Moreover, we can treat dynamical as well as explicit fermion masses on the same footing by translating the fermion self-interactions as well as the fermion mass into a bosonic sector of the form \\[\\Gamma_{k,\\phi}=\\int Z_{\\phi}|\\partial_{\\mu}\\phi|^{2}+U(\\phi)+\\bar{h}(\\bar{ \\psi}_{\\rm R}\\psi_{\\rm L}\\phi-\\bar{\\psi}_{\\rm L}\\psi_{\\rm R}\\phi^{*}). \\tag{10}\\] Here we concentrate on the scalar boson in the \\(s\\) channel. 
We truncate the scalar potential to the simple form \\[U(\\phi)=\\bar{m}_{\\phi}^{2}\\phi^{*}\\phi+\\frac{1}{2}\\bar{\\lambda}_{\\phi}(\\phi^{ *}\\phi)^{2}-\\frac{1}{2}\\bar{\ u}(\\phi+\\phi^{*}), \\tag{11}\\] where the \\(\\bar{\ u}\\) term breaks chiral symmetry explicitly and thus carries the information about an explicit electron mass; if \\(\\bar{\ u}=0\\) vanishes at any scale it vanishes at all scales by chiral symmetry (massless QED). Spontaneous \\(\\chi\\)SB is monitored by the sign of \\(\\bar{m}_{\\phi}^{2}\\), negative values indicating an induced chiral condensate. Following the techniques of [12], we trade the four-fermion interactions and the electron mass for the parameters occurring in Eq. (10), such that, for instance, the resulting electron mass can be deduced from \\[m=\\bar{h}|\\phi_{0}|/Z_{\\psi}, \\tag{12}\\] where \\(\\phi_{0}\\) denotes the minimum of the potential (11). We would like to stress that the fermion-boson translation employed here is a highly efficient technique for controlling the (approximate) chiral symmetry together with its explicit breaking by the mass; no fine-tuning of the bare mass is necessary and there is no proliferation of symmetry-breaking operators as in a purely fermionic formulation. Together with the \\(\\beta\\) functions for the bosonic sector (see [13]), we can evaluate the RG trajectory of the complete system for a variety of initial conditions. Although the number of parameters has seemingly increased, the system remains solely determined by the choice of the gauge coupling and the electron mass, owing to the existence of an IR stable \"bound-state\" fixed point [12; 13]. This is a manifestation of universality: the physics at large distance scales is independent of the details of the microscopic interactions. For the quantitative analysis, we work in the Landau gauge which is a fixed point of the RG, and we concentrate on the \\(N_{\\rm f}\\!=1\\) case where the \"chiral\" symmetry is given by \\(U_{\\rm F}(1)\\times U_{\\rm A}(1)\\), i.e., fermion-number and axial \\(U(1)\\)'s with \\(\\chi\\)SB corresponding to the breaking of \\(U_{\\rm A}(1)\\). At zero bare mass, \\(m_{\\rm A}=0\\), i.e., without explicit \\(\\chi\\)SB, our analysis reveals two phases separated by a critical coupling \\(e_{\\rm cr}^{2}\\). For \\(e_{\\rm A}^{2}\\leq e_{\\rm cr}^{2}\\), chiral symmetry is preserved and the electron remains massless. For \\(e_{\\rm A}^{2}>e_{\\rm cr}^{2}\\), \\(\\chi\\)SB renders the electron massive and a Goldstone boson arises from the \\(\\phi\\) field. Switching on an explicit electron mass, the transition between the two phases turns into a crossover with the light mode of the \\(\\phi\\) field interpolating between a positronium bound state and a pseudo-Goldstone boson. In our truncation, the value of the critical coupling is \\(e_{\\rm cr}^{2}=38.41\\). For comparison, we also mention the result for \\(e_{\\rm cr}^{2}\\) in the quenched approximation, \\(e_{\\rm cr,q}^{2}\\simeq 14.81\\), which is in reasonable agreement with the quenched DSE result [6] in the Landau gauge, \\(e_{\\rm cr,qDSE}^{2}=4\\pi^{2}/3\\simeq 13.16\\). Note that our approximation includes non-ladder diagrams such that gauge-dependences are reduced [14]. The relation \\(e_{\\rm cr}^{2}>e_{\\rm cr,q}^{2}\\) results from the fact that unquenched fluctuations imply charge screening; therefore larger bare couplings are necessary for \\(\\chi\\)SB. 
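To make the role of the order parameter concrete, the following sketch (with invented parameter values; in the actual calculation these are running couplings fixed by the flow) locates the minimum of the truncated potential of Eq. (11) for a real field and reads off the fermion mass via Eq. (12).

```python
# Minimal sketch: minimum of the truncated scalar potential, Eq. (11), for a
# real field phi, and the resulting fermion mass from Eq. (12).
# All numerical values (mphi2, lam, nu, h, Zpsi) are invented for illustration.
import numpy as np

def U(phi, mphi2, lam, nu):
    return mphi2 * phi**2 + 0.5 * lam * phi**4 - nu * phi

def fermion_mass(mphi2, lam, nu, h=1.0, Zpsi=1.0):
    phi = np.linspace(-3.0, 3.0, 20001)
    phi0 = phi[np.argmin(U(phi, mphi2, lam, nu))]   # global minimum of U
    return abs(h * phi0) / Zpsi, phi0               # Eq. (12)

# (mphi2, nu): symmetric phase, spontaneously broken phase, explicit breaking.
for mphi2, nu in [(+0.5, 0.0), (-0.5, 0.0), (+0.5, 0.05)]:
    m, phi0 = fermion_mass(mphi2, lam=1.0, nu=nu)
    print(f"mphi2 = {mphi2:+.2f}, nu = {nu:.2f}  ->  phi0 = {phi0:+.3f}, m = {m:.3f}")
```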
In Fig.1, we plot the resulting renormalized values of the gauge coupling and electron mass, \\[e_{\\rm R}=\\lim_{k\\to 0}e,\\quad m_{\\rm R}=\\lim_{k\\to 0}m, \\tag{13}\\] as functions of the bare parameters. Shown are lines of constant bare mass \\(m_{\\rm A}\\). For finite \\(m_{\\rm A}\\), the curves exhibit a linear regime and a pole. This displays the crossover behavior from a \\(\\chi\\)SB dominated mass at strong coupling (linear regime) to an explicit mass term at weak coupling; the limiting pole corresponds to \\(m_{\\rm R}\\simeq m_{\\rm A}\\) for weak coupling. If we attempt to move the cutoff to infinity but keep \\(m_{\\rm R}\\) fixed, we need to take the limit \\(m_{\\rm R}/\\Lambda\\to 0\\) which can only be approached on the curve \\(m_{\\rm A}/\\Lambda\\to 0\\). In this limit \\(\\ln(m_{\\rm R}/\\Lambda)\\to-\\infty\\), the renormalized coupling goes to \\(e_{\\rm R}\\to 0\\). This is the manifestation of triviality: the whole range of bare couplings \\(0\\leq e_{\\rm A}^{2}\\leq e_{\\rm cr}^{2}\\) for \\(m_{\\rm R}\\) fixed is mapped onto a single point \\(e_{\\rm R}^{2}=0\\). For a non-trivial theory, at least one curve would have to intersect the \\(1/e_{\\rm R}^{2}\\) axis at some finite \\(e_{\\rm R}^{2}\\) for \\(m_{\\rm R}/\\Lambda\\to 0\\). On the other hand, if we want to keep \\(e_{\\rm R}>0\\) fixed, we are forced to accept a finite value for \\(m_{\\rm R}/\\Lambda>0\\). Fixing the electron mass to its physical value also determines the absolute value of the cutoff, once the bare mass is fixed. The maximal cutoff value is obtained for vanishing bare mass \\(m_{\\Lambda}/\\Lambda\\to 0\\), and we find \\(\\Lambda_{\\rm max}^{m_{\\Lambda}=0}\\sim 10^{278\\pm 8}\\)GeV for QED parameters. Yet, the limit \\(m_{\\Lambda}\\to 0\\) does not correspond to \"ordinary\" QED, since the electron mass is then fully generated by \\(\\chi\\)SB, and a massless Goldstone boson arises. In order to rediscover \"ordinary\" QED in the IR with given \\(e_{\\rm R}\\) and \\(m_{\\rm R}\\), we have to choose a sufficiently large bare mass \\(m_{\\Lambda}\\) in order to lift the pseudo-Goldstone boson to a positronium state with mass \\(\\simeq 2m_{\\rm R}\\). This implies a small reduction of the maximal UV scale. For given renormalized mass and coupling, we observe that the maximum possible bare coupling \\(e_{\\Lambda}\\) occurs for \\(m_{\\Lambda}\\to 0\\), which is a supercritical but still finite number. This fact describes the absence of the Landau pole singularity: for given physical IR parameters, large bare coupling values are inaccessible owing to \\(\\chi\\)SB, in agreement with [4]. We would like to stress that the maximal UV scale is regulator dependent. Considering QED as being embedded in an underlying theory, the latter should become visible at this scale. In this sense, the regulator dependence corresponds to the physical threshold behavior towards the underlying theory. Next we check whether QED can evade triviality in an unusual way: we fine-tune the system onto \\(e_{\\rm cr}^{2}\\) from above with \\(m_{\\rm R}/\\Lambda\\to 0\\), such that the IR spectrum consists of a light fermion, a free photon (since \\(e_{\\rm R}\\to 0\\)), and a Goldstone boson with Yukawa coupling to the fermion. In other words, QED with \\(\\chi\\)SB could have a Yukawa theory as low-energy limit. However, we have confirmed explicitly that this Yukawa coupling is also trivial in the limit of \\(\\Lambda\\to\\infty\\) in much the same way as the gauge coupling. 
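The qualitative content of this map can already be mimicked at one loop. The following caricature (based solely on Eq. (1), not on the nonperturbative flow underlying Fig. 1) shows that at fixed bare coupling the renormalized coupling is driven to zero as the cutoff is removed, while fixing the renormalized coupling enforces a finite maximal cutoff.

```python
# One-loop caricature of the triviality map of Fig. 1 (illustration only;
# the figure itself is based on the nonperturbative flow).
import math

N_f = 1
beta0 = N_f / (6.0 * math.pi ** 2)

def e2_R(e2_Lambda, Lambda_over_mR):
    """Renormalized coupling from Eq. (1) at fixed bare coupling."""
    return 1.0 / (1.0 / e2_Lambda + beta0 * math.log(Lambda_over_mR))

# Fixed bare coupling: e_R^2 -> 0 as Lambda/m_R -> infinity (triviality).
for exponent in (2, 10, 50, 100, 250):
    print(f"Lambda/m_R = 1e{exponent:<4d}  e_R^2 = {e2_R(10.0, 10.0**exponent):.4f}")

# Fixed renormalized coupling: the bare coupling diverges at a finite maximal
# cutoff, log10(Lambda_max/m_R) = 1 / (beta0 * e_R^2 * ln 10).
e2_R_target = 4.0 * math.pi / 137.036
print("log10(Lambda_max/m_R) =",
      round(1.0 / (beta0 * e2_R_target * math.log(10.0))))
```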
We would finally like to point to open questions of the present investigation. First, our truncation of the fermion sector is organized as a derivative expansion. This is justified if the fermion anomalous dimension \\(\\eta_{\\psi}\\) remains small. In the Landau gauge, we have confirmed that this is indeed the case even at strong coupling, so our truncation is self-consistent. Nevertheless, as is visible in Eq. (8), a potentially large fermion anomalous dimension could strongly modify the UV behavior. Even though this may not happen in the QED universality class, a fermionic system with large \\(\\eta_{\\psi}\\) and strong UV momentum dependences can offer new routes to UV completion of interacting QFT's. Secondly, it seems worthwhile to extend our studies to theories with strong NJL-like interactions. The UV flow of systems with strong gauge and four-fermion couplings still is unknown territory, the exploration of which is dedicated to future work. The authors are grateful to Christian Fischer for useful discussions and acknowledge financial support by the DFG under contract Gi 328/1-2. ## References * (1) L.D. Landau, in _Niels Bohr and the Development of Physics_, ed. W. Pauli, Pergamon Press, London, (1955). * (2) M. Gell-Mann and F. E. Low, Phys. Rev. **95**, 1300 (1954). * (3) S. Weinberg, in _C76-07-23.1_ HUTP-76/160, Erice Subnucl. Phys., 1, (1976). * (4) M. Gockeler, R. Horsley, V. Linke, P. Rakow, G. Schierholz and H. Stuben, Phys. Rev. Lett. **80**, 4119 (1998). * (5) S. Kim, J. B. Kogut and M. P. Lombardo, Phys. Lett. B **502**, 345 (2001); Phys. Rev. D **65**, 054015 (2002). * (6) V. A. Miransky, Nuovo Cim. A **90**, 149 (1985);R. Alkofer and L. von Smekal, Phys. Rept. **353**, 281 (2001). * (7) T. Hambye and K. Riesselmann, Phys. Rev. D **55**, 7255 (1997). * (8) H. Gies, J. Jaeckel and C. Wetterich, hep-ph/0312034. * (9) C. Wetterich, Phys. Lett. B **301**, 90 (1993). * (10) H. Gies, Phys. Rev. D **66**, 025006 (2002); Phys. Rev. D **68**, 085015 (2003). * (11) D. F. Litim, Phys. Lett. B **486**, 92 (2000). * (12) H. Gies and C. Wetterich, Phys. Rev. D **65**, 065001 (2002); J. Jackel and C. Wetterich, Phys. Rev. D **68**, 025020 (2003). * (13) H. Gies and C. Wetterich, Phys. Rev. D **69**, 025001 (2004). * (14) K. I. Aoki, K. i. Morikawa, J. I. Sumi, H. Terao and M. Tomoyose, Prog. Theor. Phys. **97**, 479 (1997). Figure 1: Map of the bare couplings \\((1/e_{\\rm A}^{2},\\log_{10}(m_{\\Lambda}/\\Lambda))\\) to the plane of renormalized couplings \\((1/e_{\\rm R}^{2},\\log_{10}(m_{\\rm R}/\\Lambda))\\). The dashed vertical lines denote lines of constant bare mass in the bare coupling plane which are mapped onto the solid lines in the renormalized coupling plane (sub- and supercritical values of the bare coupling are denoted by green and red, respectively). The solid red line is the line of vanishing bare mass (the thin black line, its 1-loop counterpart). Its pre-image is a vertical line at \\(-\\infty\\). Note that the region below this line is inaccessible, i.e., for a certain fixed value of the renormalized coupling we have a minimal renormalized mass in units of the cutoff. Hence it is impossible to send the cutoff to infinity while keeping both renormalized mass and coupling fixed. This demonstrates triviality.
We investigate textbook QED in the framework of the exact renormalization group. In the strong-coupling region, we study the influence of fluctuation-induced photonic and fermionic self-interactions on the nonperturbative running of the gauge coupling. Our findings confirm the triviality hypothesis of complete charge screening if the ultraviolet cutoff is sent to infinity. Though the Landau pole does not belong to the physical coupling domain owing to spontaneous chiral symmetry breaking (\\(\\chi\\)SB), the theory predicts a scale of maximal UV extension of the same order as the Landau pole scale. In addition, we verify that the \\(\\chi\\)SB phase of the theory which is characterized by a light fermion and a Goldstone boson also has a trivial Yukawa coupling.
# Fault-Tolerant Quantum Computation via Exchange interactions M. Mohseni Department of Physics, University of Toronto, 60 St. George St., Toronto, ON, M5S 1A7, Canada D.A. Lidar Chemical Physical Theory Group, and Center for Quantum Information and Quantum Control, University of Toronto, 80 St. George St., Toronto, ON, M5S 3H6, Canada ###### pacs: 03.67.Lx,03.67.Pp,03.65.Yz In the \"standard paradigm\" of quantum computing (QC) a universal set of quantum logic gates is enacted via the application of a complete set of single-qubit gates, along with a non-trivial (entangling) two-qubit gate [1]. It is in this context that the theory of fault tolerant quantum error correction (QEC) (e.g., [2]), and the well-known associated threshold results (e.g., [3]), have been developed. These results are of crucial importance since they establish the in-principle viability of QC, despite the adverse effects of decoherence and inherently inaccurate controls. However, some of the assumptions underpinning the standard paradigm may translate into severe technical difficulties in the laboratory implementation of QC, in particular in solid-state devices. Any quantum system comes equipped with a set of \"naturally available\" interactions, i.e., interactions which are inherent to the system and are determined by its symmetries, and are most easily controllable. For example, the symmetries of the Coulomb interaction dictate the special scalar form of the Heisenberg exchange interaction, which features in a number of promising solid-state QC proposals [4]. The introduction of single-spin operations requires a departure from this symmetry, and typically leads to complications, such as highly localized magnetic fields [5], powerful microwave radiation that can cause excessive heating, or \\(g\\)-tensor engineering/modulation [6]. For these reasons the \"encoded universality\" (EU) alternative to the standard paradigm has been developed (e.g., [7]). In EU, single-qubit interactions with external control fields are replaced by \"encoded\" single-qubit operations, implemented on logical qubits via control of exchange interactions between their constituent physical qubits. It has been shown that such an exchange-only approach is also capable of universal QC, on the (decoherence-free) subspace spanned by the encoded qubits [8]. Explicit pulse sequences have been worked out for the implementation of encoded logic gates in the case when only the exchange interaction is available [9; 10], which can be simplified by assuming the controllability of a global, time-dependent magnetic field [11; 12]. The issue of the robustness of encoded universal QC in the presence of decoherence has been addressed in a number of publications, mostly using a combination of decoherence-free subspaces (DFSs) and dynamical decoupling methods [10; 13]. However, in contrast to the case of the standard paradigm, so far a theory of fault tolerant QEC has not yet been developed for encoded universal QC. The difficulty originates from the fact that EU constructions use only a subspace of the full system Hilbert space, and hence are subject to leakage errors to the orthogonal subspace. Standard QEC theory then breaks down under the restriction of using only a limited set of interactions, since these interactions are not universal over the orthogonal subspace, and cannot, using pre-established methods, be used to fix the leakage problem. 
Here we show for the first time how to extend the theory of fault tolerant QEC so as to encompass encoded universal QC. This establishes also the fault tolerance of a class of DFSs, for which prior fault tolerance results were of a heuristic nature [14]. _Encoded Universality._-- We first briefly review the concept of EU in the context of a particularly simple encoding of one logical qubit into the states of two neighboring physical qubits: \\(|0_{L}\\rangle_{i}=|0_{2i-1}\\rangle\\otimes|1_{2i}\\rangle\\), \\(|1_{L}\\rangle_{i}=|1_{2i-1}\\rangle\\otimes|0_{2i}\\rangle\\), where \\(|0\\rangle\\) (\\(|1\\rangle\\)) is the \\(+1\\) (\\(-1\\)) eigenstate of the Pauli matrix \\(\\sigma_{z}\\). We shall refer to this encoding as a \"two-qubit universal code\" (2QUC), and more generally to EU encodings involving \\(n\\) qubits per logical qubit as \"\\(n\\)QUC\". In Ref. [11] it was shown how to construct a universal set of encoded quantum logic gates for the 2QUC, generated from the widely applicable class of (effective or real) exchange Hamiltonian of the form \\(H_{\\rm ex}\\equiv\\sum_{i<j}H_{ij}\\), where \\[H_{ij}=\\sum_{i<j}J_{ij}(X_{i}X_{j}+Y_{i}Y_{j})+J_{ij}^{z}Z_{i}Z_{j}. \\tag{1}\\]Here \\(X_{i},Y_{i},Z_{i}\\) represent the Pauli matrices \\(\\sigma_{x},\\sigma_{y},\\sigma_{z}\\) acting on the \\(i\\)th physical qubit. The Heisenberg interaction is the case \\(J_{ij}=J_{ij}^{z}\\) (e.g., electron and nuclear spin qubits, [4], while the XXZ and XY models are, respectively, the cases \\(J_{ij}\ eq J_{ij}^{z}\ eq 0\\) (e.g., electrons on helium, [15]) and \\(J_{ij}\ eq 0,J_{ij}^{z}=0\\) (e.g., quantum dots in cavities, [16]). In essentially all pertinent QC proposals one can control the \\(J_{ij}\\) for \\(\\left|i-j\\right|\\lesssim 2\\), though not independently from \\(J_{ij}^{z}\\). As usual in the EU discussion we do not assume that the technically challenging single-qubit external operations of the form \\(\\sum f_{i}^{x}(t)X_{i}+f_{i}^{y}(t)Y_{i}\\) are available. We do assume that a (global) free Hamiltonian \\(H_{0}=\\sum_{i}\\frac{1}{2}\\,\\omega_{i}Z_{i}\\) with non-degenerate \\(\\omega_{i}\\)'s can be exploited for QC in the sense that the \\(\\omega_{i}\\) are collectively controllable, e.g., via the application of a global magnetic field. Note that \\(\\overline{X}_{2i-1,2i}\\) and \\(\\overline{Z}_{2i-1,2i}\\), where \\(\\overline{X}_{ij}\\equiv\\frac{1}{2}(X_{i}X_{i}+Y_{i}Y_{j})\\), \\(\\overline{Z}_{ij}\\equiv\\frac{1}{2}(Z_{i}-Z_{j})\\), generate an su(2) algebra on the \\(i\\)th 2QUC, while \\(\\overline{ZZ}_{i,i+1}\\equiv Z_{2i}Z_{2i+1}\\) generates a controlled-phase (CP) gate between the \\(i,i+1\\)th 2QUCs. Here bars denote logical operations on the 2QUC, so that, e.g., \\(\\left|0_{L}\\right\\rangle_{i}\\stackrel{{\\overline{X}_{2i-1,2i}}}{ {\\leftrightarrow}}\\left|1_{L}\\right\\rangle_{i}\\). Given only the ability to control the \\(J_{ij}\\), explicit encoded logic gates can be derived using the identity \\[C_{I_{k}}^{\\phi}\\circ\\exp(i\\theta I_{i}) \\equiv \\exp(-i\\phi I_{k})\\exp(i\\theta I_{i})\\exp(i\\phi I_{k}) \\tag{2}\\] \\[= \\exp[i\\theta(I_{i}\\cos\\phi+I_{j}\\sin\\phi)],\\] valid for su(2) generators satisfying the commutation relations \\([I_{i},I_{j}]=iI_{k}\\) (and cyclic permutations). 
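As a quick sanity check on the conjugation identity of Eq. (2), the sketch below verifies it numerically in the spin-1/2 representation \\(I_{a}=\\sigma_{a}/2\\), for which \\([I_{x},I_{y}]=iI_{z}\\) and cyclic permutations hold; the explicit gate sequences that follow rest on exactly this identity.

```python
# Numerical check of the conjugation identity, Eq. (2), in the spin-1/2
# representation I_a = sigma_a / 2, for which [I_x, I_y] = i I_z (cyclic).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Ix, Iy, Iz = sx / 2, sy / 2, sz / 2          # play the roles of I_i, I_j, I_k

rng = np.random.default_rng(0)
for _ in range(5):
    theta, phi = rng.uniform(0, 2 * np.pi, size=2)
    lhs = expm(-1j * phi * Iz) @ expm(1j * theta * Ix) @ expm(1j * phi * Iz)
    rhs = expm(1j * theta * (Ix * np.cos(phi) + Iy * np.sin(phi)))
    assert np.allclose(lhs, rhs), (theta, phi)
print("Eq. (2) verified for random angles.")
```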
E.g., an encoded CNOT gate over control (subscript \\(C\\), qubits \\(1,2\\)) and target (subscript \\(T\\), qubits \\(3,4\\)) 2QUCs can be constructed as follows for the XY model: \\(\\overline{CNOT}=\\overline{W}_{T}\\overline{CP}\\,\\overline{W}_{T}\\), where \\(\\overline{W}_{T}=e^{i\\frac{\\pi}{2}}e^{-i\\frac{\\pi}{4}\\overline{X}_{34}}e^{-i \\frac{\\pi}{4}\\overline{Z}_{34}}e^{-i\\frac{\\pi}{4}\\overline{X}_{34}}\\) is the encoded Hadamard gate, \\[\\overline{CP}=i\\{C_{\\overline{X}_{13}}^{\\pi/4}\\circ C_{\\overline{X}_{12}}^{ \\pi/2}\\circ e^{-i\\frac{\\pi}{2}\\overline{X}_{23}}\\}e^{-i\\frac{\\pi}{8}(Z_{1}-Z_ {2})}e^{-i\\frac{\\pi}{8}(Z_{3}-Z_{4})} \\tag{3}\\] is the encoded controlled-phase gate. For the Heisenberg and XXZ models one has \\[e^{-itJ_{2i,2i+1}^{x}\\overline{Z}_{i,i+1}}=e^{-itH_{2i,2i+1}}C_{\\overline{Z}_ {2i-1,2i}}^{\\pi}\\circ e^{-itH_{2i,2i+1}}, \\tag{4}\\] which is equivalent to the \\(\\overline{CP}\\) gate when \\(tJ_{2i,2i+1}^{z}=\\pi/4\\). Importantly, in all these cases universal encoded QC is possible via relaxed control assumptions, namely control of only the parameters \\(J_{i,i+1}\\) and a global magnetic field. _Hybrid 2QUC-Stabilizer codes._-- Our solution for fault tolerant EU involves a concatenation of 2QUC and the method of stabilizer codes of QEC theory [2]. We define a hybrid \\(n\\)QUC-Stabilizer code (henceforth, \"S-\\(n\\)QUC\") as the stabilizer code in which each physical qubit state \\(\\left|\\psi\\right\\rangle=\\alpha\\left|0\\right\\rangle+\\beta\\left|1\\right\\rangle\\) is replaced by the \\(n\\)QUC qubit state \\(\\left|\\psi_{U}\\right\\rangle=\\alpha\\left|0_{U}\\right\\rangle+\\beta\\left|1_{U}\\right\\rangle\\). With this replacement \\(X_{i}\\) on physical qubit \\(i\\) must be replaced by its encoded version \\(\\overline{X}_{i}\\), and similarly for \\(Y_{i}\\) and \\(Z_{i}\\). Thus, physical-level operations on the stabilizer code are replaced by encoded-level operations on the 2QUC. This replacement rule also applies to give the new stabilizer for the S-\\(n\\)QUC. For example, suppose we concatenate the 2QUC with the three-qubit phase-flip code \\(\\left|+\\right\\rangle^{\\otimes 3},\\left|-\\right\\rangle^{\\otimes 3}\\), where \\(\\left|\\pm\\right\\rangle=(\\left|0\\right\\rangle\\pm\\left|1\\right\\rangle)/\\sqrt{2}\\). The stabilizer of the latter is generated by \\(X_{1}X_{2},X_{2}X_{3}\\). Then the stabilizer for the hybrid S-2QUC \\(\\left|0_{H}\\right\\rangle=\\frac{1}{2\\sqrt{2}}(\\left|01\\right\\rangle+\\left|10 \\right\\rangle)^{\\otimes 3}\\), \\(\\left|1_{H}\\right\\rangle=\\frac{1}{2\\sqrt{2}}(\\left|01\\right\\rangle-\\left|10 \\right\\rangle)^{\\otimes 3}\\) is just \\(S=\\{\\overline{X}_{1}\\overline{X}_{2},\\overline{X}_{2}\\overline{X}_{3}\\}\\), with \\(\\overline{X}_{i}=X_{2i-1}X_{2i}\\). We will assume that it is possible to make measurements directly in the 2QUC basis. This involves, e.g., distinguishing a singlet \\((\\left|01\\right\\rangle-\\left|10\\right\\rangle)/\\sqrt{2}\\) from a triplet state \\((\\left|01\\right\\rangle+\\left|10\\right\\rangle)/\\sqrt{2}\\), or performing a non-demolition measurement of the first qubit in each 2QUC logical qubit; these tasks are currently under active investigation, e.g., [17]. In conjunction with the encoded universal gate set, it is then evidently possible to perform the entire repertoire of quantum operations needed to compute fault tolerantly on the 2QUC, using standard stabilizer-QEC methods [2]. 
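The three-qubit phase-flip example can be checked directly. The short sketch below (our own illustration; the tensor-ordering conventions are ours) constructs the six-physical-qubit states \\(|0_{H}\\rangle\\) and \\(|1_{H}\\rangle\\), verifies that both are \\(+1\\) eigenstates of the stabilizer generators \\(\\overline{X}_{1}\\overline{X}_{2}=X_{1}X_{2}X_{3}X_{4}\\) and \\(\\overline{X}_{2}\\overline{X}_{3}=X_{3}X_{4}X_{5}X_{6}\\), and confirms that a physical phase flip \\(Z_{1}\\) anticommutes with a generator and is therefore detectable.

```python
# Check the hybrid S-2QUC built from the three-qubit phase-flip code:
#   |0_H> = (|01>+|10>)^{x3}/(2*sqrt(2)),  |1_H> = (|01>-|10>)^{x3}/(2*sqrt(2)),
# with stabilizer generators X1X2X3X4 and X3X4X5X6 (encoded Xbar_i = X_{2i-1}X_{2i}).
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def op(single, positions, n=6):
    """Tensor product placing `single` on the listed qubits (1-indexed)."""
    return reduce(np.kron, [single if q in positions else I2 for q in range(1, n + 1)])

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus_L  = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)   # triplet
minus_L = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)   # singlet
zero_H = reduce(np.kron, [plus_L] * 3)
one_H  = reduce(np.kron, [minus_L] * 3)

g1 = op(X, {1, 2, 3, 4})       # Xbar_1 Xbar_2
g2 = op(X, {3, 4, 5, 6})       # Xbar_2 Xbar_3
for g in (g1, g2):
    assert np.allclose(g @ zero_H, zero_H) and np.allclose(g @ one_H, one_H)

Z1 = op(Z, {1})
assert np.allclose(Z1 @ g1, -g1 @ Z1)       # physical phase flip is detected
print("Hybrid S-2QUC stabilizer checks passed.")
```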
Note that because the stabilizer code is, in our case, built from 2QUC qubits, it is _a priori not_ designed to fix errors on the physical qubits. Thus, our next task is to consider these physical-level errors. _Physical phase flips_.-- Let \\(\\mathcal{C}\\) be a stabilizer code that can correct a single phase flip error, \\(Z_{i}\\), on any of the physical qubits. Therefore at least one of the generators of its stabilizer anticommutes with the error \\(Z_{i}\\). This implies that there is at least one stabilizer generator which includes the operator \\(X_{i}\\) or \\(Y_{i}\\). Consider the hybrid code \\(\\mathcal{C}^{\\prime}\\) resulting from concatenating \\(\\mathcal{C}\\) and an \\(n\\)QUC. The stabilizer of \\(\\mathcal{C}^{\\prime}\\) is found by replacing \\(X_{i},Y_{i}\\) or \\(Z_{i}\\) by \\(\\overline{X}_{i},\\overline{Y}_{i}\\) or \\(\\overline{Z}_{i}\\) respectively. Therefore at least one of the generators of the stabilizer of \\(\\mathcal{C}^{\\prime}\\) includes the operator \\(\\overline{X}_{i}\\) or \\(\\overline{Y}_{i}\\), for all \\(i\\). In the case of a 2QUC we have \\(\\overline{X}_{i}=X_{2i-1}X_{2i}\\) and \\(\\overline{Y}_{i}=Y_{2i-1}Y_{2i}\\), both of which anti-commute with \\(Z_{2i-1}\\) and \\(Z_{2i}\\). Moreover, one readily verifies that arbitrary products of error operators anti-commute with at least one stabilizer generator, or have trivial effect. Therefore the corrigibility condition of errors on stabilizer codes [1] are satisfied, and hence _a phase flip error on any physical qubit in a hybrid S-2QUC is always correctible._ _Physical bit flip_.-- In contrast to physical-level phase flips, bit flips, \\(\\{X_{2i-1},Y_{2i-1},X_{2i},Y_{2i}\\}\\), cause leakage from the 2QUC subspace via transitions to the orthogonal, \"leakage\" subspace spanned by \\(\\{\\left|0_{2i-1}0_{2i}\\right\\rangle,\\left|1_{2i-1}1_{2i}\\right\\rangle\\}\\). The generators of the encoded su(2) on a 2QUC qubit, \\(\\overline{X}_{2i-1,2i},\\overline{Z}_{2i-1,2i}\\), annihilate this subspace, and hence will fail to produce the desired effect if used to implement standard QEC operations. _Two-physical-qubit errors_.-- Lastly we need to consider the case of two physical-level errors affecting two qubits of the same 2QUC block (the case of two errors on two qubits in different 2QUC blocks is already covered by the considerations above). Listing all possible such errors we find that (i) \\(\\{XX=\\overline{X},XY=-\\overline{Y},YX=0\\}\\)\\(\\overline{Y},YY=\\overline{X},ZZ=-\\overline{I}\\)) act as single encoded-qubit errors, and thus are correctible by the stabilizer QEC, and (ii) \\(\\{XZ,YZ,ZX,ZY\\}\\) all act as leakage errors. We conclude that our task is to find a way to solve the leakage problem by using only the available interactions. We do this in two steps: first we construct a \"leakage correction unit\" (LCU) assuming perfect pulses, then we consider fault tolerance in the presence of imperfections in the LCU and computational operations. _Leakage correction unit_.-- We assume that we can reliably prepare a 2QUC ancilla qubit in the state \\(\\left|0_{L}\\right\\rangle\\). 
We now define an LCU as the unitary operator \\(L\\) whose action (up to phase) is: \\[L|0_{L}\\rangle|0_{L}\\rangle = |0_{L}\\rangle|0_{L}\\rangle\\quad L\\left|0_{1}0_{2}\\right\\rangle|0_ {L}\\rangle=|0_{L}\\rangle|0_{3}0_{4}\\rangle\\] \\[L|1_{L}\\rangle|0_{L}\\rangle = |1_{L}\\rangle|0_{L}\\rangle\\quad L\\left|1_{1}1_{2}\\right\\rangle|0_ {L}\\rangle=|0_{L}\\rangle|1_{3}1_{4}\\rangle \\tag{5}\\] Here the first (second) qubit is the data (ancilla) qubit, and the action of \\(L\\) on the remaining 12 basis states is completely arbitrary. The LCU thus _conditionally_ swaps a leaked data qubit with the ancilla, resetting the data qubit to \\(\\left|0_{L}\\right\\rangle\\); this corresponds to a logical error on the data qubit, which can be fixed by the stabilizer code. Note that \\(L\\) entangles the data and ancilla qubits, which means that we can determine with certainty if a leakage correction has occurred or not by measuring the state of ancilla. We next show how to construct the transformation \\(L\\) from the available interactions. We decompose \\(L\\) in general as follows: \\(L=\\sqrt{SWAP}\\times\\sqrt{SWAP^{\\prime}}\\), where \\[\\sqrt{SWAP} = \\exp[-i\\frac{\\pi}{4}(\\overline{X}_{13}+\\overline{X}_{24})] \\tag{6}\\] \\[\\sqrt{SWAP^{\\prime}} = \\exp[-i\\frac{\\pi}{4}(\\overline{X}_{13}Z_{2}Z_{4}+\\overline{X}_{2 4}Z_{1}Z_{3})] \\tag{7}\\] and \\(\\exp[-i\\frac{\\pi}{4}\\overline{X}_{ij}]\\) is just the square-root of swap gate between physical qubits \\(i\\) and \\(j\\). The gate \\(\\sqrt{SWAP}\\) applies this operation on qubits \\(1,3\\) and \\(2,4\\) in parallel. Depending on whether the eigenvalues of \\(Z_{2}Z_{4}\\) and \\(Z_{1}Z_{3}\\) are \\(+1\\) or \\(-1\\) on the four basis states of Eq. (5), the gates \\(\\sqrt{SWAP}\\) and \\(\\sqrt{SWAP^{\\prime}}\\) multiply constructively (destructively) to generate a full swap (identity). _Circuits for the LCU_.-- Eq. (7) involves four-body spin interactions. We next show how to construct these from available two-body interactions. For systems with XY-type of exchange interactions [16] the \\(\\sqrt{SWAP}\\) gate consumes a single pulse. A circuit for performing \\(\\sqrt{SWAP^{\\prime}}\\) is given in Fig. 1. For the class of Heisenberg systems [4], and for XXZ-type systems [15], we refocus the Ising term \\(J^{z}_{ij}Z_{i}Z_{j}\\), and use the following identity: \\[\\sqrt{SWAP^{\\prime}} = \\{C^{\\pi/4}_{Z_{2}Z_{3}}\\circ C^{\\pi/2}_{\\overline{X}_{12}}\\circ \\exp[-i\\pi\\overline{X}_{14}/4]\\}\\times \\tag{8}\\] \\[\\{C^{\\pi/4}_{Z_{1}Z_{4}}\\circ C^{\\pi/2}_{\\overline{X}_{12}}\\circ \\exp[-i\\pi\\overline{X}_{23}/4]\\}\\] To generate \\(\\overline{X}_{ij}\\) and \\(Z_{i}Z_{j}\\) we use [recall Eq. (1)] \\[e^{-itH_{ij}/2}C^{\\pi/2}_{Z_{i}}\\circ e^{\\pm itH_{ij}/2}=e^{-2itJ^{z}_{ij} \\overline{X}_{ij}}\\circ\\mbox{ or }e^{-itJ^{z}_{ij}Z_{i}Z_{j}}\\mbox{ }_{(\\cdot)} \\tag{9}\\] which is an example of recoupling [11]. The \\(Z_{i}\\)-pulses required for this can, in turn, be generated as follows: \\[e^{it\\sum_{i}\\frac{1}{2}\\omega_{i}Z_{i}}C^{\\pi/4}_{H_{ik}}\\circ e^{-it\\sum_{i }\\frac{1}{2}\\omega_{i}Z_{i}}=e^{\\frac{1}{2}it\\Delta_{ki}Z_{i}}e^{\\frac{1}{2}it \\Delta_{ik}Z_{i}} \\tag{10}\\] where \\(i,k\\in\\{1, ,6\\}\\) and \\(\\Delta_{ik}\\equiv\\omega_{i}-\\omega_{k}\\). By adjusting the time so that \\(t\\Delta_{ki}=\\pi\\) and inserting Eq. (10) into Eq. (9) we generate the pulses \\(\\exp[i\\pi Z_{i}/2]\\) required in the conjugation step of Eq. (9), since the action on qubit \\(k\\) cancels out. 
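The decomposition \\(L=\\sqrt{SWAP}\\times\\sqrt{SWAP^{\\prime}}\\) of Eqs. (6)-(7) can be verified directly on the four states of Eq. (5). The sketch below (an independent numerical check with our own matrix conventions; data qubits 1,2 and ancilla qubits 3,4) builds both exponentials and confirms that leaked data states are swapped into the ancilla while the logical states are left untouched, up to overall phases.

```python
# Numerical check that L = sqrt(SWAP) * sqrt(SWAP') of Eqs. (6)-(7) realizes
# the conditional leakage swap of Eq. (5), up to overall phases.
import numpy as np
from scipy.linalg import expm
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def chain(ops):                      # tensor product over qubits 1..4
    return reduce(np.kron, ops)

def two(op_a, a, op_b, b):           # op_a on qubit a, op_b on qubit b
    ops = [I2] * 4
    ops[a - 1], ops[b - 1] = op_a, op_b
    return chain(ops)

def Xbar(i, j):                      # encoded flip-flop (X_iX_j + Y_iY_j)/2
    return (two(X, i, X, j) + two(Y, i, Y, j)) / 2

sqrt_swap  = expm(-1j * np.pi / 4 * (Xbar(1, 3) + Xbar(2, 4)))
sqrt_swapP = expm(-1j * np.pi / 4 * (Xbar(1, 3) @ two(Z, 2, Z, 4)
                                     + Xbar(2, 4) @ two(Z, 1, Z, 3)))
L = sqrt_swap @ sqrt_swapP

def ket(bits):                       # |q1 q2 q3 q4>
    e = {0: np.array([1, 0], dtype=complex), 1: np.array([0, 1], dtype=complex)}
    return chain([e[b] for b in bits])

# (initial, expected) pairs from Eq. (5); data = qubits 1,2; ancilla = 3,4.
cases = [((0, 1, 0, 1), (0, 1, 0, 1)),   # |0_L>|0_L> -> |0_L>|0_L>
         ((1, 0, 0, 1), (1, 0, 0, 1)),   # |1_L>|0_L> -> |1_L>|0_L>
         ((0, 0, 0, 1), (0, 1, 0, 0)),   # leaked |00>: reset data, ancilla |00>
         ((1, 1, 0, 1), (0, 1, 1, 1))]   # leaked |11>: reset data, ancilla |11>
for inp, out in cases:
    overlap = abs(ket(out).conj() @ (L @ ket(inp)))
    assert np.isclose(overlap, 1.0), (inp, out, overlap)
print("LCU acts as Eq. (5) on all four basis states (up to phase).")
```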
Note that all spins not participating in the exchange interaction are unaffected by the procedure of Eq. (10). For all types of exchange interactions we have checked that the \\(\\sqrt{SWAP^{\\prime}}\\) can also be performed using only the two physical ancilla qubits \\(3,4\\), with the same number of physical pulses, by sacrificing to some degree the possibility of parallel operations within each LCU. In all cases the time required for realizing the LCU is, to within a factor of two, equal to that for performing a single \\(\\overline{CNOT}\\). We note that a non-unitary QEC leakage detection circuit was described in Ref. [2]. Unfortunately, this method is not in general applicable to \\(n\\)QUCs, since the required logic gates operate over the full system Hilbert space. _Constraints_ for unitary leakage-correcting operations, similar to our LCU, were derived in Ref. [10] for the 3QUC and Heisenberg-only computation, but no explicit circuit was given there. Figure 1: Circuit for the \\(\\sqrt{SWAP^{\\prime}}\\) operation in the XY model. Time flows from left to right. Data-physical qubits are numbered \\(1,2\\), while \\(3\\)-\\(6\\) are ancilla-physical qubits. An angle \\(\\phi\\) under an arrow connecting qubits \\(i,j\\) represents the pulse \\(\\exp[-i\\phi\\overline{X}_{ij}]\\). A total of \\(13\\) such pulses are required. The circles on the left represent a possible arrangement of qubits so that all are nearest neighbors throughout the pulse sequence. _Fault-tolerant computation on the S-2QUC_.-- So far we have assumed perfect gates. We now relax this assumption. Fault-tolerant computation is defined as a procedure in which the failure of any single circuit component leads to at most one error in each encoded block of qubits [1; 2]. For a specific component to be fault-tolerant, the probability of error per operation should be below a certain threshold [3]. _Transversal_ quantum operations, such as the normalizer elements CNOT, phase, and Hadamard (\\(W\\)), are those which can be implemented in pairwise fashion over physical qubits. This ensures that an error from an encoded block of qubits cannot spread into more than one physical qubit in another encoded block of qubits [1; 2]. Transversal operations become automatically fault-tolerant. In order to construct a universal fault-tolerant set of gates we should in addition be able to implement, e.g., a fault-tolerant encoded \\(\\pi/8\\) gate; although this gate is not transversal it can be realized by performing fault-tolerant measurements [1]. Let us denote by a double bar the encoded gates that act on the S-\\(n\\)QUC. It is easy to see that \\(\\overline{\\overline{CNOT}}\\) and \\(\\overline{\\overline{W}}\\) can be implemented transversally using EU operations as above. Moreover, by inspection of Ref. [1] it is easy to see that all operations needed to construct the \\(\\pi/8\\) gate, in particular fault tolerant measurements and cat state preparation, can be done in the 2QUC basis, without any modification, as long as one can measure directly in the 2QUC basis (as discussed above). Hence, with respect to _logical_ errors on the 2QUC qubits, the hybrid S-\\(n\\)QUC preserves all the required fault-tolerance properties. This leaves physical-level phase and bit flip errors during _encoded logic gates_. We already showed that phase flip errors act as logical errors that the stabilizer QEC can correct. 
Bit flip errors are more problematic: a _single_ leakage error invalidates the stabilizer code block in which it occurs, since the QEC procedures are ineffective in the leakage subspace. Hence, if such errors were to propagate during a logic operation such as \\(\\overline{\\overline{CNOT}}\\), they would - if left uncorrected - overwhelm the stabilizer level and result in catastrophic failure. We have verified that leakage errors propagate as either: (i) single physical-level leakage errors, remaining localized on the _same_ qubit, in the case of an error taking place _before_ or _between_ the unitary transformations that make up an encoded logic gate [18]; (ii) two-qubit leakage errors if a single-qubit leakage error happened _during_ the latter transformations. In any case, _the solution is to invoke the LCU after each logic operation, and before the QEC circuitry_. The LCU turns a leakage error into a logical error, after which multilevel concatenated QEC [1; 2] can correct these errors with arbitrary accuracy. However, uncontrolled leakage error propagation during QEC _syndrome measurements_ must be avoided by inserting LCUs on each 2QUC after the cat-state preparation and before the verification step. The final possibility we must contend with is leakage errors taking place during _the operation of the LCU itself_. Such a faulty LCU could incorrectly change the state of the ancilla qubit in Eq. (5). Therefore finding the ancilla in either \\(\\left|00\\right\\rangle\\) or \\(\\left|11\\right\\rangle\\) is an inconclusive result. Now let \\(p_{\\mathrm{s}}\\) be the probability of success of the LCU operation in one trial (this depends on accurate gating of the interaction Hamiltonian, etc.). Let \\(\\omega=\\mathrm{Tr}(\\rho_{\\mathrm{f}}\\left|0_{L}\\right\\rangle\\left\\langle 0_{L}\\right|)\\) be the probability of finding the ancilla-qubit in the final state \\(\\left|0_{L}\\right\\rangle\\), where \\(\\rho_{\\mathrm{f}}\\) represents the final entangled state of data-qubit and ancilla (\\(\\omega\\) critically depends on the quantum channel error model). The probability, \\(p_{\\mathrm{c}}\\), of achieving _conclusive_ and _correct_ information about the state of the data-qubit (being in the logical subspace) is \\(p_{\\mathrm{c}}=(\\omega\\wedge p_{\\mathrm{s}})/\\omega\\). This is the conditional probability of LCU success when we already know that the ancilla is in state \\(\\left|0_{L}\\right\\rangle\\). Then \\(1-p_{\\mathrm{c}}\\) is the probability of achieving a _conclusive_ but _wrong_ result. We can arbitrarily boost the success probability of the LCU+measurement, \\(1-(1-p_{\\mathrm{c}})^{n}\\), to be higher than some constant \\(c_{\\circ}\\), by repeating this procedure until we obtain \\(n\\geq\\log_{1-p_{\\mathrm{c}}}(1-c_{\\circ})\\) consecutive no-leakage events.
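As a small worked example of this repetition bound (the numerical values of \\(p_{\\mathrm{c}}\\) and \\(c_{\\circ}\\) below are purely illustrative and not taken from the text):

```python
import math

def lcu_repetitions(p_c, c_target):
    """Smallest n with 1 - (1 - p_c)**n >= c_target, i.e. n >= log_{1-p_c}(1 - c_target)."""
    return math.ceil(math.log(1.0 - c_target) / math.log(1.0 - p_c))

# Hypothetical per-round conclusive-and-correct probabilities p_c, target confidence 99.99%
for p_c in (0.70, 0.85, 0.95):
    print(p_c, lcu_repetitions(p_c, 0.9999))
```

For instance, with \\(p_{\\mathrm{c}}=0.85\\), about five consecutive no-leakage outcomes already push the overall confidence \\(1-(1-p_{\\mathrm{c}})^{n}\\) above 99.99%.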
These results confirm the viability of the EU paradigm, with its associated advantages of reduced quantum control constraints and improved experimental compatibility to the interactions that are naturally available in a given quantum system. Moreover, by constructing error correction operations from a Hamiltonian formulation, rather than from gates as the elementary building blocks, a more accurate and reliable calculation of the fault-tolerance threshold is possible than in previous approaches. This will be undertaken in a future publication. Support from OGSST and NSERC (to M.M.), the DARPA-QuIST program (managed by AFOSR under agreement No. F49620-01-1-0468) and the Sloan foundation (to D.A.L.), is gratefully acknowledged. We thank K. Khodjasteh and A. Shabani for useful discussions. ## References * (1) M.A. Nielsen, I.L. Chuang, _Quantum Computation and Quantum Information_ (Cambridge University Press, Cambridge, UK, 2000). * (2) D. Gottesman, eprint quant-ph/9705052; J. Preskill, in _Introduction to Quantum Computation and Information_ (World Scientific, Singapore, 1999), edited by H.K. Lo, S. Popescu and T.P. Spiller. * (3) E. Knill, R. Laflamme, W. Zurek, Science **279**, 342 (1998); A.M. Steane, Phys. Rev. A **68**, 042322 (2003). * (4) D. Loss, D.P. DiVincenzo, Phys. Rev. A **57**, 120 (1998); B.E. Kane, Nature **393**, 133 (1998); R. Vrijen _et. al_,Phys. Rev. A **62**, 012306 (2000). * (5) D.A. Lidar, J.H. Thywissen, J. Appl. Phys. **96**, 754 (2004). * (6) E. Yablonovitch _et. al_, Proc. of the IEEE **91**, 761 (2003). * (7) P. Zanardi, S. Lloyd, Phys. Rev. A **69**, 022313 (2004). * (8) D. Bacon _et. al_, Phys. Rev. Lett. **85**, 1758 (2000); J. Kempe _et. al_, Phys. Rev. A **63**, 042307 (2001). * (9) D.P. DiVincenzo _et. al_, Nature **408**, 339 (2000); J. Vala, K.B. Whaley, Phys. Rev. A **66**, 022304 (2002). * (10) J. Kempe _et. al_, Quant. Inf. Comp. **1**, 33 (2001). * (11) D.A. Lidar, L.-A. Wu, Phys. Rev. Lett. **88**, 017905 (2002). * (12) J. Levy, Phys. Rev. Lett. **89**, 147902 (2002). * (13) L.-A. Wu, M.S. Byrd, D.A. Lidar, Phys. Rev. Lett. **89**, 127901 (2002); L. Viola, Phys. Rev. A **66**, 012307 (2002); D.A. Lidar, L.-A Wu, Phys. Rev. A **67**, 032313 (2003); Y. Zhang _et. al_, Phys. Rev. A **69**, 042315 (2004). * (14) D.A. Lidar, D. Bacon, K.B. Whaley, Phys. Rev. Lett. **82**, 4556 (1999). * (15) P.M. Platzman, M.I. Dykman, Science **284**, 1967 (1999). * (16) A. Imamoglu _et. al_, Phys. Rev. Lett. **83**, 4204 (1999). * (17) M. Friesen _et. al_, Phys. Rev. Lett. **92**, 037901 (2004). * (18) To see this consider a generic unitary transformation \\(G_{ij}\\in\\{H_{ij}=J_{ij}(X_{i}X_{j}+Y_{i}Y_{j})+J_{ij}^{z}Z_{i}Z_{j},\\overline{ Z}_{ij}=(Z_{i}-Z_{j})/2\\}\\), and a single qubit errors \\(E_{i}\\in\\{X_{i},Z_{i}\\}\\). Then, using \\(U\\exp(A)U^{\\dagger}=\\exp(UAU^{\\dagger})\\) for unitary \\(U\\) we can commute \\(E_{i}\\) to the left while flipping signs in \\(G_{ij}\\) appropriately [e.g., \\(H_{ij}X_{i}=X_{i}\\{J_{ij}(X_{i}X_{j}-Y_{i}Y_{j})-J_{ij}^{z}Z_{i}Z_{j}\\}\\)]. The transformations with flipped sign combine to give a faulty logic gate on the 2QUC qubits, which is followed by the same \\(E_{i}\\) error.
Quantum computation can be performed by encoding logical qubits into the states of two or more physical qubits, and controlling a single effective exchange interaction and possibly a global magnetic field. This \"encoded universality\" paradigm offers potential simplifications in quantum computer design since it does away with the need to perform single-qubit rotations. Here we show that encoded universality schemes can be combined with quantum error correction. In particular, we show explicitly how to perform fault-tolerant leakage correction, thus overcoming the main obstacle to fault-tolerant encoded universality.
# Coherence properties of the two-dimensional Bose-Einstein condensate Christopher Gies [email protected] D. A. W. Hutchinson [email protected] Department of Physics, University of Otago, P.O. Box 56, Dunedin, New Zealand ## I Introduction Bose-Einstein condensation (BEC) in (quasi-) two-dimensional systems has only recently been obtained in the laboratory [1; 2]. Thus, many properties have yet to be explored both experimentally and theoretically. We present an investigation of an isotropic two-dimensional BEC with the aim of providing detailed predictions for comparison with future experiments. The manner in which dimensionality can fundamentally alter the physics of a system is clearly apparent in the Mermin-Wagner-Hohenberg theorem, which forbids a spontaneously broken symmetry with long range order in a homogeneous two-dimensional system [3; 4; 5]. In terms of the coherence function \\(G^{(1)}(\\mathbf{x},\\mathbf{x}^{\\prime})\\), this means that \\(\\lim_{|\\mathbf{x}-\\mathbf{x}^{\\prime}|\\rightarrow\\infty}G^{(1)}(\\mathbf{x}, \\mathbf{x}^{\\prime})\ eq 0\\), which can be seen as the definition of BEC [6; 7], is impossible for \\(T>0\\) in a uniform two-dimensional system. Thus, in a two-dimensional Bose gas BEC cannot occur at finite temperatures. Phase fluctuations make the formation of a globally coherent phase impossible. Despite this, a different transition of the Kosterlitz-Thouless (KT) type [8; 9; 10] to a state with an analytical decay in the coherence function is possible in the ideal system. With confinement in a harmonic trap, the modified density of states allows the 2D system to Bose condense. Nevertheless, below the critical temperature there is a large phase fluctuating regime in which the superfluid is best described as a quasicondensate [11]. Unlike a true BEC, phase coherence only extends over regions of a size smaller than the extent of the condensate, characterized by the coherence length. This regime has been referred to as the KT phase, although the physical state of the interacting system in this phase fluctuating regime has yet to be thoroughly investigated. Phase fluctuations can enter the uniform gas in the form of vortex/antivortex pairs, or topological charges, which unbind at the point of the KT transition. Thus, the phase fluctuating state may well be a regular lattice of pairs of opposite topological charges in the sense of the KT phase, but this is not the only possibility and further investigation is required. In a previous publication [12] we have discussed how the semi-classical approximation fails to describe BEC consistently in two dimensions and have shown results to prove that these problems can be removed by applying the more complex Hartree-Fock-Bogoliubov (HFB) formalism. The aim of the present publication is to present a detailed and more complete discussion of the properties of a two-dimensional BEC as is possible within the HFB-Popov approach. Our emphasis lies on the coherence properties which are crucial for the question of whether the superfluid state is best described as a BEC or as a quasicondensate. In the following section we outline the HFB formalism and explain our methods of obtaining solutions. Then, in Section III, we present our results, such as the density profile of the condensate and non-condensate, the excitation spectrum and the coherence function. In Section III.3 we present the momentum profile and coherence length of a phase fluctuating condensate, indicating how these could be measured in forthcoming experiments. 
Our work is concluded in Section IV. ## II Formalism ### Mean-field theory and HFB-Popov equations The time-independent, second quantized form of the grand-canonical many-body Hamilton operator for our system is given by \\[\\begin{split}\\hat{H}=&\\int\\!\\mathrm{d}^{2}r\\;\\hat{\\psi}^{\\dagger}(\\mathbf{r})\\;\\left(\\hat{h}(\\mathbf{r})-\\mu\\right)\\,\\hat{\\psi}(\\mathbf{r})\\\\ &+\\frac{g}{2}\\,\\int\\!\\mathrm{d}^{2}r\\,\\hat{\\psi}^{\\dagger}(\\mathbf{r})\\hat{\\psi}^{\\dagger}(\\mathbf{r})\\,\\hat{\\psi}(\\mathbf{r})\\hat{\\psi}(\\mathbf{r})\\;.\\end{split} \\tag{1}\\]Here, \\(\\hat{h}({\\bf r})=-\\frac{\\hbar^{2}}{2m}\\,\\Delta+U_{\\rm trap}({\\bf r})\\) is the single particle Hamiltonian with the external potential \\(U_{\\rm trap}\\) of the atom trap, and \\(g\\) is the coupling parameter that characterizes interparticle scattering. For collision processes, we assume a hard sphere potential within the usual pseudo-potential approximation [13], i. e. \\(V({\\bf r}-{\\bf r}^{\\prime})=g\\,\\delta^{(2)}({\\bf r}-{\\bf r}^{\\prime})\\). For a dilute gas this is a good approximation, however care must be taken in determining the coupling constant \\(g\\). Usually it is derived from an approximation to the two-body T-matrix in the zero-energy and zero-momentum limit, as appropriate for scattering processes in an ultra-cold system. In three dimensions, the two-body T-matrix for a dilute gas is well described within the s-wave approximation, \\(g=4\\pi\\hbar^{2}a_{\\rm 3D}/m\\), where \\(a_{\\rm 3D}\\) is the s-wave scattering length. In two dimensions, however, the two-body T-matrix vanishes at zero energy [14]. Therefore, many-body effects introduced by the surrounding medium must be taken into account when studying two-dimensional gases. For a trapped gas, this leads to a spatially dependent coupling parameter \\(g({\\bf r})\\). Furthermore, the exact form of the coupling strength depends on the tightness of the confinement in the axial direction. With the parameters from [1], using the terminology of [14], we consider this system to be in the quasi-2D regime. Therefore, for the calculations undertaken in this work, we use the following approximation to the many-body T-matrix at zero temperature for the coupling parameter [14]: \\[g({\\bf r})=-\\frac{4\\,\\pi\\hbar^{2}}{m}\\,\\frac{1}{\\ln\\left(n_{c}({\\bf r})\\,g({\\bf r})\\,ma_{\\rm 2D}^{2}/4\\hbar^{2}\\right)}. \\tag{2}\\] The scattering length \\(a_{\\rm 2D}\\) in the quasi-2D regime is given by \\(a_{\\rm 2D}=4\\,\\sqrt{\\pi/B}\\,l_{z}e^{-\\sqrt{\\pi}\\,l_{z}/a_{\\rm 3D}}\\), \\(B\\approx 0.915\\). This result was first obtained by Petrov _et al._[15; 16] by considering the 2D scattering problem. We will present a detailed study of interactions in the 2D Bose condensed system elsewhere [17]. We decompose the Bose field operators, in the standard fashion [18; 19], into classical and fluctuation parts, \\(\\hat{\\psi}({\\bf r})\\simeq\\langle\\hat{\\psi}({\\bf r})\\rangle+\\delta\\hat{\\psi}({\\bf r})=\\Psi_{0}({\\bf r})+\\delta\\hat{\\psi}({\\bf r})\\), where the condensate wave function \\(\\Psi_{0}({\\bf r})\\) is normalized to the number of particles in the ground state, i. e. \\(\\int\\!{\\rm d}^{2}r\\,|\\Psi_{0}({\\bf r})|^{2}=N_{0}\\).
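Since Eq. (2) only defines \\(g({\\bf r})\\) implicitly (it appears inside its own logarithm), it has to be solved point by point, e.g. by fixed-point iteration. A minimal sketch, in oscillator units with \\(\\hbar=m=1\\) and with purely illustrative values for the local condensate density and for \\(a_{\\rm 2D}\\):

```python
import numpy as np

def coupling_2d(n_c, a_2d, g0=1.0, tol=1e-12, max_iter=200):
    """Solve g = -4*pi / ln(n_c * g * a_2d**2 / 4) by fixed-point iteration
    (oscillator units, hbar = m = 1); n_c and a_2d are illustrative inputs."""
    g = g0
    for _ in range(max_iter):
        g_new = -4.0 * np.pi / np.log(n_c * g * a_2d**2 / 4.0)
        if abs(g_new - g) < tol:
            return g_new
        g = g_new
    return g

# Example: local condensate density 10 per a0^2 and a_2d = 1e-3 a0 (hypothetical values)
print(coupling_2d(n_c=10.0, a_2d=1e-3))
```

For a trapped cloud this would be evaluated on every grid point using the local \\(n_{c}(r)\\) from the previous step of the self-consistent cycle; for the very small logarithm arguments typical of the quasi-2D regime the iteration settles in a few steps, but this simple scheme is only a sketch and is not guaranteed to converge for arbitrary parameters.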
The Hamiltonian (1) can then be diagonalized by a unitary transformation to the quasiparticle operators \\(\\hat{\\alpha}_{i}\\), \\(\\hat{\\alpha}_{i}^{\\dagger}\\), \\(\\delta\\hat{\\psi}({\\bf r})=\\sum_{i}\\left(\\hat{\\alpha}_{i}\\,u_{i}({\\bf r})-\\hat{\\alpha}_{i}^{\\dagger}\\,v_{i}^{*}({\\bf r})\\right)\\), yielding the HFB-Hamiltonian \\[\\begin{split}\\hat{H}_{\\rm HFB}=&\\int\\!{\\rm d}^{2}r\\,\\Psi_{0}({\\bf r})\\,\\left(\\hat{h}({\\bf r})-\\mu+\\frac{1}{2}\\,g({\\bf r})\\,n_{c}({\\bf r})\\right)\\,\\Psi_{0}({\\bf r})\\\\ &+\\sum_{i}E_{i}\\,\\hat{\\alpha}_{i}^{\\dagger}\\hat{\\alpha}_{i}-C\\end{split} \\tag{3}\\] where \\(\\hat{\\cal L}=\\hat{h}({\\bf r})-\\mu+2\\,g({\\bf r})\\,n({\\bf r})\\) (which appears in the BdG equations below) and \\(n_{c}({\\bf r})\\), \\(\\tilde{n}({\\bf r})\\) and \\(n({\\bf r})=n_{c}({\\bf r})+\\tilde{n}({\\bf r})\\) are the condensate, non-condensate and total densities, respectively. The functions \\(u_{i}\\), \\(v_{i}\\) are referred to as quasiparticle amplitudes, and \\(E_{i}\\) are the quasiparticle energies. The first term in (3) is the condensate part and merely a \\(c\\)-number. The second term is the Hamiltonian for non-interacting quasiparticles and is formally equivalent to the case of the harmonic oscillator. The constant energy shift \\(C\\) arises from the Bogoliubov transformation [18] and from terms left over from the quartic operator averages of the fluctuation operators which are factorized in a fashion analogous to Wick's theorem [20; §4.2]. However, this energy shift has no impact on the solution of the HFB equations. The form (3) of the Hamiltonian requires that the order parameter obeys the generalized Gross-Pitaevskii equation (GPE) \\[\\left(\\hat{h}({\\bf r})-\\mu\\right)\\Psi_{0}({\\bf r})+g({\\bf r})\\left(n_{c}({\\bf r})+2\\,\\tilde{n}({\\bf r})\\,\\right)\\Psi_{0}({\\bf r})=0 \\tag{4}\\] and that the quasiparticle amplitudes and energies obey the coupled Bogoliubov-de Gennes (BdG) equations \\[\\begin{split}\\hat{\\cal L}\\,u_{i}({\\bf r})-g({\\bf r})\\,\\Psi_{0}({\\bf r})^{2}\\,v_{i}({\\bf r})&=E_{i}\\,u_{i}({\\bf r})\\\\ \\hat{\\cal L}\\,v_{i}({\\bf r})-g({\\bf r})\\,\\Psi_{0}^{*}({\\bf r})^{2}\\,u_{i}({\\bf r})&=-E_{i}\\,v_{i}({\\bf r})\\,\\end{split} \\tag{5}\\] so as to eliminate off-diagonal terms in the quasiparticle field operators. The BdG equations determine the elementary excitation modes of the condensate. We refer to (4), together with (5), as the HFB equations. Note that we have taken the Popov approximation by neglecting the anomalous average of the fluctuation operator, \\(\\tilde{m}({\\bf r})=\\langle\\delta\\hat{\\psi}({\\bf r})\\delta\\hat{\\psi}({\\bf r})\\rangle\\), thereby avoiding divergence problems of this quantity and the occurrence of a gap in the excitation spectrum [18; 19]. Once the BdG equations are solved, the non-condensate density \\(\\tilde{n}=\\langle\\delta\\hat{\\psi}^{\\dagger}({\\bf r})\\delta\\hat{\\psi}({\\bf r})\\rangle\\) can be obtained by populating the quasiparticle states, \\[\\tilde{n}({\\bf r})=\\sum_{i}\\left[f_{\\rm B}(E_{i})\\left(|u_{i}({\\bf r})|^{2}+|v_{i}({\\bf r})|^{2}\\right)+|v_{i}({\\bf r})|^{2}\\right]\\, \\tag{6}\\] where the quasiparticle distribution function with the inverse temperature \\(\\beta\\) is given by \\[f_{\\rm B}(E_{i})=\\langle\\hat{\\alpha}_{i}^{\\dagger}\\hat{\\alpha}_{i}\\rangle=\\frac{1}{z^{-1}e^{\\beta E_{i}}-1}.
\\tag{7}\\] Here, the fugacity \\(z\\) is determined by the difference between the chemical potential \\(\\mu\\) and the condensate eigenvalue \\(\\lambda\\), \\(z=e^{\\beta(\\mu-\\lambda)}\\), since the quasiparticle energies are measured relative to the condensate [20]. To a good approximation, we can use the result for the non-interacting gas, i. e. \\[z^{-1}=1+\\frac{1}{N_{0}}. \\tag{8}\\] The system we consider has a finite number of atoms. The fugacity fulfills the practical purpose of preventing the number of thermal atoms from exceeding the total atom number and, hence, the condensate density from becoming negative in our numerical calculations. In order to study coherence properties, we calculate the normalized first order correlation, or coherence function, which can be written in terms of the field operators as [21] \\[g^{(1)}(\\mathbf{r},\\mathbf{r}^{\\prime})=\\frac{\\langle\\hat{\\psi}^{\\dagger}( \\mathbf{r})\\hat{\\psi}(\\mathbf{r}^{\\prime})\\rangle}{\\sqrt{\\langle\\hat{\\psi}^{ \\dagger}(\\mathbf{r})\\hat{\\psi}(\\mathbf{r})\\rangle\\langle\\hat{\\psi}^{\\dagger}( \\mathbf{r}^{\\prime})\\hat{\\psi}(\\mathbf{r}^{\\prime})\\rangle}}. \\tag{9}\\] Given the decomposition of the field operator, the coherence function can be expressed in terms of the off-diagonal densities \\[n_{c}(\\mathbf{r},\\mathbf{r}^{\\prime}) =\\Psi_{0}^{*}(\\mathbf{r})\\Psi_{0}(\\mathbf{r}^{\\prime}) \\tag{10}\\] \\[\\tilde{n}(\\mathbf{r},\\mathbf{r}^{\\prime}) =\\langle\\delta\\hat{\\psi}^{\\dagger}(\\mathbf{r})\\delta\\hat{\\psi}( \\mathbf{r}^{\\prime})\\rangle. \\tag{11}\\] The latter can be calculated from the off-diagonal version of (6). Using the above for the correlation function, (9) gives \\[g^{(1)}(\\mathbf{r},\\mathbf{r}^{\\prime})=\\frac{n_{c}(\\mathbf{r},\\mathbf{r}^{ \\prime})+\\tilde{n}(\\mathbf{r},\\mathbf{r}^{\\prime})}{\\sqrt{n(\\mathbf{r})\\,n( \\mathbf{r}^{\\prime})}}. \\tag{12}\\] The correlation function is related to the momentum spectrum of the condensate by a simple Fourier transformation, i. e. \\[n(\\mathbf{k})=\\langle\\hat{\\phi}^{\\dagger}(\\mathbf{k})\\hat{\\phi}(\\mathbf{k}) \\rangle=\\int\\!\\mathrm{d}^{2}r\\,\\mathrm{d}^{2}r^{\\prime}\\ e^{i\\mathbf{k}\\cdot( \\mathbf{r}-\\mathbf{r}^{\\prime})}\\,\\langle\\hat{\\psi}^{\\dagger}(\\mathbf{r})\\hat {\\psi}(\\mathbf{r}^{\\prime})\\rangle\\, \\tag{13}\\] with \\(\\hat{\\phi}(\\mathbf{k})\\) and \\(\\hat{\\phi}^{\\dagger}(\\mathbf{k})\\) being the field operators in momentum space. This implies that coherence properties can be directly measured in an experiment, as has been done in [22] for the quasi-one-dimensional case by means of Bragg spectroscopy. In a Bragg experiment, the propagation speed of the light field is determined by the detuning of the crossed laser beams [23]. Therefore, the spectral response of the condensate is measured as a function of the detuning. To establish the relationship with the momentum distribution, we use the relation between the detuning \\(\\delta\\) and the momentum within the condensate plane \\(p_{\\perp}\\) for a \\(n\\)-photon process, \\[\\delta=\\frac{n\\,k_{L}p_{\\perp}}{2\\pi\\,m}\\, \\tag{14}\\] where \\(k_{L}=2\\pi/\\lambda,\\,\\lambda\\) is the wavelength of the light field (\\(780.02\\,\\mathrm{nm}\\) for Rubidium [22], \\(589\\,\\mathrm{nm}\\) for Sodium [24]), and \\(m\\) the mass of the atoms. ### Numerical Methods We discuss some aspects important to the solution of the finite temperature HFB equations. 
The trapping frequency in the axial direction is sufficiently large so that the dynamics in this dimension are frozen out (\\(\\hbar\\omega_{z}>k_{\\mathrm{B}}T\\)). In the radial plane, we consider an isotropic trapping potential with the radial frequency \\(\\omega_{\\perp}\\), \\(U_{\\mathrm{trap}}=m\\omega_{\\perp}^{2}r^{2}/2\\). Thus, our system is cylindrically symmetric and we can effectively treat the problem as one-dimensional upon changing to cylindrical coordinates. We scale all equations to computational units, i. e. lengths by the oscillator length, \\(a_{0}=\\sqrt{\\hbar/m\\omega_{\\perp}}\\), and energies by the Rydberg of energy, \\(E_{0}=\\hbar\\omega_{\\perp}/2\\). The calculation follows a self-consistent, iterative scheme, as proposed in [18]. First, the GPE is solved with the non-condensate density set to zero. Taking this calculated condensate density, the BdG equations are solved to obtain the quasiparticle modes. These are then populated through the quasiparticle distribution function (7), with the sum of all the excited particles yielding the thermal density. With the non-condensate density now known, we go back and solve the generalized GPE and the whole process is repeated until convergence. To begin, we expand the order parameter in a set of basis states. A convenient basis for this problem is given by the eigenstates of the 2D harmonic oscillator, since the single-particle Hamiltonian is diagonal in this basis. To take the cylindrical symmetry into account, we write the eigenfunctions of the oscillator problem \\(\\hat{h}_{\\mathrm{osc}}=-\\Delta+r^{2}\\) in terms of the Laguerre polynomials \\(L_{n}^{m}\\), \\[\\chi_{n,m}(r,\\varphi)=\\frac{1}{\\sqrt{\\pi\\,\\Gamma(1+m)\\,{n+m\\choose n}}}\\ r^{m}e^{-\\frac{r^{2}}{2}+im\\varphi}\\,L_{n}^{m}(r^{2})\\, \\tag{15}\\] with the eigenenergies \\(E_{n,m}=2(2n+m+1)\\). The quantum number \\(m\\) defines the angular momentum. Since the condensate ground state has zero angular momentum, for the solution of the GPE merely the \\(m=0\\) subspace must be considered. In order to numerically solve the GPE, we use an optimization routine with a Thomas-Fermi profile as the initial guess. The solution of the BdG equations follows the method described in [19]. In a first step, the BdG equations (5) are decoupled by a transformation to the auxiliary functions \\(\\psi_{i}^{(\\pm)}(r)=u_{i}(r)\\pm v_{i}(r)\\). Omitting spatial dependencies, this leads to \\[\\begin{split}&\\left(\\hat{h}_{\\mathrm{GP}}-\\mu\\right)^{2}\\psi_{i}^{(+)}+2gn_{c}\\big{(}\\hat{h}_{\\mathrm{GP}}-\\mu\\big{)}\\,\\psi_{i}^{(+)}&=E_{i}^{2}\\,\\psi_{i}^{(+)}\\\\ &\\left(\\hat{h}_{\\mathrm{GP}}-\\mu\\right)^{2}\\psi_{i}^{(-)}+2g\\big{(}\\hat{h}_{\\mathrm{GP}}-\\mu\\big{)}n_{c}\\,\\psi_{i}^{(-)}&=E_{i}^{2}\\,\\psi_{i}^{(-)}\\,\\end{split} \\tag{16}\\] where \\(\\hat{h}_{\\mathrm{GP}}\\equiv\\hat{h}+g\\left(n_{c}+2\\tilde{n}\\right)\\) is the Hamiltonian in the generalized GPE (4). The auxiliary functions \\(\\psi_{i}^{(\\pm)}(r)\\) are then expanded in the basis set in which \\(\\hat{h}_{\\mathrm{GP}}\\) is diagonal. This basis we term the Hartree-Fock (HF) basis and it is obtained by the full solution of the generalized GPE, i. e. \\[\\left(\\hat{h}_{\\mathrm{GP}}-\\mu\\right)\\phi_{\\alpha}(r)=\\varepsilon_{\\alpha}\\,\\phi_{\\alpha}(r)\\, \\tag{17}\\] where \\(\\{\\phi_{\\alpha}(r)\\}\\) is the HF basis.
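For reference, the oscillator basis functions of Eq. (15) are easy to tabulate and check numerically. A minimal sketch (radial part only, \\(m\\geq 0\\), computational units as above):

```python
import numpy as np
from math import comb, gamma, pi
from scipy.special import eval_genlaguerre
from scipy.integrate import quad

def chi_radial(n, m, r):
    """Radial part of the 2D oscillator eigenfunction chi_{n,m} of Eq. (15)."""
    norm = 1.0 / np.sqrt(pi * gamma(1 + m) * comb(n + m, n))
    return norm * r**m * np.exp(-r**2 / 2.0) * eval_genlaguerre(n, m, r**2)

# Normalization check: 2*pi * int_0^inf |chi|^2 r dr = 1, with E_{n,m} = 2(2n+m+1)
for n, m in [(0, 0), (1, 0), (2, 3)]:
    val, _ = quad(lambda r: 2.0 * pi * chi_radial(n, m, r)**2 * r, 0.0, np.inf)
    print(f"n={n}, m={m}: norm={val:.8f}, E={2 * (2 * n + m + 1)}")
```

The basis actually used to expand the auxiliary functions is, however, the Hartree-Fock basis defined by Eq. (17).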
The primary advantage of this basis is that all excited states are by definition orthogonal to the condensate, after the lowest momentum state, which is the condensate state itself, has been removed from the basis set. Both the calculation of the HF basis, as well as the solution of the decoupled BdG equations are only linear problems, since the condensate density is given from the solution of the GPE, and can be solved in a straightforward manner. The eigenvalue problem corresponding to the BdG equations is block-diagonal with no overlap between the subspaces of different angular momentum, so that the solution to (16) can be obtained separately in each subspace. The thermal density then follows from (6) by summing up the contributions from all angular momentum subspaces. Naturally, the number of basis states used in the discrete, quantum mechanical calculation is limited by an upper energy cutoff, \\(\\epsilon_{\\rm cut}\\), which must be introduced consistently in all angular momentum subspaces. To account for the contributions above the energy cutoff, we use the semi-classical approximation [25; 26], so that \\[\\tilde{n}(r)=\\sum_{i}\\tilde{n}_{i}^{\\rm qm}(r)\\times\\Theta( \\epsilon_{\\rm cut}-E_{i})+\\int_{\\epsilon_{\\rm cut}}^{\\infty}\\!\\!\\mathrm{d}E\\; \\tilde{n}^{\\rm sc}(E,r). \\tag{18}\\] The contribution \\(\\tilde{n}_{i}^{\\rm qm}(r)\\) below the cutoff is given by the addent in (6), and above the cutoff by the semi-classical equation, with the Heaviside function \\(\\Theta\\), and again omitting spatial dependencies, \\[\\tilde{n}^{\\rm sc} =\\frac{m}{2\\pi\\hbar^{2}}\\left\\{f_{\\rm B}(E)+\\frac{1}{2}\\,-\\frac{ E}{2\\,\\sqrt{E^{2}+(gn_{c})^{2}}}\\right.\\] \\[\\left.\\times\\,\\Theta\\left(E-\\sqrt{\\left(U_{\\rm trap}-\\mu+2gn \\right)^{2}-\\left(gn_{c}\\right)^{2}}\\,\\right)\\right\\}. \\tag{19}\\] ## III Results We will now present the results of our numerical calculation. We consider a sample of 2000 sodium atoms that are trapped in a harmonic potential with the parameters of the experiment by Gorlitz _et al._[1]. The radial trapping frequency is \\(\\omega_{\\perp}=2\\pi\\times 790\\,\\mathrm{Hz}\\). Unless otherwise stated, all quantities are expressed in dimensionless form. ### Thermal density and condensate population #### iii.1.1 Density profiles Figure 1 shows the thermal density at different temperatures. The temperature dependent term in (6) leads to the formation of the characteristic off-center peak of the non-condensate density. It is located at the edge of the condensate due to the repulsion of the thermal atoms by the condensate. This is depicted in Figure 2, where the two densities \\(n_{c}\\) and \\(\\tilde{n}\\) are plotted together. In comparison to the rapidly decaying condensate density, the thermal density has a long tail. Thus, the condensate is relatively dense with a sharp peak within the diffuse thermal cloud. The tail of the thermal cloud becomes longer as the temperature increases, while the condensate radius does not change significantly even if the number of condensate atoms drops by an order of magnitude. Note that the long tail contains a large number of atoms despite its low density because the spatial integral is weighted by a factor of \\(r\\) (\\(r^{2}\\) in three dimensions). The slight change in the size of the condensate can also be seen in the shift of the non-condensate peak towards the trap center with increasing temperature, remaining located at the edge of the condensate. 
Note that even at zero temperature there is a small fraction of excited atoms due to the temperature-independent quantum depletion term in (6). From the existence of a well defined condensate and non-condensate density we can already infer coherence information. Following the argument in [12; 16], a measure of phase fluctuations is given by \\(\\langle\\hat{\\delta}^{2}\\rangle\\approx\\tilde{n}/n_{c}\\), where \\(\\hat{\\delta}\\) is the phase fluctuation operator in the alternative decomposition \\(\\hat{\\psi}(r)\\simeq\\sqrt{\\hat{n}(r)}\\,e^{i\\hat{\\delta}(r)}\\). Phase fluctuations become important when \\(\\langle\\hat{\\delta}^{2}\\rangle\\gtrsim 1\\). Thus, as long as \\(n_{c}>\\tilde{n}\\), these fluctuations are suppressed in the system. Figure 1: Non-condensate density at \\(T/T_{c}=0\\), \\(0.1\\), \\(0.25\\), \\(0.5\\) and \\(0.75\\) (from bottom to top). The lowest line corresponds to the quantum depletion. Figure 2: Condensate (solid) and non-condensate density (dashed) at \\(0.5\\) and \\(0.9\\,T_{c}\\). #### iii.1.2 Ground state population In Figure 3 the condensate population is shown as a function of temperature. The results are compared to the case of the trapped ideal gas, where the population is determined by a power law expression. We fit the following functional form to the numerical data: \\[\\frac{N_{0}}{N}=1-\\left(\\frac{T}{\\bar{T}_{c}}\\right)^{\\beta},\\quad\\text{where}\\quad\\bar{T}_{c}=\\alpha T_{c}. \\tag{20}\\] In the case of the ideal gas, \\(\\beta=2\\) and \\(\\alpha=1\\). In the fit the critical temperature is reduced by about \\(5\\%\\) with \\(\\alpha\\approx 0.95\\). This shift has two contributions: The finite size of the system reduces the critical temperature [27], but it is also modified by the interactions. This second contribution has been extensively discussed in the literature, see the recent publication [28] and references therein. For the exponent we find \\(\\beta\\approx 1.70\\), which is \\(15\\%\\) smaller than for the ideal trapped gas. With these values, (20) parameterizes our data very well except near the critical temperature, where finite size effects are significant and the exact method of determining the shift of the chemical potential from the condensate eigenvalue becomes important. In the ideal gas the chemical potential is zero at the point of the phase transition, and therefore the transition point is strictly defined. In the interacting gas, the chemical potential depends implicitly on the non-condensate [19] and, when this becomes large, the transition point becomes smeared out. Technically, the occurrence of the finite temperature tail can be explained through the fugacity term in the quasiparticle distribution function (7), which is explicitly given by (8). The term \\(\\propto 1/N_{0}\\) prevents the condensate population from becoming negative. However, this expression for the fugacity is only approximate and the shape of the tail and the speed with which it approaches zero depends on the explicit choice of the fugacity term at temperatures around \\(\\bar{T}_{c}\\). ### Condensate excitations The low-lying collective or elementary excitation modes of the condensate, determined by the solution of the BdG equations, are of interest because they reflect certain fundamental symmetry properties of the system, as well as being easily accessible to experiments. Figure 4 shows the modes with angular momentum \\(m=0,1,2\\) as a function of temperature.
Each of the three branches corresponds to the lowest quasiparticle energy eigenvalue in the lowest three, separated, angular momentum subspaces, in which the BdG equations are solved, c. f. Section II.2. Breathing mode. The breathing mode corresponds to an oscillation of the condensate radius and lies at a frequency that is twice the trapping frequency. As shown in [29], this is due to a hidden symmetry of the many-body Hamiltonian with a \\(\\delta^{(2)}\\)-interaction potential and a harmonic trapping potential in two dimensions. As the temperature increases, the non-condensate density grows and starts to constitute a deviation from the harmonic oscillator potential in the effective potential of the Gross-Pitaevskii equation, so that the frequency shifts slightly from \\(2\\omega_{\\perp}\\). The effective potential is weakened by the presence of the static thermal atoms so that the frequency decreases. If the dynamics of the thermal cloud were included in the calculation [30], then the full symmetry of the Hamiltonian would be restored and the mode frequency would remain precisely at \\(2\\hbar\\omega_{\\perp}\\). Kohn mode. The Kohn mode corresponds to a center of mass oscillation of the whole condensate. Less affected by the perturbation to the harmonic potential of the static thermal cloud, it remains essentially constant at the trapping frequency, as is predicted by the generalized Kohn theorem [31]. However, looking closer at Figure 4, one may see a slight increase in the frequency near the critical temperature as the effective potential becomes less harmonic and, therefore, breaks the Kohn theorem. In our calculation we treat the thermal cloud as stationary. An inclusion of the full dynamics of the thermal cloud would, again, ensure the Kohn mode remains constant at all temperatures [19; 30; 32]. Figure 3: Condensate population versus temperature. The solid line corresponds to a fit to (20), the dashed line shows the ideal gas power law dependence. The points are results from the HFB-Popov calculation. Figure 4: Low-lying excitation modes of the condensate as a function of temperature. The uppermost line corresponds to the _breathing mode_ with angular momentum quantum number \\(m=0\\), the middle line to the _quadrupole mode_ with \\(m=2\\). The lowest line is the _Kohn mode_, \\(m=1\\), which lies constantly at the trapping frequency. Quadrupole mode. The quadrupole mode is the only low-lying mode which depends strongly upon the temperature. The frequency of this mode could thus, in principle, be used as a measure of the temperature of the 2D gas. With increasing temperature, all three frequencies smoothly approach the frequencies of the non-interacting gas and the breathing and quadrupole modes become degenerate. Using a local density approximation with the relation \\(\\mu=g[n_{c}(r)+2\\tilde{n}(r)]\\) for the uniform gas in the Hartree-Fock approximation [33, §8.3], it is easy to show that the BdG equations for \\(n_{c}(r)=0\\), given by \\[\\left(\\hat{h}(r)-\\mu+2g\\tilde{n}(r)-E_{i}\\right)u_{i}(r)=0\\, \\tag{21}\\] recover the energies of the harmonic oscillator. In the case that there is still a condensate, the total density in the region where \\(n_{c}(r)\\neq 0\\) is approximately constant just below the critical temperature, so that the mean-field energy only constitutes a near constant shift to the trapping potential and, hence, only slightly alters the eigenfrequencies of the trap.
We would briefly like to draw comparison with the three-dimensional case where the frequency spectrum looks very similar [19, 34]. The striking difference is the breathing mode which is temperature dependent in three dimensions, whereas it is a feature of the two-dimensional system to have breathing oscillations with a universal energy of \\(2\\hbar\\omega_{\\perp}\\). ### Coherence properties #### iii.3.1 Correlation function Interacting gas. In Figure 5 the correlation function \\(g^{(1)}(0,r)\\) is depicted at various temperatures. The \\(r\\)-axis has been scaled by the size of the condensate. This is not the Thomas-Fermi radius, but we choose a minimal allowed condensate density in such a way that the whole condensate at zero temperature is phase coherent. The decay of the correlation function allows for a characterization of the gaseous system. At low temperatures the correlation function has a constant value throughout the extent of the condensate, indicating a truly coherent Bose-Einstein condensed phase with off-diagonal long-range order. Algebraic decay is associated with the KT phase and, at intermediate temperatures, the superfluid must be identified as a quasicondensate. At very high temperatures, clearly visible for the highest temperature in Figure 5, the coherence function decays exponentially, showing that long-range order is lost completely. At very low temperatures the correlation functions show some unphysical oscillations that are purely numerical noise. At low temperatures \\(n(r)\\approx n_{c}(r)\\). In the limit \\(\\tilde{n}\\equiv 0\\) the correlation function (9) is given by the Heaviside function \\(\\Theta(1-r/r_{\\rm con})\\). However, there is a small contribution from the quantum depletion of the condensate that smoothes the sharp corner as \\(g^{(1)}\\) drops to zero, causing a loss of numerical accuracy as we divide two very similar small numbers in (12). The coherence length can be extracted by measuring the full width at half maximum (fwhm) of \\(g^{(1)}\\) and is shown in Figure 8. Non-interacting gas. The off-diagonal density matrix is known in closed analytical form for the non-interacting gas. It can be determined by means of the inverse Laplace transform of the zero-temperature Bloch density matrix [35, 36]. Its explicit form in two dimensions at temperature \\(T\\) is given by \\[g^{(1)}(\\mathbf{r},\\mathbf{r}^{\\prime},T)=\\sum_{j=1}^{\\infty}\\frac{e^{j\\mu/T}}{\\pi(1-e^{-2j/T})}\\times\\\\ \\exp\\left(-\\frac{|\\mathbf{r}+\\mathbf{r}^{\\prime}|^{2}}{4}\\tanh(j/2T)-\\frac{|\\mathbf{r}-\\mathbf{r}^{\\prime}|^{2}}{4}\\coth(j/2T)\\right)\\,. \\tag{22}\\] We find the chemical potential \\(\\mu=\\mu(T)\\) for the trapped non-interacting gas by solving \\(\\sum_{n=0}^{\\infty}f_{\\rm B}(E_{n}=n+1,\\mu,T)\\left(n+1\\right)-N=0\\) with respect to \\(\\mu\\). Here, \\(f_{\\rm B}=\\left[\\left(1+N_{0}^{-1}\\right)e^{\\beta(E_{n}-\\mu)}-1\\right]^{-1}\\) is the Bose-Einstein distribution function _with the fugacity factor_ (8) that takes into account the number of condensate particles from the HFB calculation. Without this fugacity factor, finite size effects, taken into account in the HFB calculation, would be neglected and, therefore, the comparison would be between two approaches based on different assumptions. The expression (22) for the correlation function is exact at all temperatures. Numerically, the infinite sum can be calculated up to any required accuracy. We compared our code against these exact results for the non-interacting gas.
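As an illustration of how such a truncated evaluation might look, the following sketch sums Eq. (22) directly (oscillator units; the values of \\(\\mu\\) and \\(T\\) are hypothetical, and the fugacity modification of \\(f_{\\rm B}\\) is omitted here):

```python
import numpy as np

def rho1_ideal(r, rp, T, mu, jmax=2000):
    """Off-diagonal density matrix of the ideal trapped 2D gas, Eq. (22),
    truncated at jmax terms (oscillator units; mu < 0 for convergence)."""
    j = np.arange(1, jmax + 1)[:, None]
    pref = np.exp(j * mu / T) / (np.pi * (1.0 - np.exp(-2.0 * j / T)))
    th = np.tanh(j / (2.0 * T))
    gauss = np.exp(-(r + rp) ** 2 / 4.0 * th - (r - rp) ** 2 / 4.0 / th)
    return np.sum(pref * gauss, axis=0)

# Normalized coherence function g1(0, r) for hypothetical T and mu
T, mu = 5.0, -0.05
r = np.linspace(0.0, 6.0, 200)
g1 = rho1_ideal(0.0, r, T, mu) / np.sqrt(rho1_ideal(0.0, 0.0, T, mu)
                                         * rho1_ideal(r, r, T, mu))
print(g1[:3], g1[-3:])
```

The truncation index and the value of \\(\\mu(T)\\) would in practice be chosen so that the number equation quoted above is satisfied to the desired accuracy.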
At all temperatures the HFB results agree perfectly with the correlation function calculated from (22), implying that the numerics works well even at high temperatures. Deviations would indicate an insufficiency in the basis set or inaccuracy due to an insufficient fineness or range of the computational grid. Figure 5: Correlation function \\(g^{(1)}\\) for the non-interacting Bose gas at different temperatures \\(T/T_{c}\\): 0, 0.025, 0.05, 0.1, 0.5, 0.9 and 1 from right to left. The influence of the interactions can be seen in Figure
Looking at the decay of the coherence length in Figure 8, we can distinguish three different regimes. Close to zero temperature, the slope is very steep and the coherence length decreases to a third of the condensate size by about \\(0.1\\,T_{c}\\). Then, up to about \\(0.8\\,Tc\\), \\(l_{\\phi}\\) decreases monotonically, but much more slowly. From there up to the critical temperature, the coherence length again drops rapidly. A decreasing coherence length directly implies a loss in the global phase coherence of the superfluid phase. A true Bose-Einstein condensate cannot be said to exist when the phase of the order parameter fluctuates on a length scale significantly smaller than the extent of the condensate. At this point we should instead refer to a quasicondensate. From Figure 5 we see that the coher Figure 6: Correlation function \\(g^{(1)}\\) for the non-interacting Bose gas at different temperatures \\(T/T_{c}\\): 0.025, 0.1, 0.5, 0.8 and 0.95 from right to left. The solid lines correspond to the HFB-Popov results for the interacting gas, the dotted lines represent the exact result for the non-interacting gas (22). Figure 7: Momentum spectrum, with intensity corresponding to the fraction of scattered atoms as function of the Bragg detuning \\(\\delta\\). Temperatures \\(T/T_{c}\\) from top: 0, 0.025, 0.05, 0.1, 0.5, 0.9 and 1. ence length drops smoothly. Therefore, it is difficult to determine an exact point on the temperature scale where the transition from a true condensate to a quasicondensate takes place. At about \\(0.5\\,T_{c}\\) the coherence length has dropped to about half of the maximal value. The maximal value we can use to determine the spatial extent of the condensate, indicated on the right axis in Figure 8. The treatment in [16] predicts a value of approximately \\(0.4\\,T_{c}\\) for our parameters for phase fluctuations to become dominant. Our result is, therefore, consistent with [16], although the coherent phase seems to persist at slightly higher temperatures. #### iii.3.3 Comparison to 1D Similar behaviour has been observed in calculations for the one-dimensional Bose gas at finite temperatures [37]. Looking at the coherence function presented by Ghosh, we see that in the one-dimensional case the coherence length drops even more rapidly than in the two-dimensional case, showing that phase fluctuations become much more dominant as the dimension is reduced further. In 1D, the temperature range between \\(0.3\\) and \\(0.5\\,T_{c}\\) has about the same coherence properties as the range around \\(0.9\\,T_{c}\\) in our 2D calculation. Ghosh identifies the 1D phase at temperatures as low as \\(0.1\\,T_{c}\\) as a quasicondensate with large phase fluctuations. From an examination of Figure 5, we see that, at this temperature in the 2D case, even if the coherence length has decreased slightly, there is still a large proportion of the condensate where \\(g^{(1)}\\) is constantly 1, indicating that the system is essentially a phase coherent BEC. In 1D the Lorentzian momentum profile has been found to be characteristic of the phase-fluctuating quasicondensate [38] and has been used as an the identifying signature of such a phase [22]. However, we are convinced that the shape change we observe is not a signature of a phase fluctuating condensate, but the effect of the fugacity term as \\(N_{0}\\) goes to zero. 
Furthermore, from looking at the correlation function in Figure 5, we would expect the phase fluctuations to become important, indicating the presence of a quasicondensate, at about \\(0.5\\,T_{c}\\), long before the momentum profile becomes Lorentzian in character. ## IV Conclusion We have used the HFB formalism to investigate the finite-temperature physics of a Bose-Einstein condensate confined to a two-dimensional geometry. Unlike the three dimensional case, phase fluctuations must be taken into consideration at comparatively lower temperatures. In a regime below the critical temperature they destroy the global coherence of the condensate and the superfluid state is best described as a quasicondensate. In the HFB formalism phase fluctuations are included via the contribution to the non-condensate density from low-energy quasiparticles. We have shown that the formalism is not only applicable in the strictly phase coherent regime, but also that the quantities obtained, such as the single-particle off-diagonal density matrix, allow for a quantitative analysis even in the phase fluctuating regime. Our work is consistent with [16], although we find that, within the HFB treatment, the pure condensate phase persists to higher temperatures. The coherence length of the condensate can be determined from its correlation function or the momentum profile. Following Aspect _et al._ for the one-dimensional case [22; 39], we have calculated the Bragg spectrum for a condensate in two dimensions. We found the values extracted for the coherence length to be in qualitative agreement with those calculated for the one-dimensional Bose gas, although a true BEC with global phase coherence still exists at much higher temperatures than in the 1D case. The Bragg spectrum provides a clear signature of the quasicondensate phase and we anticipate experimental efforts in this area in the near future. ###### Acknowledgements. The authors would like to acknowledge financial support from the Marsden and ISAT Linhages Funds of the Royal Society of New Zealand, as well as a University of Otago Research Grant. We thank Sam Morgan, Mark Lee and Brandon van Zyl for many useful conversations during various exchange visits and subsequently. Figure 8: Coherence length of the condensate, shown in both real units and scaled by the extension of the condensate. The upper line (\\(+\\)) has been calculated from the momentum spectrum (left panel), the lower line (\\(\\cdot\\)) is the data from Figure 5. ## References * (1) A. Gorlitz, J. M. Vogels, A. E. Leanhardt, C. Raman, T. L. Gustavson, J. R. Abo-Shaeer, A. P. Chikkatur, S. Gupta, S. Inouye, T. Rosenband, et al., Phys. Lett. **87**, 130402 (2001). * (2) D. Rychtarik, B. Engeser, H.-C. Nagerl, and R. Grimm, Phys. Rev. Lett. **92**, 173003 (2004). * (3) N. D. Mermin and H. Wagner, Phys. Rev. Lett. **17**, 1133 (1966). * (4) N. D. Mermin, Phys. Rev. **176**, 250 (1968). * (5) P. C. Hohenberg, Phys. Rev. **158**, 383 (1967). * (6) O. Penrose and L. Onsager, Phys. Rev. **104**, 576 (1956). * (7) C. N. Yang, Rev. Mod. Phys. **34**, 694 (1962). * (8) J. M. Kosterlitz and D. J. Thouless, J. Phys. C: Solid State Phys. **6**, 1181 (1973). * (9) J. M. Kosterlitz, J. Phys. C: Solid State Phys. **7**, 1046 (1974). * (10) V. L. Berezinski, Sov. Pys. JETP **32** (1971), [Zh. Eksp. Teor. Fiz. **59**, 907 (1970)]. * (11) V. N. Popov, _Functional Integrals in Quantum Field Theory and Statistical Physics_ (D. Reidel Publishing Company, Holland, 1983). * (12) C. Gies, B. P. van Zyl, S. A. 
Morgan, and D. A. W. Hutchinson, Phys. Rev. A **69**, 023616 (2004). * (13) K. Huang, _Statistical Mechanics_ (John Wiley & Sons, New York, 1987), 2nd ed. * (14) M. D. Lee, S. A. Morgan, M. J. Davis, and K. Burnett, Phys. Rev. A **65**, 043617 (2002). * (15) D. S. Petrov and G. V. Shlyapnikov, Phys. Rev. A **64**, 012706 (2001). * (16) D. S. Petrov, M. Holzmann, and G. V. Shlyapnikov, Phys. Rev. Lett. **84**, 2551 (2000). * (17) C. Gies, M. D. Lee, and D. A. W. Hutchinson, _to be published_. * (18) A. Griffin, Phys. Rev. B **53**, 9341 (1996). * (19) D. A. W. Hutchinson, K. Burnett, R. J. Dodd, S. A. Morgan, M. Rusch, E. Zaremba, N. P. Proukakis, M. Edwards, and C. W. Clark, J. Phys. B **33**, 3825 (2000). * (20) S. A. Morgan, J. Phys. B **33**, 3847 (2000). * (21) M. Naraschewski and R. J. Glauber, Phys. Rev. A **59**, 4595 (1999). * (22) S. Richard, F. Gerbier, J. H. Thywissen, M. Hugbart, P. Bouyer, and A. Aspect, Phys. Rev. Lett. **91**, 010405 (2003). * (23) P. B. Blakie, R. J. Ballagh, and C. W. Gardiner, Phys. Rev. A **65**, 033602 (2002). * (24) S. R. Wilkinson, C. F. Bharucha, K. W. Madison, Q. Niu, and M. G. Raizen, Phys. Rev. Lett. **76**, 4512 (1996). * (25) J. Reidl, A. Csordas, R. Graham, and P. Szepfalusy, Phys. Rev. A **59**, 3816 (1999). * (26) W. J. Mullin, J. Low Temp. Phys. **110**, 167 (1998). * (27) S. Giorgini, L. P. Pitaevskii, and S. Stringari, Phys. Rev. A **54**, R4633 (1996). * (28) B. Kastening, Phys. Rev. A **69**, 043613 (2004). * (29) L. P. Pitaevskii and A. Rosch, Phys. Rev. A **55**, R853 (1997). * (30) S. A. Morgan, M. Rusch, D. A. W. Hutchinson, and K. Burnett, Phys. Rev. Lett. **91**, 250403 (2003). * (31) J. F. Dobson, Phys. Rev. Lett. **73**, 2244 (1994). * (32) N. P. Proukakis, S. A. Morgan, S. Choi, and K. Burnett, Phys. Rev. A **58**, 2435 (1998). * (33) C. J. Pethick and H. Smith, _Bose-Einstein Condensation in Dilute Gases_ (Cambridge University Press, 2002). * (34) D. A. W. Hutchinson, R. J. Dodd, and K. Burnett, Phys. Rev. Lett. **81**, 2198 (1998). * (35) B. P. van Zyl, Phys. Rev. A **68**, 033601 (2003). * (36) B. P. van Zyl, R. K. Bhaduri, A. Suzuki, and M. Brack, Phys. Rev. A **67**, 023609 (2003). * (37) T. K. Ghosh, Preprint cond-mat/0402079 (2004). * (38) F. Gerbier, J. H. Thywissen, S. Richard, M. Hugbart, P. Bouyer, and A. Aspect, Phys. Rev. A **67**, 051602(R) (2003). * (39) A. Aspect, S. Richard, F. Gerbier, M. Hugbart, J. Retter, J. Thywissen, and P. Bouyer, Proceedings of the International Conference on Laser Spectroscopy (ICOLS 03), Cairns, Australia (2003).
We present a detailed finite-temperature Hartree-Fock-Bogoliubov (HFB) treatment of the two-dimensional trapped Bose gas. We highlight the numerical methods required to obtain solutions to the HFB equations within the Popov approximation, the derivation of which we outline. This method has previously been applied successfully to the three-dimensional case and we focus on the unique features of the system which are due to its reduced dimensionality. These can be found in the spectrum of low-lying excitations and in the coherence properties. We calculate the Bragg response and the coherence length within the condensate in analogy with experiments performed in the quasi-one-dimensional regime [Richard _et al._, Phys. Rev. Lett. **91**, 010405 (2003)] and compare to results calculated for the one-dimensional case. We then make predictions for the experimental observation of the quasicondensate phase via Bragg spectroscopy in the quasi-two-dimensional regime. pacs: 03.75.Hh, 05.30.Jp, 67.40.Db
**[To be published in Geophysical Research Letters (GL020212) in early July]** **Disparity of Tropospheric and Surface Temperature Trends: New Evidence** David H. Douglass\\({}^{1}\\)*, Benjamin D. Pearson\\({}^{1}\\), S. Fred Singer\\({}^{2}\\), Paul C. Knappenberger\\({}^{3}\\), and Patrick J. Michaels\\({}^{4}\\) 1. Dept of Physics and Astronomy, University of Rochester, Rochester, NY 14627 2. Science & Environmental Policy Project and University of Virginia, Charlottesville, VA 22903 3. New Hope Environmental Services, Charlottesville, VA 22902 4. Dept of Environmental Sciences, University of Virginia, Charlottesville, VA 22903 *corresponding author: [email protected] Index: Air/sea interactions 4504, ocean/atmos 3309, atmosphere 1610, general 1699 ## 1 Introduction The question of the degree to which Earth's surface temperature is increasing is a climate problem of great interest. The pattern and magnitude of current and/or future warming has both ecological and economic implications. However, the science is not settled on these issues, as many outstanding questions remain. For example, General Circulation Models (GCMs) predict that as a result of enhanced greenhouse gases and atmospheric aerosols, there should be a warming trend that is greater in the low-to-middle troposphere than over the earth's surface [_Chase et al._, 2004]. However, temperature observations taken during the past 25 years do not verify this GCM result [_Douglass et al._, 2004]. The globally averaged surface temperature (ST) trend over the last 25 years is 0.171 K/decade [_Jones et al._, 2001], while the trend in the lower troposphere from observations made by satellites and radiosondes is significantly less, with exact values depending on both the choice of dataset and analysis methodology [e.g., _Christy et al._, 2003, _Lanzante et al._, 2003]. This disparity was of sufficient concern for the National Research Council (NRC) to convene a panel of experts that studied the \"[a]pparently conflicting surface and upper air temperature trends\" and concluded, after considering various possible systematic errors, that \"[a] substantial disparity remains\" [National Research Council, 2000]. The implication of this conclusion is that the temperature of the surface and the temperature of the air above the surface are changing at different rates due to some unknown mechanism. A number of studies have suggested explanations for the disparity. _Lindzen and Giannitsis_ [2002] have ascribed the disparity to a time delay in the warming of the oceans following the rapid temperature increase in the late 1970s. _Hegerl and Wallace_ [2002] have concluded that the disparity is not due to El Niño or cold-ocean-warm-land effects. Other authors [_Santer et al._, 2000] have suggested that the disparity is not real but due to the disturbing effects of El Niño and volcanic eruptions, a conclusion that has been critiqued by _Michaels and Knappenberger_ [2000]. Still others argue that the disparity results from the methodology used to prepare the satellite data [_Fu et al._, 2004, _Vinnikov and Grody_, 2003]; however, only the results from Christy et al. [2000] have been independently confirmed by weather-balloon data [_Christy et al._, 2000, _Christy et al._, 2003, _Lanzante et al._, 2003, _Christy and Norris_, 2004]. In this paper, we explore the geographic patterns of the difference between the trends in surface and lower-tropospheric temperatures.
We rely not only on surface and satellite temperature measurements for this comparison, but additionally, we employ a set of data, not previously considered, which represents an attempt to reduce tropospheric observations to surface temperature values. Through this methodology, we hope to shed more light on the nature of this disparity. **2. Data** We incorporate three temperature datasets into this analysis: observations taken at the earth's surface [_Jones et al._, 2001], observations of the lower atmosphere made from satellites [_Christy et al._, 2000], and calculated near-surface temperatures (R2-2m), a diagnostic variable derived from atmospheric temperature observations tied to weather balloons [_Kanamitsu et al._, 2002]. Each of these datasets contributes unique information to the understanding of the evolution of patterns of temperature at and near the earth's surface. The \"surface\" temperature (ST) observations commonly utilized in research and the media are a combination of near-surface air temperatures for land coordinates and below-surface water temperatures for ocean coordinates. The data are monthly anomalies from the 1961-1990 mean temperature within 5\\({}^{\\rm o}\\) by 5\\({}^{\\rm o}\\) grid cells. The amount of available data varies with time and grid cell such that some locations have either no data or spotty data coverage, resulting in a total coverage that is less than global in extent. The satellite data are observations taken by the microwave sounding units (MSU) [_Christy et al._, 2000]. In this study, we use the MSU data that is most representative of the lower troposphere. These data are monthly anomalies from the 1979-1998 mean values for 2.5\\({}^{\\rm o}\\) by 2.5\\({}^{\\rm o}\\) grid cells with complete global data coverage. Our third dataset is the \"2-meter\" temperature product (R2-2m) from an update of the original National Centers for Environmental Prediction--National Center for Atmospheric Research (NCAR) reanalysis [_Kanamitsu et al._, 2002; _Kalnay et al._, 1996]. The R2-2m temperature data are modelled primarily from a collection of atmospheric measurements from weather balloons and satellites. There is little influence from surface thermometers [_Kistler et al._, 2001; _Kalnay et al._, 2003], although other surface processes, such as snow cover, can contribute. The time-evolution of the R2-2m temperature variable is independent of the MSU-derived lower tropospheric temperatures. The globally complete R2 data begins in 1979 and continues through the near present. However, a change in the snow cover measuring system in late 1998 has resulted in break points in the 2-m temperature series in grid cells with seasonal snow cover (W. Ebisuzaki, personal communication, 2004). **3. Methods** Since we wish to examine the disparity in the temperature trends among these three datasets, we limit our analysis to a common observational time series. The starting point in our analysis will be 1979, which is the beginning year in both the R2-2m and MSU data. We truncate the analysis at December 1996, which avoids the snow cover issue in R2-2m. This also avoids the anomalously large 1997 El Niño event in the tropical Pacific which _Douglass and Clader_ [2002] showed can severely affect the trend-line. We will show later in this paper that it is likely that our conclusions would change little had we been able to use data through 2003.
For the period 1979 through 1996, we perform a simple least-squares regression analysis through the monthly temperature anomalies for each grid cell in the R2-2m and MSU datasets (which contain no missing data). For the ST data, however, we must be concerned with missing data. We therefore first aggregate the monthly data into annual temperature anomalies, requiring at least 9 months of valid data to produce a valid year, and then perform our trend analysis on those grid cells with at least 16 (out of 18) valid years. We then compare the trends across the three datasets grid cell by grid cell, in latitudinally averaged bands, and in global aggregate. In our comparisons involving aggregated grid cells, we first mask out the trends in the grid cells of the globally complete R2 and MSU data in which there are no valid ST data, so that all our comparisons are made to a common geographical area.

## 4 Results

### 4a. Maps of trend-lines for MSU, ST, and R2-2m

For each cell on the surface of the Earth we show the trend-line for the period 1979-1996 for the MSU, ST, and R2-2m data (see Figures 1A, 1B, and 1C). One of the most striking observations is that the values are geographically highly non-uniform, due in part to the relatively short period examined and the magnitude of natural variations therein. The greatest positive trends in the Northern Hemisphere (NH) reach values higher than the greatest positive trends in the Southern Hemisphere (SH), and the greatest positive trends are in the mid-latitude bands. In the NH the highest trend-line values are localized in three areas: Region 1, Netherlands/Germany; Region 2, Manchuria/western Pacific; and Region 3, Pacific Ocean/western Alaska. There are no regions of large positive trends in the equatorial band (nor at the poles for the MSU data). The polar views dramatically show symmetry about the poles. The MSU plots show an unmistakable 3-fold symmetry in the north-polar view and a 4-fold symmetry in the south-polar view. While less clear, the same symmetry exists also in the R2-2m and in the ST maps. Averages computed from these plots are listed in Table 1.

### 4b. Latitude dependence of the zonal average

We compute the zonal averages of the MSU, ST and R2-2m trends for each 5\({}^{\circ}\) latitude band and present our results in Figure 2. They all have maximum values close to each other in the band from 40\({}^{\circ}\)N to 50\({}^{\circ}\)N. However, in the tropics, the MSU and R2-2m trends agree and are both negative, whereas the ST trend values are positive. This difference of \(\sim\) 0.2 K/decade at tropical latitudes has been noted before (e.g. _Gaffen et al._, 2000; _Singer_, 2001; _Christy et al._, 2001). It is clear from this graph and the maps that the maximum near 45\({}^{\circ}\)N is real and that values are decreasing as one goes towards the pole--in contrast to what some climate models predict. There is also a relative maximum in the SH located at approximately 25\({}^{\circ}\)S to 30\({}^{\circ}\)S. In addition, we made similar plots of the data with either the land or the oceans masked out. We found that the maximum at about 45\({}^{\circ}\)N was 50% higher for the ocean-only data. This is consistent with the map showing high positive trends over the Pacific extending from Region 2 to Region 3.

### 4c. Northern mid-latitudes (35\({}^{\circ}\)N to 60\({}^{\circ}\)N)

Fig. 3A shows latitude band averages (from 35\({}^{\circ}\)N to 60\({}^{\circ}\)N) of the trend values vs. longitude.
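Latitude-band averages of this kind reduce to masked, area-weighted means of the gridded trend maps; the sketch below forms such an average as a function of longitude. The cosine-latitude weighting and the form of the common-coverage mask are my assumptions about implementation details the text does not spell out.

```python
import numpy as np

def band_average_vs_longitude(trend, lat, lat_min, lat_max, valid=None):
    """Average a (nlat, nlon) trend map over a latitude band, per longitude.

    `lat` holds grid-cell centre latitudes in degrees; `valid` is an optional
    boolean mask (e.g. cells with sufficient ST coverage) applied to every
    dataset so that comparisons refer to a common geographical area.
    """
    trend = np.asarray(trend, dtype=float)
    in_band = (lat >= lat_min) & (lat <= lat_max)
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(trend)   # area weights
    mask = in_band[:, None] & np.isfinite(trend)
    if valid is not None:
        mask &= valid
    w = np.where(mask, w, 0.0)
    num = (np.where(mask, trend, 0.0) * w).sum(axis=0)
    den = w.sum(axis=0)
    return np.where(den > 0, num / den, np.nan)

# Example on a toy 2.5-degree grid (72 latitudes x 144 longitudes).
lat = np.arange(-88.75, 90, 2.5)
toy = np.zeros((lat.size, 144))
print(band_average_vs_longitude(toy, lat, 35.0, 60.0).shape)     # (144,)
```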
One sees that the variations in amplitude of all three data sets are of about the same magnitude and phase. We note that the three 'warming' regions defined in Section 4a are readily apparent and that Regions 2 and 3 are connected across the Pacific Ocean. From the general agreement in amplitude and phase of these three data sets we infer that the methodologies of all are essentially correct and free from harmful errors.

### 4d. Tropics (20\({}^{\circ}\)S to 20\({}^{\circ}\)N)

Fig. 3B shows a plot of trend lines vs. longitude, centered at the equator. It is noted that MSU and R2-2m have nearly the same negative means (-0.06 K/decade) while ST is positive (0.09 K/decade) (see Table 1). The difference in mean between ST and MSU/R2-2m is 0.15 K/decade and is the disparity noted by the NRC [National Research Council, 2000] and others.

### 4e. Southern mid-latitude band (40\({}^{\circ}\)S to 20\({}^{\circ}\)S)

The plots of the three data sets for the SH mid-latitude band are shown in Figure 3C. Here the 4-fold symmetry in all three data sets is very noticeable. The averages are: ST: 0.04 K/decade; MSU: 0.02 K/decade; R2-2m: -0.12 K/decade (see Table 1). These differences are smaller than the amplitudes of the 4-fold oscillation, so statements about the differences are difficult.

## 5 Results and Discussion

We have studied the temperature trend-lines given by the MSU, R2-2m and ST data for the period 1979 to 1996. There is general agreement among the three (mostly) independent data sets for northern mid-latitudes. This agreement also indicates that the differences we observe in the tropics are real--thus also validating and extending previous results for the tropics that the magnitude of the trend over the oceans is lower than the ST trend (_Gaffen et al._, 2000; _Christy et al._, 2001). To assess sensitivity to the length of record, we repeated our analysis of the latitude band average for regions only over the open oceans (regions free of seasonal snow cover issues) for 1979-2002. We found some small changes in the absolute trend values, but the pattern of relative trend differences remained similar, supporting the robustness of our findings and indicating that our results are not adversely affected by truncating the datasets at 1996. Our results point to near-surface processes in the tropical regions as a leading cause of the observed disparity between surface and lower-tropospheric temperature trends. As most of the tropical region is dominated by ocean areas, it is possible that ocean/atmosphere interactions are a primary driver of the observed trend differences and that sea surface temperatures are not reliable indicators of the overlying near-surface air temperatures. It is interesting to note that the agreement among the three datasets is greatest over the more industrialized northern extratropics, indicating that local processes such as urbanization [_Kalnay and Cai_, 2003] and industrialization [_de Laat and Maurellis_, 2004; _Michaels et al._, 2004] play only a relatively minor role in causing differential vertical temperature trends. This result does not suggest that these processes do not contribute to the observed warming trend, just that they do not contribute greatly to the temperature trend disparity.

## Acknowledgements

This research was supported in part by the Rochester Area Community Foundation. We thank R. S. Knox for valuable discussions. Additional thanks to V. Patel and Yi-Lun Ding for assisting in some of the computations.

## References:

* Chase et al. (2004) Chase, T. N., R.A. Pielke Sr, et al.
(2004), Likelihood of rapidly increasing surface temperatures unaccompanied by strong warming in the free troposphere, _Cli. Res._, _25_, 185-190. * Christy et al. (2000) Christy J. R., R. W. Spencer, et al (2000), MSU Tropospheric temperatures: Data set construction and radiosonde comparisons, _J. Atmos. Oceanic Tech._, _17_, 1153-1170. * Christy et al. (2001) Christy J. R. et al. (2001), Differential trends in tropical sea surface atmospheric temperatures since 1979, _Geophys. Res. Lett._, _28_, 183-186Christy, J. R., R. W. Spencer, et al. (2003), Error estimates of Version 5.0 of MSU-AMSU bulk atmospheric temperatures, _J. Atmos. Oceanic Tech., 20,_ 613-629. * [] Christy, J. R., and W. B. Norris, (2004), What may we conclude about global tropospheric temperature trends? _Geophysical Research Letters, 31_, L06211, doi:10.1029/2003GL019361. * [] de Laat, A. T. J., and A. N. Maurellis (2004), Industrial CO\\({}_{2}\\) emissions as a proxy for anthropogenic influence on lower tropospheric temperature trends, _Geophys. Res. Lett., 31,_ L05204, doi:10.1029/2003GL019024. * [] Douglass D. H., and B. D. Clader (2002), Climate sensitivity of the earth to solar irradiance, _Geophys. Res. Lett. 29_, doi:10.1029/2002GL015345. * [] Douglass D. H., B. D. Pearson and S. F. Singer (2004) Altitude Dependence of Atmospheric Temperature Trends:Climate Models vs Observation. Accepted _Geophys. Res. Lett_ * [] Fu, Q., et al., (2004), Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends, _Nature, 429_, 55-58. * [] Gaffen D. J., et al. (2000), Multidecadal change in the vertical temperature structure of the tropical troposphere, _Science, 287,_ 1242-1245. * [] Hegerl G. C, and J. M. Wallace (2002), Influence of patterns of climate variability on the difference between satellite and surface trends, _J. Clim., 15,_ 2412-2428. * [] Jones, P.D., et al. (2001), Adjusting for sampling density in grid box land and ocean surface temperature time series, _J. Geophys. Res., 106_, 3371-3380. Kanamitsu, M, W. Ebisuzaki, et al. (2004), NCEP-DOE AMIP-II Reanalysis, _Bull. Amer. Meteorol. Soc., 83_, 1631-1643. * [1996] Kalnay E., et al. (1996), The NCEP/NCAR 40-year reanalysis project, _Bull. Amer. Meteorol. Soc., 77,_ 437-471. * [2003] Kalnay, E., and M. Cai (2003), Impact of urbanization and land use change on climate, _Nature, 423_, 528-531. * [2001] Kistler R., et al. (2001), The NCEP_NCAR 50-year Reanalysis: monthly means CD-ROM and documentation, _Bull. Amer. Meteorol. Soc., 82_, 247-267. * [2003] Lanzante, J.R.,et al (2003), Temporal homogenization of monthly radiosonde temperature data. Part II: trends, sensitivities, and MSU comparison, _J. Clim., 16,_ 241-262. * [2002] Lindzen R. S., and C. Giannitsis (2002), Reconciling observations of global temperature change, _Geophys. Res. Lett., 29_, doi:10.1029/2001GL014074. * [2004] Michaels, P.J., R. McKitrick, and P.C. Knappenberger (2004), Economic signals in global temperature histories, paper presented at the 14\\({}^{\\rm th}\\) Conference on Applied Climatology, Seattle, Washington. * [2000] Michaels, P. J., and P. C. Knappenberger (2000), Natural Signals in the Lower Tropospheric Temperature Record, _Geophys. Res. Lett., 27_, 2905-2908. National Research Council (2000), _Reconciling observations of global temperature change_, National Academy Press, Washington DC. * [2001] Singer S. F. 2001 _Disparity of temperature trends (1979-99) of atmosphere and surface_. 12th Symposium on Global Climate Variations. 14- 19 Jan 2001. 
Albuquerque.
* Santer et al. (2000) Santer, B. D., et al. (2000), Interpreting differential temperature trends at the surface and lower troposphere, _Science, 287,_ 1227-1232.
* Vinnikov and Grody (2003) Vinnikov, K. Y., and N. C. Grody (2003), Global warming trend of mean tropospheric temperature observed by satellites, _Science_, _302_, 269-272.

## Table Caption

Table 1. Temperature trends from the ST, MSU, and R2-2m data sets in three latitude bands (1979-1996).

| Trend line (\({}^{\circ}\)C/decade) | ST | MSU | R2-2m |
| --- | --- | --- | --- |
| North (35\({}^{\circ}\)N-60\({}^{\circ}\)N) | 0.224 | 0.244 | 0.228 |
| Tropics (20\({}^{\circ}\)S-20\({}^{\circ}\)N) | 0.092 | -0.057 | -0.054 |
| South (20\({}^{\circ}\)S-40\({}^{\circ}\)S) | 0.043 | 0.020 | -0.121 |
| Global (common area) | 0.106 | 0.027 | 0.014 |
| Global (all available data) | 0.106 | -0.005 | 0.015 |

## Figure Captions

1. Trend-line maps of ST, MSU, and R2-2m, 1979-1996. North Pole, Full World, and South Pole projections. For ST, cells with missing data are made dark blue; polar regions for which there are no data are then covered with a colorless circle.
2. Latitude plot. MSU, R2-2m, and ST zonal averages of trend-lines plotted vs. latitude.
**Abstract.** Observations suggest that the earth's surface has been warming relative to the troposphere for the last 25 years; this is not only difficult to explain but also contrary to the results of climate models. We provide new evidence that the disparity is real. Introducing an additional data set, R2 2-meter temperatures, a diagnostic variable related to tropospheric temperature profiles, we find trends derived from it to be in close agreement with satellite measurements of tropospheric temperature. This suggests that the disparity likely is a result of near-surface processes. We find that the disparity does not occur uniformly across the globe, but is primarily confined to tropical regions, which are largely oceanic. Since the ocean measurements are sea surface temperatures, we suggest that the disparity is probably associated with processes at the ocean-atmosphere interface. Our study thus makes unlikely some of the explanations advanced to account for the disparity; it also demonstrates the importance of distinguishing between land, sea and air measurements.
# Detection of coherent reflections with GPS bipath interferometry Achim Helm Georg Beyerle and Markus Nitschke GeoForschungsZentrum Potsdam Dept. Geodesy & Remote Sensing Potsdam Germany Corresponding author address: Achim Helm GeoForschungsZentrum Potsdam Dept. Geodesy & Remote Sensing Telegrafenberg D-14473 Potsdam Germany. Tel.: +49-331-288-1812; fax: +49-331-288-1111. E-mail: [email protected] ## 1 Introduction Satellite-based active altimeters on ENVISAT and JASON deliver valuable ocean height data sets for global climate modelling. In order to improve the climate models, altimetric data of higher resolution in space and time is required. This gap can potentially be filled with GPS-based altimetric measurements. Additionally, ground-based GPS receivers can monitor ocean heights in coastal areas where satellite altimetry data get coarse and decrease in quality (Fu and Cazenave, 2001; Shum et al., 1997). Since GPS altimetry has been proposed as a novel remote sensing capability (Martin-Neira, 1993), many studies have been carried out at different observation heights and platforms. While Earth-reflected GPS signals have been observed from spaceborne instruments (Lowe et al., 2002a; Beyerle et al., 2002) and the CHAMP and SAC-C satellites already are equipped with Earth/nadir looking GPS antennas, work is in progress in order to establish satellite-based GPS altimetry (Hajj and Zuffada, 2003). Airborne campaigns have been conducted (e.g. Garrison et al. (1998), Garrison and Katzberg (2000), Rius et al. (2002)) and recently reached a 5-cm height precision (Lowe et al., 2002b). Ground-based GPS altimetry measurements have been performed at laboratory scale of some meters height with 1-cm height precision (Martin-Neira et al., 2002) up to low-altitudes height (e.g. Anderson (2000), Martin-Neira et al. (2001)) and reached a 2-cm height precision (Treuhaft et al., 2001). In this study a 12 channel GPS receiver is used (Kelley et al., 2002), that has been extended with a coarse/acquisition (C/A) code correlation function tracking mode. In this coherent delay mapping (CDM) mode the direct GPS signal is tracked while concurrently the reflected signal is registered in open-loop mode. Using the L1 carrier phase the relative altimetric height is determined from the components of the reflected signal. ## 2 Experimental Setup and Data Acquisition The experiment was conducted on 8 - 10 July 2003, 50 km south of Munich, Germany, in the Bavarian alpine upland at the mountain top of Fahrenberg (47.61\\({}^{\\circ}\\)N, 11.32\\({}^{\\circ}\\)E) at a height of about 1625 m asl. Mount Fahrenberg belongs to the Karvendel mountains and from the mountain top unobstructed view is available to lake Kochelsee (surface area about 6 km\\({}^{2}\\)) to the north and lake Walchensee (surface area about 16 km\\({}^{2}\\)) to the south. Following a schedule of predicted GPS reflection events, the receiver antenna was turned towards the lake surface of Kochelsee (about 599 m asl) or Walchensee (about 801 m asl). The antenna was tilted about 45\\({}^{\\circ}\\) towards the horizon. During a GPS reflection event the direct and the reflected signals interfere at the antenna center (e.g. Parkinson and Spilker (1996)). The interference causes amplitude fluctuations that are quantitatively analyzed to determine the height variation of the specular reflection point. The receiver is based on the OpenSource GPS design (Kelley et al., 2002) and was modified to allow for open-loop tracking of reflected signals. 
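As a rough consistency check on the interference picture just described, the expected fringe rate can be estimated from the rate of change of the direct/reflected path difference, using the flat-surface approximation \(\delta\approx 2H\sin\epsilon\) that underlies the height retrieval presented later. The sketch below is an order-of-magnitude illustration only; the receiver height and elevation rate plugged in are representative values taken from the observations reported below, not additional measurements.

```python
import numpy as np

def fringe_rate_hz(height_m, elev_deg, elev_rate_deg_s, wavelength_m=0.1903):
    """Approximate interference fringe rate for a receiver at height H above a
    plane reflecting surface, where the path difference is delta ~ 2 H sin(e)."""
    e = np.deg2rad(elev_deg)
    de_dt = np.deg2rad(elev_rate_deg_s)
    return 2.0 * height_m * np.cos(e) * de_dt / wavelength_m

# Roughly 1022 m above Kochelsee, elevation near 11 deg, assumed to fall by
# about 0.05 deg over roughly 20 s: a fringe rate of a few tenths of a Hz,
# comparable to the ~0.5 Hz phasor rotation reported below.
print(round(fringe_rate_hz(1022.5, 11.0, 0.05 / 20.0), 2))
```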
The receiving antenna is an active RHCP patch antenna (AT575-70C from AeroAntenna Technology Inc.) with 4 dBic gain, 54 mm in diameter and a hemispheric field-of-view. Operating in CDM mode all 12 correlator channels are tuned to the same GPS satellite by setting the appropriate pseudo-random noise (PRN) value. The correlation between the received and model (replica) signal is realized in hardware (Zarlink GP2021, ZARLINK (2001)). While one channel (the master channel) continues to track the direct signal, the carrier and code tracking loops of the 11 remaining channels (slave channels) are synchronized to the master channel. Each channel consists of the prompt and the early tracking arm at zero and at -0.5 chip code delay, respectively. Thus, \\(2\\times 11=22\\) delays are available to map the C/A code correlation function of the reflection signature. In CDM mode the slave carrier and code phase-locked loops (PLLs) are opened and their loop feed-back is obtained from the master PLL. All carrier loops operate with zero carrier phase offset with respect to the master channel; in the code loop feed-back, however, delays covering an interval of 2 chips (about 2 \\(\\mu\\)s) with a step size of 0.1 chips (about 100 ns) are inserted. In-phase and quad-phase correlation-sums of each channel are summed over 20 ms taking into account the navigation bit boundaries and stored together with code and carrier phases to hard disk at a rate of 50 Hz. Figure 2 illustrates the CDM mode: while the direct GPS signal is tracked with the prompt and early arm of the master channel at 0 and -0.5 chips code offset, the prompt and early arms of the remaining 11 slave channels are set to chip code offsets between 0.4 and 2.7 to map the reflected signal (corresponding to an optical path difference of 120 to 810 m). In Figure 2 the root-sum-squared in-phase and quad-phase values of the reflected signal are plotted as a function of code delay. The maximum power of the reflected signal is about \\(20\\log 0.2=-14\\) dB below the direct signal's power. The peak of the correlation function is separated by a delay of 1.5 chips from the direct signal's correlation peak. Data analysis is performed in the following way: first, the code delay corresponding to the maximum of the reflected waveform is determined. Second, all in-phase and quad-phase correlation sum values \\(I_{r}\\) and \\(Q_{r}\\) are extracted from the raw data which lie within a certain delay interval (grey box in Figure 2) around the maximum code delay. The navigation message is demodulated according to \\[\\tilde{I}_{r} = \\mbox{sign}(I_{d})\\,I_{r}\\] \\[\\tilde{Q}_{r} = \\mbox{sign}(I_{d})\\,Q_{r}, \\tag{1}\\] where \\(I_{d}\\) denotes the in-phase value of the master channel. Figure 3 A shows the oscillations of \\(\\tilde{I}_{r}\\) and \\(\\tilde{Q}_{r}\\) caused by the interference between the reflected and the replica GPS signal. The phasor \\(\\tilde{I}_{r}+i\\,\\tilde{Q}_{r}\\) rotates by about \\(+\\,0.5\\) Hz due to the decreasing path length difference between the direct and the reflected signal, since during this measurement the GPS satellite moved towards the horizontal. Note the phase offset of \\(90^{\\circ}\\) between \\(\\tilde{I}_{r}\\) and \\(\\tilde{Q}_{r}\\). The phase \\(\\phi\\) (Fig. 
3 B) is calculated from the four quadrant arctangent \\[\\phi=\\mbox{atan2}(\\tilde{Q}_{r},\\tilde{I}_{r}) \\tag{2}\\] and is unwrapped by adding \\(\\pm\\,2\\pi\\) when the difference between consecutive values exceeds \\(\\pi\\), resulting in the accumulated phase \\(\\phi_{a}\\). The optical path length difference \\(\\delta\\) between direct and reflected signal is calculated from the accumulated phase \\(\\phi_{a}\\) and the L1 carrier wavelength \\(\\lambda_{L1}=0.1903\\) m at the observation time \\(t\\) \\[\\delta(t)=\\frac{\\phi_{a}(t)}{2\\pi}\\,\\lambda_{L1}. \\tag{3}\\] Starting with a height estimate \\(H(t_{0})\\), the temporal evolution of the altimetric height variation \\(h(t)-h(t_{0})\\), normal to the tangent plane at the reflection point P, is calculated from (Treuhaft et al., 2001) \\[h(t) = \\frac{\\delta(t)-\\delta(t_{0})+2\\,h(t_{0})\\,\\sin\\alpha(t_{0})}{2 \\,\\sin\\alpha(t)} \\tag{4}\\] \\[h(t_{0}) = (H(t_{0})+r_{E})\\cos\\frac{s}{r_{E}}-r_{E} \\tag{5}\\] with the arclength \\(s\\) defined in Figure 1, an Earth radius \\(r_{E}=6371\\) km and \\[\\alpha = \\epsilon+\\frac{\\pi}{2}-\\gamma \\tag{6}\\] \\[\\epsilon = \\epsilon_{eph}+\\Delta\\epsilon_{tropo}, \\tag{7}\\]assuming an infinite distance to the GPS transmitter. \\(\\epsilon_{eph}\\) is calculated from the broadcast ephemeris data (GPS SPS, 1995), the correction \\(\\Delta\\epsilon_{tropo}\\) accounts for refraction caused by atmospheric refractivity. The tropospheric correction is derived from a geometric raytracing calculation using a refractivity profile obtained from meteorological analyse provided by the European Centre for Medium-Range Weather Forecasts. The position of the specular reflection point P as function of \\(\\gamma\\) (Figure 1) is calculated following Martin-Neira (1993). Thus, the altimetric height change of the GPS receiver above the reflecting surface is determined from the carrier phase difference between the direct and reflected GPS signal (Figure 3 C). ## 3 Data Analysis and Discussion During all 3 days several reflection events were observed from both lake surfaces with different GPS satellites at elevation angles between about 10\\({}^{\\circ}\\) - 15\\({}^{\\circ}\\), indicated by a clearly visible waveform (see Figure 2). Several outliers can be observed in the data records. Most likely they are caused by overheating of the hardware correlator chip [S. Esterhuizen, University of Colorado, personal communication, 2003]. In this study outliers are removed in the following way: a value is calculated by linear extrapolation from the last 3 values of \\(\\tilde{I}_{r}(t)\\). If the difference between extrapolated and actual value exceeds a threshold (20000-22000), the extrapolated value is taken. The same is applied to the \\(\\tilde{Q}_{r}(t)\\) data. Additionally cycle slips (sporadic height jumps of about \\(\\lambda_{L1}\\) m in adjacent data points) can be observed in the optical path length difference \\(\\delta(t)\\). The distortion of the data by cycle slips could be minimized by applying the same method as above to \\(\\delta(t)\\). Continuous data segments without cycle slips are chosen for height change determination. The mean receiver height above the lake surface is not expected to change during the short analyzed time periods. From topographic maps (scale 1:25000, Bayerisches Landesvermessungsamt, 1987) the heights \\(H(t_{0})\\) are estimated to be 1026 m \\(\\pm\\,5\\) m (Kochelsee) and 824 m \\(\\pm\\,5\\) m (Walchensee), respectively. 
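The retrieval chain from the demodulated correlation sums to the relative height (Eqs. 1-5) is compact enough to sketch directly. The snippet below is an illustrative reimplementation, not the authors' code: it assumes the tropospheric correction has already been applied to the elevation and approximates \(\alpha\approx\epsilon\) (i.e. \(\gamma\approx\pi/2\)), which is adequate for a synthetic self-consistency check.

```python
import numpy as np

L1_WAVELENGTH = 0.1903  # m

def relative_height(i_refl, q_refl, i_direct, elev_rad, h0):
    """Relative height h(t) - h(t0) from the reflected correlation sums.

    Demodulate with the sign of the direct in-phase sum (Eq. 1), unwrap the
    phase (Eq. 2), convert it to an optical path difference (Eq. 3), and
    solve Eq. (4) with alpha taken equal to the elevation angle.
    """
    s = np.sign(i_direct)
    phi = np.unwrap(np.arctan2(s * q_refl, s * i_refl))        # accumulated phase
    delta = phi / (2.0 * np.pi) * L1_WAVELENGTH                # path difference, m
    h = (delta - delta[0] + 2.0 * h0 * np.sin(elev_rad[0])) / (2.0 * np.sin(elev_rad))
    return h - h[0]

# Synthetic check: a constant height must give (numerically) zero variation.
t = np.arange(0.0, 30.0, 0.02)                                 # 50 Hz samples
elev = np.deg2rad(np.linspace(11.04, 10.99, t.size))
h0 = 1022.5
phase = 2.0 * np.pi * (2.0 * h0 * np.sin(elev)) / L1_WAVELENGTH
i_r, q_r = np.cos(phase), np.sin(phase)
print(np.allclose(relative_height(i_r, q_r, np.ones_like(i_r), elev, h0), 0.0, atol=1e-6))
```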
By minimization of the linear trend of \\(h(t)-h(t_{0})\\) we obtain a \\(H(t_{0})\\) of 1022.5 m (Kochelsee) and 827.5 m (Walchensee). Figure 4 A and B plot the relative height change between the receiver and the reflection point at the surface of lake Kochelsee. Both observations used the same PRN, but were taken on different days. The height varies within an interval of about \\(\\pm\\,5\\) cm with a standard deviation of about 3.1 and 2.6 cm. Figure 4 C and D show the height changes between the receiver and the reflection point at the surface of lake Walchensee. Again both observations were taken on different days and used different PRNs. Compared to the Kochelsee data, the height varies within a height interval of about \\(\\pm\\,2.5\\) cm with a standard deviation of about 1.4 and 1.7 cm. The different height variations at both lakes can be explained by different local wind and wave height conditions. As lake Walchensee is completely surrounded by mountains, waves are mainly driven by local, thermal induced winds which mainly occur at noon. Lake Kochelsee is open to the north, so longer lasting wind can build up waves on the lake surface. ## 4 Conclusions and Outlook Open-loop tracking of the reflected signals allows the determination of the relative altimetric height with 2-cm precision. Different height changes can be observed at Kochelsee and Walchensee which reflect the different wind and wave height conditions at the two lakes. The relationship between the observed height changes and wind speed (e.g. Caparrini and Martin-Neira (1998), Lin et al. (1999), Komjathy et al. (2000), Zuffada et al. (2003), Cardellach et al. (2003)) will be subject of further studies. The present receiver implementation is limited to the observation of one GPS satellite at a time. To fully use the potential of GPS reflections the receiver will be modified to keep track of several GPS reflections simultaneously. Our results suggest that open-loop tracking is possible with low-gain and wide field-of-view antennas, showing the potential of this method also for space-based measurements of GPS reflections. ## Acknowledgments This work would not have been possible without the open source projects OpenSource GPS and RTAI-Linux. We thank Clifford Kelley and the developers of RTAI-Linux for making their work available. Helpful discussion with Robert Treuhaft and Philipp Hartl are gratefully acknowledged. We thank T. Schmidt, C. Selke and A. Lachmann for their help and technical support. The ECMWF provided meteorological analysis fields. ## References * Anderson (2000) Anderson, K. (2000). Determination of water level and tides using interferometric observations of GPS signals. _Journal of Atmospheric and Oceanic Technology_, 17:1118-1127. * Beyerle et al. (2002) Beyerle, G., Hocke, K., Wickert, J., Schmidt, T., and Reigber, C. (2002). GPS radio occultations with CHAMP: A radio holographic analysis of GPS signal propagation in the troposphere and surface reflections. _Journal of Geophysical Research_, 107(D24):doi:10.1029/2001JD001402. * Caparrini and Martin-Neira (1998) Caparrini, M. and Martin-Neira, M. (1998). Using reflected GNSS signals to estimate SWH over wide ocean areas. _ESTEC Working Paper_, 2003. * Cardellach et al. (2003) Cardellach, E., Ruffini, G., Pino, D., Rius, A., Komjathy, A., and L., G. J. (2003). Mediterranean ballon experiment: ocean wind speed sensing from the stratosphere, using GPS reflections. _Remote Sensing of Environment_, 88(3):doi:10.1016/S0034-4257(03)00176-7. 
* Fu and Cazenave (2001) Fu, L. L. and Cazenave, A., editors (2001). _Satellite Altimetry and Earth Sciences_, volume 69 of _International Geophysical Series_. Academic Press. * Garrison and Katzberg (2000) Garrison, J. L. and Katzberg, S. J. (2000). The application of reflected GPS signals to ocean remote sensing. _Remote Sensing of Environment_, 73:175-187. * Garrison et al. (1998) Garrison, J. L., Katzberg, S. J., and Hill, M. I. (1998). Effect of sea roughness on bistatically scattered range coded signals from the global positioning system. _Geophysical Research Letter_, 25(13):2257-2260. * Garrison et al. (2002)GPS SPS (1995). _GPS SPS Signal Specification_. GPS NAVSTAR, 2 edition. * Hajj and Zuffada (2003) Hajj, G. and Zuffada, C. (2003). Theoretical description of a bistatic system for ocean altimetry using the GPS signal. _Radio Science_, 38(5):doi:10.1029/2002RS002787. 1089. * Kelley et al. (2002) Kelley, C., Barnes, J., and Cheng, J. (2002). OpenSource GPS: Open source software for learning about GPS. In _ION GPS 2002_, pages 2524-2533, Portland, USA. * Komjathy et al. (2000) Komjathy, A., Zavorotny, V. U., Axelrad, P., Born, G. H., and Garrison, J. L. (2000). GPS signal scattering from sea surface: Wind speed retrieval using experimental data and theoretical model. _Remote Sensing of Environment_, 73:162-174. * Lowe et al. (2002a) Lowe, S. T., LaBrecque, J. L., Zuffada, C., Romans, L. J., Young, L. E., and Hajj, G. A. (2002a). First spaceborne observation of an earth-reflected GPS signal. _Radio Science_, 29(10):doi:10.1029/2000RS002539. * Lowe et al. (2002b) Lowe, S. T., Zuffada, C., Chao, Y., Kroger, P., Young, L. E., and LaBrecque, J. L. (2002b). 5-cm-precision aircraft ocean altimetry using GPS reflections. _Geophysical Research Letter_, 29(10):doi:10.1029/2002GL014759. * Martin-Neira (1993) Martin-Neira, M. (1993). A passive reflectometry and interferometry system (PARIS): Application to ocean altimetry. _ESA Journal_, 17:331-355. * Martin-Neira et al. (2001) Martin-Neira, M., Caparrini, M., Font-Rossello, J., Lannelongue, S., and Serra, C. (2001). The paris concept: An experimental demonstration of sea surface altimetry using GPS reflected signals. _IEEE Transactions on Geoscience and Remote Sensing_, 39:142-150. * Martin-Neira et al. (2002) Martin-Neira, M., Colmenarejo, P., Ruffini, G., and Serra, C. (2002). Altimetry precision of 1 cm over a pond using the wide-lane carrier phase of gps reflected signals. _Canadian Journal of Remote Sensing_, 28(3):pp. 394-403. * Parkinson and Spilker (1996) Parkinson, B., W. and Spilker, J. J., editors (1996). _Global Positioning System: Theory and Application_, volume 163 of _Progress in Astronautics and Aeronautics_. American Institute of Aeronautics. * Rius et al. (2002) Rius, A., Aparicio, J. M., Cardellach, E., Martin-Neira, M., and Chapron, B. (2002). Sea surface state measured using GPS reflected signals. _Geophysical Research Letter_, 29(23):doi:10.1029/2002GL015524. * Shum et al. (1997) Shum, C. K., Woodworth, P. L., Andersen, O. B., Egbert, G. D., Francis, O., King, C., Klosko, S. M., Le Provost, C., Li, X., Molines, J. M., Parke, M. E., Ray, R. D., Schlax, M. G., Stammer, D., Tierney, C. C., Vincent, P., and Wunsch, C. I. (1997). Accuracy assessment of recent ocean tide models. _Journal of Geophysical Research_, 102(C11):25173-25194. * Treuhaft et al. (2001) Treuhaft, R., Lowe, S., Zuffada, C., and Chao, Y. (2001). 2-cm GPS altimetry over Crater Lake. _Geophysical Research Letter_, 22(23):4343-4346. 
* ZARLINK (2001) ZARLINK (2001). _GP2021 GPS 12 Channel Correlator_, DS4077-3.2 edition. [http://www.zarlink.com](http://www.zarlink.com). * Zuffada et al. (2003) Zuffada, C., Fung, A., Parker, J., Okolicanyi, M., and Huang, E. (2003). Polarization properties of the gps signal scattered off a wind-driven ocean. _IEEE Transactions on Antennas and Propagation_. List of Symbols \\(R\\): receiver position \\(P\\): specular reflection point position \\(\\epsilon\\): elevation angle of the GPS satellite above local horizon plane at R \\(\\delta\\): observed path difference between direct and reflected signal path \\(r_{E}\\): Earth radius \\(H\\): receiver height \\(h\\): height variations normal to tangential plane at P \\(\\alpha\\): angle of reflection above tangential plane at P \\(\\gamma\\): angle between normal of tangential plane and local horizon plane at P \\(s\\): arc length from subreceiver point to specular reflection point P \\(I_{d}\\): in-phase correlation sum of the direct data \\(I_{r}\\): in-phase correlation sum of the reflected data \\(Q_{r}\\): quad-phase correlation sum of the reflected data \\(\\tilde{I}_{r}\\): \\(I_{r}\\) demodulated from navigation message \\(\\tilde{Q}_{r}\\): \\(Q_{r}\\) demodulated from navigation message \\(\\phi\\): phase \\(\\phi_{a}\\): accumulated phase \\(\\lambda_{L1}\\): L1 carrier wavelength \\(\\epsilon_{eph}\\): elevation angle calculated from broadcast ephemeris data \\(\\Delta\\epsilon_{tropo}\\): tropospheric correction to elevation angle Figure 1: Geometry used to express the observed path difference \\(\\delta\\) in terms of the known receiver position R with height H and the GPS satellite elevation angle \\(\\epsilon\\) and the calculated position of the specular reflection point P. \\(h\\) denotes the height variations normal to the tangential plane at P. Note that \\(\\epsilon\\) has to be corrected by \\(\\Delta\\epsilon_{tropo}\\) due to the bending effect caused by the Earth’s troposphere. Figure 2: Delay mapped waveform of a reflection event (PRN 16) at 1334:17 UTC 8 July 2003, antenna oriented towards Kochelsee. The delay is given in relation to the maximum peak of the direct signal. Blue circles and red triangles indicate 2 measurements (0.5-second duration) starting 120 (blue) and 267 (red) seconds after the start of the measurement. In the second case (red) the 2-chip-wide interval of covered chip code offsets is centered at the maximum of the reflected signal. The points reveal the measured waveform of the direct and reflected correlation signal. The thin black triangle marks the theoretical C/A code correlation function of the direct signal. The grey box marks the maximum of the reflected signal. Figure 3: Panel A shows the demodulated reflected in- and quad-phase data \\(\\tilde{I}_{r}\\) (blue circles) and \\(\\tilde{Q}_{r}\\) (red triangles) (PRN 16, elevation from \\(11.04^{\\circ}\\) to \\(10.99^{\\circ}\\)), antenna oriented towards Kochelsee at 1334:17 UTC 8 July 2003, as a function of time since measurement start. With Eq. 2 the phase \\(\\phi\\) (Panel B) and from Eq. 4 and 5 the relative height \\(h(t)-h(t_{0})\\) is calculated (Panel C), with \\(H(t_{0})=1022.5\\) m. Figure 4: The left panels show relative height measurements at Kochelsee (PRN 16), starting at 1334:17 UTC 8 July 2003 (Panel A) and starting at 1327:17 UTC 10 July 2003 (Panel B) as a function of time since the start of the observation. 
PRN 16 changed elevation from \\(11.0^{\\circ}\\) to \\(10.4^{\\circ}\\) (Panel A) and from \\(11.4^{\\circ}\\) to \\(11.3^{\\circ}\\) (Panel B). On the right panels height measurements at Walchensee are shown, PRN 20 (elevation from \\(14.7^{\\circ}\\) to \\(14.1^{\\circ}\\)), starting at 1257:15 UTC 8 July 2003 (Panel C) and PRN 11 (elevation from \\(14.3^{\\circ}\\) to \\(13.6^{\\circ}\\)), starting at 1110:21 UT 9 July 2003 (Panel D). \\(H(t_{0})=1022.5\\) m (Kochelsee) and \\(H(t_{0})=827.5\\) m (Walchensee).
**Abstract.** Results from a GPS reflectometry experiment with a 12-channel ground-based GPS receiver above two lakes in the Bavarian Alps are presented. The receiver measures in open-loop mode the coarse/acquisition code correlation function of the direct and the reflected signal of one GPS satellite simultaneously. The interference between the coherently reflected signal and a model signal, which is phase-locked to the direct signal, causes variations in the amplitude of the in-phase and quad-phase components of the correlation sums. From these amplitude variations the relative altimetric height is determined within a precision of 2 cm.
# Destabilization of the thermohaline circulation by transient perturbations to the hydrological cycle

Valerio Lucarini, Dipartimento di Matematica ed Informatica, Universita di Camerino, Via Madonna delle Carceri, 62032 Camerino (MC), Italy

Sandro Calmanti and Vincenzo Artale, ENEA-CLIM-MOD, Via Anguillarese 301, 00060 S. Maria di Galeria (Roma), Italy

[email protected]. Please address correspondence to: Valerio Lucarini, Via Palestro 7, 50123 Firenze, Italy. Tel: +393488814008.

## 1. Introduction

The thermohaline circulation (THC) plays a major role in the global circulation of the oceans as pictured by the conveyor belt scheme (Weaver and Hughes 1992; Stocker 2001). The currently accepted picture is that the meridional overturning and the associated heat and freshwater transports are energetically sustained by the action of winds and tides, controlling turbulent mixing in the interior of the ocean (Munk and Wunsch 1998; Rahmstorf 2003; Wunsch and Ferrari 2004). However, for climatic purposes, many authors have successfully assumed a dependence of the strength of the THC on the meridional gradients in the buoyancy of the water masses (Stommel 1961; Weaver and Hughes 1992; Tziperman et al. 1994; Marotzke 1996; Rahmstorf 1996; Gnanadesikan 1999; Stocker et al. 2001). The present-day THC of the Atlantic Ocean is characterized by a strongly asymmetric structure. Deep convection is observed at high latitudes in the northern hemisphere. The water masses formed in the northern regions can be followed as they cross the equator and observed as they connect with the other major basins of the world ocean (Weaver and Hughes 1992; Rahmstorf 2000, 2002; Stocker et al. 2001). Idealized and realistic coupled GCM experiments have shown that such equatorial asymmetry may be a consequence of the large scale oceanic feedbacks leading to the existence of multiple equilibria (Bryan 1986; Manabe and Stouffer 1988; Stocker and Wright 1991; Manabe and Stouffer 1999a; Marotzke and Willebrand 1991; Hughes and Weaver 1994). The equatorial asymmetry is also responsible for a large portion of the global poleward heat transport (Broecker 1994; Rahmstorf and Ganopolski 1999; Stocker 2000; Stocker et al. 2001). Consequently, large climatic shifts are often associated with important changes in the large scale oceanic circulation. From a paleoclimatic perspective, major climatic shifts may be associated with the complete shutdown of the THC (Broecker et al. 1985; Boyle and Keigwin 1987; Keigwin et al. 1994; Rahmstorf 1995, 2002). In fact, the THC is sensitive to changes in the climate since the North Atlantic Deep Water (NADW) formation is affected by variations in air temperature and in precipitation in the Atlantic basin (Rahmstorf and Willebrand 1995; Rahmstorf 1996). With respect to the present climate change, most GCMs have shown that the changes in radiative forcing caused by the ongoing modification of the greenhouse gases in the atmosphere could imply a weakening of the THC. Large increases of the moisture flux and/or of the surface air temperature in the deep-water formation regions could inhibit the sinking of the water in the northern Atlantic (Weaver and Hughes 1992; Manabe and Stouffer 1993; Rahmstorf 1997, 1999b,a, 2000; Wang et al. 1999a,b).
Moreover, models of different level of complexity, from box models (Tziperman and Gildor 2002; Lucarini and Stone 2003a,b), to EMICs (Stocker and Schmittner 1997; Schmittner and Stocker 1999) to GCMs (Stouffer and Manabe 1999; Manabe and Stouffer 1999a,b, 2000) have shown how the rate of increase of forcing may be relevant for determining the response of the system. In particular Stocker and Schmittner (1997) have performed a systematic analysis of the stability of the meridional overturning circulation as a function of the climate sensitivity and of the rate of CO\\({}_{2}\\) increase. However they made no explicit reference as to the mechanism driving the response of their coupled model. In this work we study the THC stability using a simplified 2D Boussinesq ocean model, which has been presented in Artale et al. (2002). Two-dimensional models have been widely adopted (Cessi and Young 1992; Vellinga 1996) and have proved their ability in describing the most relevant feedbacks of the system (Dijkstra and Neelin 1999). Moreover, the low computational cost of such models permits extensive parametric studies. Our work wishes to bridge both in terms of methodology and results the studies performed with simplified models with the more physically sensible analyses performed with EMICs and GCMs. We explicitly analyze what is the role of the rates of changes of the hydrological forcing in determining the response of the system. In particular we determine, for a given initial state and a given rate of increase of the forcing, which are the thresholds in total change of the forcing beyond which destabilization of the THC occurs. The treatment of a wide range of temporal scales for the increase of the forcing allows us to join on naturally and continuously (Lucarini and Stone 2003a,b) the analysis of quasi-static perturbations, which have been usually addressed with the study of the bifurcations of the system (Rahmstorf 1995, 1996; Stone and Krasovskiy 1999; Scott et al. 1999; Wang et al. 1999a; Titz et al. 2002b,a), with the study of the effects of very rapid perturbations (Rahmstorf 1996; Scott et al. 1999;Wiebe and Weaver 1999), which are usually differently framed. Our paper is organized as follows. In section 2 we provide a description of the model we adopt in this study. In section 3 we explore the parameters space descriptive of the hydrology of the system by considering quasi-static perturbations. We determine which hydrological patterns are compatible with multiple equilibria and which hydrological patterns define a unique stationary state. In section 4 we extend the analysis to time-dependent perturbations. We analyze the temporal evolution of finite amplitude modifications of the hydrological cycle that are able to destabilize the equilibria of this advective system. We propose a simple relation between an estimate of the critical rate of increase of forcing, which divides robustly _slow_ from _fast_ regimes, with an estimate of the characteristic advective time scale of the system. In section 5 we present our conclusions. In appendix A we present the dependence of the THC strength on the value of the vertical mixing coefficient. ## 2. The model We consider the two-dimensional convection equations in the Boussinesq approximation. The motion is forced by buoyancy gradients only: gravity is the only external force, while Earth rotation is not explicitly considered. Buoyancy gradients are generated in the model by imposing heat and freshwater fluxes at the top boundary. 
Such fluxes are assumed to be representative of the interactions with the overlying atmosphere. We adopt a linearized equation of state for the sea water: \\[\\rho\\left(T,S\\right)=\\rho_{0}\\left(1-\\alpha T+\\beta S\\right) \\tag{1}\\] where the values of the coefficients of thermal expansion \\(\\alpha=8\\cdot 10^{-4}\\ ^{\\circ}C^{-1}\\) and haline contraction \\(\\beta=1.5\\cdot 10^{-3}psu^{-1}\\) have been chosen in order to provide a good approximation over a quite large range of salinity and temperature. The linear approximation is commonly adopted in conceptual models. However, we note that analyses performed on simple models show that small nonlinearities may induce self-sustained oscillations for the THC (Rivin and Tziperman 1997). Such nonlinearities in the equation of state might be especially relevant for the high latitude areas, since the well-known cabbeling effect occurs at low temperatures. The geometry of the model is descriptive of the Atlantic ocean, where we assume a depth of \\(5000m\\), an effective east-west extension of \\(6000Km\\), and a north-south extension of \\(13600Km\\) (\\(120^{\\circ}\\)) and a total volume \\(V=4.08\\times 10^{17}m^{3}\\). The only active boundary of the model is the air-sea interface. We select a relatively coarse uniform resolution with \\(N_{H}\\times N_{V}=64\\times 16\\) grid points, where \\(N_{H}\\) and \\(N_{V}\\) refer to the number of the horizontal and vertical grid points, respectively, in order to meet the computational requirements needed for performing a parametric study. A well known property of the THC system is the existence of regimes of multiple equilibria. When equatorially symmetric surface forcing is applied at the surface, the equilibria of the system fall into three well-distinct classes. One class is characterized by the presence of two equatorially symmetric thermally direct cells, where the deep water is formed at high latitudes. Another class is characterized by equatorially symmetric salinity driven cells, where deep water is formed in the equatorial region. Equilibria belonging to these two classes are observed when either surface thermal or haline buoyancy forcings are largely dominant, respectively. When the two forcings have comparable intensity, multiple equilibria regimes - which constitute a third class - appear. In this case, the equilibria are characterized by the dominance of one overturning cell. If also geometry is symmetric with respect to the equator, the two equilibria have odd parity and map into each other by exchanging the sign of the latitude. **a.** _Boundary conditions_ The boundary condition for the sea surface salinity is defined in terms of the imposed atmospheric freshwater flux \\(F\\) affecting the surface grid box of volume \\(v=V/\\left(N_{H}\\times N_{V}\\right)\\): \\[\\partial_{t}S_{i,j=1}=-F_{i}\\frac{S_{0}}{v}, \\tag{2}\\] where we indicate the value of the bulk variable \\(S\\) at the grid point \\((i,j)\\) with \\(S_{i,j}\\) and the value of the interface variable \\(F\\) at the grid point \\((i,1)\\) with \\(F_{i}\\)We underline that, in order to simplify the expressions, in equation (2) (and in the following ones) we have not explicitly adopted a discrete notation for the time variable. We emphasize that in expression (2) we have neglected the contribution in terms of mass of the freshwater flux to the ocean (Marotzke 1996). We divide the water basin into three distinct regions by using a suitable analytical expression of the freshwater flux. 
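A minimal sketch of Eqs. (1) and (2) is given below. The reference density \(\rho_{0}\), the example flux value and the use of plain numpy are illustrative assumptions; only \(\alpha\), \(\beta\), \(S_{0}\), the basin volume and the grid size are taken from the text.

```python
import numpy as np

RHO0 = 1027.0   # kg m^-3, reference density (illustrative value, not given in the text)
ALPHA = 8e-4    # degC^-1, thermal expansion coefficient
BETA = 1.5e-3   # psu^-1, haline contraction coefficient
S0 = 35.0       # psu, conserved basin-average salinity

def density(T, S):
    """Linearized equation of state, Eq. (1)."""
    return RHO0 * (1.0 - ALPHA * T + BETA * S)

def surface_salinity_tendency(F, v):
    """Surface salinity tendency implied by the freshwater flux, Eq. (2)."""
    return -np.asarray(F, dtype=float) * S0 / v

# Surface grid-box volume for the stated basin: V = 4.08e17 m^3, 64 x 16 boxes.
v_box = 4.08e17 / (64 * 16)
print(density(15.0, 35.0), surface_salinity_tendency(1.0e4, v_box))
```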
The equatorial region (region E) is characterized by a net atmospheric export of freshwater, while the northern and southern high latitude regions (regions \\(N\\) and \\(S\\), respectively) are characterized by a positive atmospheric freshwater budget. We then consider the following functional form for the surface freshwater flux: \\[F_{i}=\\frac{2}{\\pi}\\Phi_{i}\\cos\\left(2\\pi i/N_{H}\\right) \\tag{3}\\] where: \\[\\Phi_{i}=\\begin{cases}\\Phi_{S},\\quad i\\leq 1/4\\;N_{H}\\\\ \\Phi_{N},\\quad i\\geq 3/4\\;N_{H}\\\\ \\Phi_{E}=-1/2\\left(\\Phi_{N}+\\Phi_{S}\\right),\\quad 1/4\\;N_{H}<i<3/4\\;N_{H} \\end{cases} \\tag{4}\\] The definition of \\(F_{i}\\) is such that \\(\\Phi_{N}\\) and \\(\\Phi_{S}\\) are respectively the value of the total net atmospheric freshwater fluxes into regions \\(N\\) and \\(S\\), while \\(\\Phi_{E}\\) is constrained in order to have conservation of the salinity of the ocean. This latter condition is needed to allow the system to reach equilibrium states. Therefore, the ocean conserves its average salinity \\(S_{0}=35\\,psu\\). Since the atmospheric freshwater budgets for the Atlantic ocean are thought to be positive for the regions \\(N\\) and \\(S\\)(Baumgartner and Reichel, 1975), we consider the case \\(\\Phi_{N},\\Phi_{S}\\geq 0\\). Three relevant examples of surface freshwater flux are depicted in figure 1. The sea surface temperature is restored to a time-independent climatological temperature field \\(\\overline{T}_{i}\\) with a newtonian relaxation law: \\[\\partial_{t}T_{i,j=1}=\\lambda\\left[\\overline{T}_{i}-T_{i,j=1}\\right] \\tag{5}\\] where the constant \\(\\lambda\\) describes the efficiency of the process. Such very simplified ocean-atmosphere coupling (Marotzke and Stone, 1995; Marotzke, 1996) synthetically describes the combined effects of radiative heating-cooling and of the atmospheric latent and sensible heat meridional transport. The climatological temperature profile \\(\\overline{T}_{i}\\) profile is set with the following equatorially symmetric analytical form: \\[\\overline{T}_{i}=\\overline{T}_{0}-\\Delta\\overline{T}\\cos\\left(2\\pi i/N_{H} \\right), \\tag{6}\\] where \\(2\\Delta\\overline{T}\\) is the imposed equator-to-pole temperature gradient and \\(\\overline{T}_{0}\\) is the average value of \\(\\overline{T}_{i}\\). In accordance to definition of the \\(E\\), \\(N\\), and \\(S\\) regions with respect to their atmospheric freshwater budget, we observe that regions \\(N\\) and \\(S\\) are _cold_ at surface, _i.e._\\(\\overline{T}_{i}\\) is lower than \\(\\overline{T}_{0}\\), while the region \\(E\\) is _warm_ at surface, _i.e._\\(\\overline{T}_{i}\\) is larger than \\(\\overline{T}_{0}\\). Since in our work we explore the stability of the THC with respect to the hydrology of the system, we keep fixed the parameters determining the restoring temperature profile \\(\\overline{T}_{i}\\). We set \\(\\overline{T}_{0}=15^{\\circ}C\\), since it represents a reasonable average surface climatological temperature, and we choose \\(\\Delta\\overline{T}\\sim 23.5^{\\circ}C\\), which corresponds to forcing the surface equator-to-pole temperature gradient to be \\(\\sim 47^{\\circ}C\\). Such a choice also implies that the average \\(\\overline{T}\\) is \\(\\sim 0^{\\circ}C\\) in the two high latitude regions \\(N\\) and \\(S\\) and is \\(\\sim 30^{\\circ}C\\) in the equatorial region \\(E\\). 
Furthermore, following the parameterization proposed by Marotzke (1996), we choose the restoring constant \\(\\lambda\\sim(1y)^{-1}\\), which is reasonable for the description of the temperature relaxation of the uppermost \\(5000/16\\:m\\sim 300\\:m\\) of the ocean. ## 3 Quasi-static hysteresis and multiple equilibria ### Symmetric case: \\(\\Phi_{S}=\\Phi_{N}\\) We start by considering the existence of a region of multiple equilibria in the space of parameters that defines the hydrology of the system, _i.e._ the \\((\\Phi_{S},\\Phi_{N})\\) plane. The surface thermal forcing is kept fixed. In the space of parameters that we are considering, symmetric forcings constitute the bisectrix of the \\((\\Phi_{S},\\Phi_{N})\\) plane. The allowed circulation patterns change when adjusting the parameters along the bisectrix, as discussed in section 2. For a weak hydrological cycle (\\(\\Phi_{N}=\\Phi_{S}<\\Phi_{inf}\\)), we observe a stable symmetric circulation with downwelling at high latitudes. If the hydrological cycle is very strong (\\(\\Phi_{N}=\\Phi_{S}>\\Phi_{sup}\\)), we obtain a stable symmetric circulation with downwelling of warm, saline water at the equator. In the intermediate regimes (\\(\\Phi_{inf}\\leq\\Phi_{N}=\\Phi_{S}\\leq\\Phi_{sup}\\)), the system has multiple equilibria. For later convenience, we define \\(\\Phi_{av}\\equiv 0.5\\left(\\Phi_{inf}+\\Phi_{sup}\\right)\\), which corresponds to an _average_ hydrological cycle. The described equilibria are depicted in figures 2a), 2b), 3a), and 3b) respectively. The two points (\\(\\Phi_{S}=\\Phi_{inf},\\Phi_{N}=\\Phi_{inf}\\)) and (\\(\\Phi_{S}=\\Phi_{sup},\\Phi_{N}=\\Phi_{sup}\\)) are bifurcation points in the one-dimensional subspace \\(\\Phi_{N}=\\Phi_{S}\\)(Dijkstra and Neelin, 1999). We consider the northern sinking equilibrium state (note that northern and southern sinking patterns are equivalent in these terms) realized for \\(\\Phi_{S}=\\Phi_{N}=\\Phi_{av}\\) as the reference state of the system. **b.** _General case: \\(\\Phi_{S}\ eq\\Phi_{N}\\)_ The study of the multiple equilibria states can be extended to the case of non-symmetric forcings, _i.e._\\(\\Phi_{S}\ eq\\Phi_{N}\\). We wish to obtain an estimate of the shape of the domain \\(\\Gamma\\) in the \\((\\Phi_{S},\\Phi_{N})\\) plane where the system has multiple equilibria. Since, apart from the freshwater flux boundary conditions, the model is wholly symmetric with respect to the equator, we expect that any property of the system is invariant for exchange of \\(\\Phi_{N}\\) and \\(\\Phi_{S}\\), so that we have that \\(\\Gamma\\) is _a priori_ symmetric with respect to the bisectrix. A fundamental property of the region \\(\\Gamma\\) is that, if we start from a point belonging to \\(\\Gamma\\) and change quasi-statically \\(\\Phi_{S}\\) and \\(\\Phi_{N}\\) along a closed path so that the point \\((\\Phi_{S},\\Phi_{N})\\) remains inside \\(\\Gamma\\), we get back to the initial equilibrium state. Instead, if the closed path crosses the boundary of \\(\\Gamma\\), the initial equilibrium state may not be recovered at the end of the loop, since the final state depends on the path. As a starting position of the loop, we consider a northern sinking equilibrium corresponding to one of the stable states of the point \\(\\Phi_{inf}\\leq\\Phi_{S}=\\Phi_{N}\\leq\\Phi_{sup}\\) on the bisectrix. By definition, such point belongs to \\(\\Gamma\\). 
However, the initial position of the loop - if belonging to \\(\\Gamma\\) - is not relevant in determining the shape of the region of multiple equilibria. We increase the value of \\(\\Phi_{N}\\) at a given slow constant rate \\(r_{s}\\) over a time \\(t_{0}\\) and then decrease it back to the initial value at the rate \\(-r_{s}\\). By slow rate we mean that \\(r_{s}\\ll\\Phi_{av}/\\tau\\); we select \\(r_{s}=\\Phi_{av}/100\\tau\\). If the initial state is not recovered at the end of the integration, we deduce that the path has crossed the boundary of \\(\\Gamma\\). By bisection, we can determine the critical value \\(t_{0}^{crit}\\), which determines \\((\\Phi_{S},\\Phi_{N}+r_{s}\\cdot t_{0}^{crit}=\\Phi_{N}+\\Delta\\Phi_{N}^{crit})\\) as belonging to the boundary of \\(\\Gamma\\). A schematization of this procedure is depicted in figure 4a). By changing the initial point along the considered segment of the bisectrix, we are able to define the whole boundary of \\(\\Gamma\\) above the bisectrix with a good degree of precision. Then, the symmetry properties of \\(\\Gamma\\) allow us to easily deduce the portion of its boundary lying below the bisectrix. In figure 5 we present the estimate for the the boundary of the bistable region \\(\\Gamma\\) obtained following this strategy. We conclude that the boundary of \\(\\Gamma\\) is constituted by the bifurcation points of the system. We note that in the more general case of the 2D \\((\\Phi_{S},\\Phi_{N})\\) plane, \\((\\Phi_{S}=\\Phi_{inf},\\Phi_{N}=\\Phi_{inf})\\) and \\((\\Phi_{S}=\\Phi_{sup},\\Phi_{N}=\\Phi_{sup})\\) result to be cusp points (Dijkstra 2001). ## 4 Effects of transient perturbations The analysis we have performed captures the equilibrium properties of the system, but is not sufficient to gain insight on the response of the system to transient changes in the forcings, which in general can range from instantaneous to quasi-static perturbations. Such sort of problems has been first investigated in the seminal paper by Stocker and Schmittner (1997) with the purpose of determining how efficiently the negative feedbacks of the system can counteract external perturbations, depending on the temporal patters of the destabilizing forcings. In our case, it is reasonable to expect that, if we change \\(\\Phi_{N}\\) at a finite rate, the system can destabilize before reaching the boundary of \\(\\Gamma\\). In fact, a fast perturbation can overcome the ability of the advective feedback to stabilize the system. We also expect that the faster the perturbation, the stronger such effect, _i.e._ the smaller the total perturbation required to obtain destabilization. As in the previous case, the analysis starts by considering as initial equilibria the northern sinking states under symmetric forcing. In this case, we increase the value of \\(\\Phi_{N}\\) at a constant rate \\(r_{f}\\) over a time \\(t_{0}\\), we let the system adjust for a time \\(10\\tau\\), so that transients can die out, and then decrease \\(\\Phi_{N}\\) back to the initial value at the slow rate \\(-r_{s}\\), corresponding to a quasi-static change. We schematically depict such strategy for three different values of \\(r_{f}\\) in figure 4b). If at the end of the process the initial state is not recovered and southern sinking state is instead realized, the system has made a transition to the other branch of the multiple equilibria. Depending on the choice of \\(r_{f}\\), we obtain different values for the critical perturbation causing such transition. 
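Both the quasi-static and the transient experiments therefore share a piecewise-linear forcing history for \(\Phi_{N}\); a sketch of such a history is given below. The time step and the particular ramp length chosen in the example are arbitrary illustrative values.

```python
import numpy as np

def phi_n_history(phi0, rate_up, t_ramp, hold, rate_down, dt=1.0):
    """Piecewise-linear history of Phi_N: ramp up at `rate_up` for `t_ramp`,
    hold for `hold` (10*tau in the transient runs, 0 in the quasi-static ones),
    then ramp back down to phi0 at `rate_down` (> 0).  Times in years."""
    d_phi = rate_up * t_ramp
    t_down = d_phi / rate_down
    t = np.arange(0.0, t_ramp + hold + t_down + dt, dt)
    phi = np.piecewise(
        t,
        [t <= t_ramp, (t > t_ramp) & (t <= t_ramp + hold), t > t_ramp + hold],
        [lambda x: phi0 + rate_up * x,
         lambda x: phi0 + d_phi,
         lambda x: np.maximum(phi0, phi0 + d_phi - rate_down * (x - t_ramp - hold))],
    )
    return t, phi

# Fast ramp (r_f = Phi_av / tau) held for 10*tau, then a quasi-static return.
tau, phi_av = 350.0, 0.39            # years, Sv
t, phi = phi_n_history(phi_av, phi_av / tau, 2 * tau, 10 * tau, phi_av / (100 * tau))
print(round(t[-1]), round(phi.max(), 2))
```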
Along the lines of the quasi-static analysis, varying the initial point, we can obtain for each value of \\(r_{f}\\) a curve which describes the critical perturbations. In figure 6 we report the curves obtained by selecting, from fast to slow, \\(r_{f}=\\infty\\), \\(r_{f}=\\Phi_{av}/\\tau\\), \\(r_{f}=\\Phi_{av}/3\\tau\\), \\(r_{f}=\\Phi_{av}/10\\tau\\), and \\(r_{f}=r_{s}=\\Phi_{av}/100\\tau\\). If we select \\(r_{f}=r_{s}\\), we obtain by definition the previously described upper branch of the boundary of \\(\\Gamma\\) depicted in figure 5, since the presence of the relaxing time \\(10\\tau\\) is not relevant. On the other extreme, if we apply instantaneous changes of \\(\\Phi_{N}\\) (\\(r_{f}=\\infty\\)), we obtain information on the minimum change in \\(\\Phi_{N}\\) that is needed to destabilize the system for any initial state having symmetric surface forcing. In fact, the corresponding curve is the closest to the bisectrix. Considering intermediate values of \\(r_{f}\\), we obtain consistently that the curves of the critical perturbations lye within the two extremes obtained with \\(r_{f}=r_{s}\\) and \\(r_{f}=\\infty\\). Moreover, we have that the curves are properly ordered with respect to the value of \\(r_{f}\\), _i.e._ the smaller \\(r_{f}\\), the closer the corresponding curve to the upper branch of the boundary of \\(\\Gamma\\). Previous studies, albeit performed with coupled EMICs, obtain a qualitatively similar dependence of thresholds on the rate of increase of the forcings (Stocker and Schmittner 1997; Schmittner and Stocker 1999), while in other studies (on GCMs) where the full collapse of the THC is not obtained, it is nevertheless observed that the higher the rate of increase of the forcing, the larger the decrease of THC realized (Stouffer and Manabe 1999). The most important result is that we can identify two separate regimes. If the surface forcing changes with a rate faster than \\(\\Phi_{av}/\\tau\\), the response of the system is virtually identical to the case of instantaneous changes. On the other side, if the rate of change is smaller that \\(\\Phi_{av}/10\\tau\\), the response of the system is very close to the case of quasi-static perturbations. The curve corresponding to \\(r_{f}=\\Phi_{av}/3\\tau\\) is geometrically about midway between these two regimes and is not patched to either. We have that the response of the system to varying external perturbations dramatically changes when the time scale of the variation of the external forcing changes by only one order of magnitude. Therefore, we can interpret \\(r_{c}=\\Phi_{av}/3\\tau\\) as an estimate of the critical rate of change of the hydrology of the system. It follows that with \\(r_{c}\\) we identify a relation between the class of changes in the external forcing that distinctively affect the stability of the system and the internal time scale of the system. The previous results prove to be robust with respect to changes in the freshwater forcing of the initial states. As an example, in figure 7 we show the results of a similar analysis referring to initial states having non-symmetric surface freshwater forcing (\\(\\Phi_{N}=2\\Phi_{S}\\)). We explored the behavior of the system in the \\((\\Phi_{S},\\Phi_{N})\\) plane considering only changes in \\(\\Phi_{N}\\) for computational convenience. Nevertheless, coherent results can be obtained by changing both parameters. In this case \\(r_{f}\\) has to be interpreted as the sum of the absolute values of the rates of change of the two parameters. 
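For a fixed ramp rate \(r_{f}\), the critical total perturbation is again located by bisection on the ramp duration; the outer loop is sketched below. The `destabilizes` predicate is a hypothetical placeholder standing in for a full model integration and the subsequent check of which branch is recovered.

```python
def critical_perturbation(destabilizes, rate, t_lo, t_hi, tol=1.0):
    """Bisect on the ramp duration t0 to bracket the smallest perturbation
    Delta Phi_N = rate * t0 that flips the circulation at the given rate.

    `destabilizes(rate, t0)` must return True if the run started from the
    chosen equilibrium ends up on the other branch; here it is only a
    placeholder for an actual model integration."""
    assert not destabilizes(rate, t_lo) and destabilizes(rate, t_hi)
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if destabilizes(rate, t_mid):
            t_hi = t_mid
        else:
            t_lo = t_mid
    return rate * t_hi

# Placeholder predicate: pretend the threshold is Delta Phi_N = 0.2 Sv.
fake = lambda rate, t0: rate * t0 > 0.2
print(round(critical_perturbation(fake, 0.39 / 350.0, 0.0, 1.0e4), 3))   # ~0.2 Sv
```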
Changing the parameter \\(\\Delta\\overline{T}\\) implies a change in the values of \\(\\Phi_{inf}\\) and \\(\\Phi_{sup}\\), since the meridional gradient of the total buoyancy forcing is changed. However, the response of the THC to transient perturbations does not change qualitatively. Instead, figures 5-7 must be rescaled linearly with the proper values of \\(\\Phi_{sup}\\) and \\(\\Phi_{inf}\\). Such linear relation is a direct consequence of the use of a linearized equation of state for the sea water. ## 5. Conclusions This work provides a complete analysis of the stability of the ocean system under examination with respect to perturbation to the hydrological cycle. We have provided a simple description of the profile of the net freshwater flux into the ocean which is fully specified when only two parameters, which are related to the total freshwater budget of the two high-latitudes regions, are specified. We have first analyzed the bifurcations of the symmetric system, which might be taken as the prototype of a system that has equal probabilities of falling in two different equilibrium configurations. We have found that the system is characterized by two bifurcations, which delimitate a domain of multiple equilibria. We have then extended the study to the general case where asymmetries in the hydrology are considered. We have produced a two-dimensional stability graph and have pointed out the presence of a region \\(\\Gamma\\) where multiple equilibria are realized. Our results summarize the information that can be obtained with multiple hysteresis studies. In this study we emphasize that the rate at which a perturbation in the hydrological cycle is applied to a simple model of the THC may dramatically affect its stability. When general time-depends perturbation to the hydrological cycle are considered, we obtain that the shorter the time scale of the forcing, the smaller the total perturbation required to disrupt the initial pattern of the circulation. The observed relevance of the temporal scale of the forcing in determining the response of our system to perturbations affecting the stability of the THC agrees with the findings of Tziperman and Gildor (2002), Lucarini and Stone (2003a,b) for box models, of Stocker and Schmittner (1997) and Schmittner and Stocker (1999) in the context of EMICSs, and of (Manabe and Stouffer 1999a), Manabe and Stouffer(1999b), Manabe and Stouffer (2000), and Stouffer and Manabe (1999) in the context of GCMs. Moreover, the saturation and patching effect observed for slowly and rapidly increasing perturbations, which allows the definition of _slow_ and _fast_ regimes, respectively, agrees qualitatively with the findings of Lucarini and Stone (2003a,b) for box models, and resembles some of the results - albeit obtained with a coupled model and considering CO\\({}_{2}\\) increases - presented in Stocker and Schmittner (1997) and in Schmittner and Stocker (1999). The main conceptual improvement we propose in this work is the existence of a relation between the critical rate of change of the forcing and the characteristic advective time scale of the system. We notice that the advective time scale results to depend on the inverse of the square root of \\(K_{V}\\), as discussed in appendix A. Therefore, a very important consequence of this analysis is that the efficiency of the vertical mixing might be also one of the key factors determining the response of the THC system to transient changes in the surface forcings. 
Future work should specifically address the details of the functional dependence of the critical rate on the the vertical diffusivity. Other relevant improvements to the present study could be the adoption of a more complex ocean model, descriptive of other ocean basins, as well as the consideration of a simplified coupled atmosphere-ocean model, where the effects of the radiative forcing can be more properly represented. _Acknowledgment_ We wish to thank for technical and scientific help Fabio Dalan, Antonello Provenzale, Peter H. Stone, and Antonio Speranza. ## Appendix A. Relevance of the vertical diffusivity \\(K_{v}\\) The vertical diffusivity \\(K_{V}\\), or, equivalently, the diapycnal diffusivity \\(K_{D}\\), is the critical parameter controlling the maximum THC strength \\(\\Psi_{max}\\) in ocean models (Bryan, 1987; Wright and Stocker, 1992). On the other hand, an estimate of its value in the real ocean is a subject of current research (Gregg et al., 2003). Scaling theories proposing a balance between vertical diffusion and advection processes suggest, in the case of three-dimensional hemispheric model of the Atlantic ocean, a power law dependence \\(\\Psi_{max}\\sim K_{V}^{2/3}\\)(Zhang et al., 1999; Dalan et al., 2004). In the case of two-dimensional models, the expected dependence is \\(\\Psi_{max}\\sim K_{V}^{1/2}\\)(Knutti et al., 2000). This relation is verified in our model in the range \\(0.6cm^{2}s^{-1}<K_{V}<4.0cm^{2}s^{-1}\\) (figure 8). In this study, the value of \\(K_{V}\\) has been selected so that the corresponding northern sinking equilibrium state characterized by an hydrological cycle determined by \\(\\Phi_{S}=\\Phi_{N}=\\Phi_{av}\\) has an overturning circulation \\(\\sim 30Sv\\). With this choice of \\(K_{V}\\) we can define a characteristic time scale \\(\\tau\\) for the system as \\(\\tau=V/\\Psi_{max}\\sim 350y\\). Similarly, if we change \\(K_{V}\\) over the range shown in figure 8, the advective time-scale would range over the interval \\(150y<\\tau<400y\\). These considerations will be useful in the final discussion of our results. Given the parameters chosen for our simulations, our model integrations estimate \\(\\Phi_{inf}\\sim 0.04Sv\\) and \\(\\Phi_{sup}\\sim 0.73Sv\\), so that \\(\\Phi_{av}\\) results to be \\(\\sim 0.39Sv\\). ## References * Artale et al. (2002) Artale, V., S. Calmanti, and A. Sutera, 2002: Thermohaline circulation sensitivity to intermediate-level anomalies. _Tellus A_, **54**, 159-174. * Baumgartner and Reichel (1975) Baumgartner, A. and E. Reichel, 1975: _The World Water Balance_. Elsevier, New York. * Boyle and Keigwin (1987) Boyle, E. A. and L. Keigwin, 1987: North Atlantic thermohaline circulation during the past 20000 years linked to high-latitude surface temperature. _Nature_, **335**, 335. * Broecker (1994) Broecker, W. S., 1994: Massive iceberg discharges as triggers for global climate change. _Nature_, **372**, 421. * Broecker et al. (1985) Broecker, W. S., D. M. Peteet, and D. Rind, 1985: Does the ocean-atmosphere system have more than one stable mode of operation. _Nature_, **315**, 21. * Bryan (1986) Bryan, F., 1986: High-latitude salinity effects and interhemispheric thermohaline circulations. _Nature_, **323**, 301. * Bryan (1987) -- 1987: Parameter sensitivity of primitive equation ocean general circulation models. _J. Phys. Oceanogr._, **17**, 970-985. * Cessi and Young (1992) Cessi, P. and W. R. Young, 1992: Multiple equilibria in two-dimensional thermohaline flow. _J. Fluid Mech._, **241**, 291-309. 
* Cessi et al. (2002)Dalan, F., P. H. Stone, I. Kamenkovich, and J. R. Scott, 2004: Sensitivity of climate to dyapicnal diffusion in the ocean - Part I: Equilibrium state. _J. Climate_, submitted. * Dijkstra (2001) Dijkstra, H. A., 2001: _Nonlinear Physical Oceanography_. Kluwer, Dordrecht. * Dijkstra and Neelin (1999) Dijkstra, H. A. and J. D. Neelin, 1999: Imperfections of the thermohaline circulation: Multiple equilibria and flux-correction. _J. Climate_, **12**, 1382-1392. * Gnanadesikan (1999) Gnanadesikan, A., 1999: A simple predictive model for the structure of the oceanic pycnocline. _Science_, **263**, 2077-2079. * Gregg et al. (2003) Gregg, M. C., T. B. Sanford, and D. P. Winkel, 2003: Reduced mixing from the breaking of internal waves in equatorial waters. _Nature_, **422**, 513-515. * Hughes and Weaver (1994) Hughes, T. C. M. and A. J. Weaver, 1994: Multiple equilibrium of an asymmetric two-basin model. _J. Phys. Ocean._, **24**, 619. * Keigwin et al. (1994) Keigwin, L. D., W. B. Curry, S. J. Lehman, and S. J. S., 19994: The role of the deep ocean in North Atlantic climate change between 70 and 130 ky ago. _Nature_, **371**, 323. * Knutti et al. (2000) Knutti, R., T. W. Stocker, and D. G. Wright, 2000: The effects of subgrid-scale parameterizations in a zonally averaged ocean model. _J. Phys. Ocean._, **54**, 2738-2752. Lucarini, V. and P. H. Stone, 2003a: Thermohaline circulation stability: a box model study - Part I: uncoupled model. _J. Climate_, submitted. * 2003b: Thermohaline circulation stability: a box model study - Part II: coupled model. _J. Climate_, submitted. * Manabe and Stouffer (1988) Manabe, S. and R. J. Stouffer, 1988: Two stable equilibria coupled ocean-atmosphere model. _J. Climate_, **1**, 841. * Manabe and Stouffer (1993) -- 1993: Century-scale effects of increased atmospheric CO\\({}_{2}\\) on the ocean-atmosphere system. _Nature_, **364**, 215. * Manabe and Stouffer (1999a) -- 1999a: Are two modes of thermohaline circulation stable? _Tellus_, **51A**, 400. * Manabe and Stouffer (1999b) -- 1999b: The role of thermohaline circulation in climate. _Tellus_, **51A-B(1)**, 91-109. * Manabe and Stouffer (2000) -- 2000: Study of abrupt climate change by a coupled ocean-atmosphere model. _Quat. Sci. Rev._, **19**, 285-299. * Marotzke (1996) Marotzke, J.: 1996, Analysis of thermohaline feedbacks. _Decadal Climate Variability: Dynamics and predicatibility_, Springer, Berlin, 333-378. * Marotzke and Stone (1995) Marotzke, J. and P. H. Stone, 1995: Atmospheric transports, the thermohaline circulation, and flux adjustments in a simple coupled model. _J. Phys. Ocean._, **25**, 1350-1360. Marotzke, J. and J. Willebrand, 1991: Multiple equilibria of the global thermohaline circulation. _J. Phys. Ocean._, **21**, 1372. * Munk and Wunsch (1998) Munk, W. and C. Wunsch, 1998: Abyssal recipes II. Energetics of the tides and wind. _Deep-Sea Res._, **45**, 1976-2009. * Rahmstorf (1995) Rahmstorf, S., 1995: Bifurcations of the Atlantic thermohaline circulation in response to changes in the hydrological cycle. _Climatic Change_, **378**, 145. * Rahmstorf (1996) -- 1996: On the freshwater forcing and transport of the Atlantic thermohaline circulation. _Clim. Dyn._, **12**, 799. * Rahmstorf (1997) -- 1997: Risk of sea-change in the atlantic. _Nature_, **388**, 528. * Rahmstorf (1999a) Rahmstorf, S.: 1999a, Rapid oscillation of the thermohaline ocean circulation. _Reconstructing ocean history: A window into the future_, Kluwer Academic, New York, 309-332. 
* Rahmstorf (1999b) Rahmstorf, S., 1999b: Shifting seas in the greenhouse? _Nature_, **399**, 523. * 2000: The thermohaline ocean circulation - a system with dangerous thresholds? _Climatic Change_, **46**, 247. * Rahmstorf (2002) -- 2002: Ocean circulation and climate during the past 120,000 years. _Nature_, **419**, 207. * Rahmstorf (2003) -- 2003: The current climate. _Nature_, **421**, 699. Rahmstorf, S. and A. Ganopolski, 1999: Long-term global warming scenarios computed with an efficient coupled climate model. _Climatic Change_, **43**, 353. * Rahmstorf and Willebrand (1995) Rahmstorf, S. and J. Willebrand, 1995: The role of temperature feedback in stabilizing the thermohaline circulation. _J. Phys. Ocean._, **25**, 787. * Rivin and Tziperman (1997) Rivin, I. and E. Tziperman, 1997: Linear versus self-sustained interdecadal thermohaline variability in a coupled box model. _J. Phys. Oceanogr._, **27**, 1216-1232. * Schmittner and Stocker (1999) Schmittner, A. and T. F. Stocker, 1999: The stability of the thermohaline circulation in global warming experiments. _J. Climate_, **12**, 1117-1127. * Scott et al. (1999) Scott, J. R., J. Marotzke, and P. H. Stone, 1999: Interhemispheric thermohaline circulation in a coupled box model. _J.Phys.Oceanogr._, **29**, 351-365. * Stocker (2000) Stocker, T. F., 2000: Past and future reorganisations in the climate system. _Quat. Sci. Rev._, **19**, 301-319. * Stocker (2001) Stocker, T. F.: 2001, The role of simple models in understanding climate change. _Continuum Mechanics and Applications in Geophysics and the Environment_, Springer, Heidelberg, 337-367. * a perspective. _The Oceans and Rapid Climate Change: Past, Present and Future_, AGU, Washington, 277-293. * Stocker et al. (2002)Stocker, T. F. and A. Schmittner, 1997: Influence of CO\\({}_{2}\\) emission rates on the stability of the thermohaline circulation. _Nature_, **388**, 862-864. * Stocker and Wright (1991) Stocker, T. F. and D. G. Wright, 1991: Rapid transitions of the ocean's deep circulation induced by changes in the surface water fluxes. _Nature_, **351**, 729-732. * Stommel (1961) Stommel, H., 1961: Thermohaline convection with two stable regimes of flow. _Tellus_, **13**, 224-227. * Stone and Krasovskiy (1999) Stone, P. H. and Y. P. Krasovskiy, 1999: Stability of the interhemispheric thermohaline circulation in a coupled box model. _Dyn. Atmos. Oceans_, **29**, 415-435. * Stouffer and Manabe (1999) Stouffer, R. J. and S. Manabe, 1999: Response of a coupled ocean-atmosphere model to increasing atmospheric carbon dioxide: Sensitivity to the rate of increase. _J. Climate_, **12**, 2224-2237. * Titz et al. (2002a) Titz, S., T. Kuhlbrodt, and U. Feudel, 2002a: Homoclinic bifurcation in an ocean circulation box model. _Int. J. Bif. Chaos_, **12**, 869-875. * Titz et al. (2002b) Titz, S., T. Kuhlbrodt, S. Rahmstorf, and U. Feudel, 2002b: On freshwater-dependent bifurcations in box models of the interhemispheric thermohaline circulation. _Tellus A_, **54**, 89. * Tziperman and Gildor (2002) Tziperman, E. and H. Gildor, 2002: The stabilization of the thermohaline circulation by the temperature/precipitation feedback. _J. Phys. Ocean._, **32**, 2707. * Tziperman et al. (2002)Tziperman, E., R. J. Toggweiler, Y. Feliks, and K. Bryan, 1994: Instability of the thermohaline circulation with respect to mixed boundary conditions: Is it really a problem for realistic models? _J. Phys. Ocean._, **24**, 217-232. 
* Vellinga (1996) Vellinga, M., 1996: Instability oltwo dimensional thermohaline circulation. _J. Phys. Ocean._, **26**, 305-319. * Wang et al. (1999a) Wang, X., P. H. Stone, and J. Marotzke, 1999a: Thermohaline circulation. Part I: Sensitivity to atmospheric moisture transport. _J. Climate_, **12**, 71-82. * Wang et al. (1999b) -- 1999b: Thermohaline circulation. Part II: Sensitivity with interactive atmospheric transport. _J. Climate_, **12**, 83-92. * Weaver and Hughes (1992) Weaver, A. J. and T. M. C. Hughes: 1992, Stability of the thermohaline circulation and its links to climate. _Trends in Physical Oceanography_, Council of Scientific Research Integration, Trivandrum, 15. * Wiebe and Weaver (1999) Wiebe, E. C. and A. J. Weaver, 1999: On the sensitivity of global warming experiments to the parametrisation of sub-grid scale ocean mixing. _Clim. Dyn._, **15**, 875-893. * Wright and Stocker (1992) Wright, D. G. and T. W. Stocker, 1992: Sensitivities of a zonally averaged global ocean circulation model. _J. Geophys. Res._, **97**, 12707-12730. * Wunsch and Ferrari (2004) Wunsch, C. and R. Ferrari, 2004: Vertical mixing, energy, and the general circulation of the oceans. _Ann. Rev. Flu. Mech._, **36**, DOI: 10.1146. * Wunsch et al. (2005)Zhang, J., R. W. Schmitt, and R. X. Huang, 1999: The relative influence of diapycnal mixing and hydrologic forcing on the stability of the thermoha-line circulation. _J. Phys. Oceanogr._, **29**, 1096-1108. Figure 1: Surface freshwater flux for three different configurations of the hydrological cycle. Figure 2: Circulation patterns obtained in the non-bistable region for symmetric forcing. Transport is expressed in Sv. a): \\(\\Phi_{S}=\\Phi_{N}=0.1\\Phi_{av}=0.5\\Phi_{inf}\\). b) : \\(\\Phi_{S}=\\Phi_{N}=2\\Phi_{av}=1.1\\Phi_{sup}\\). Figure 3: Circulation patterns obtained in the bistable region for symmetric forcing with \\(\\Phi_{S}=\\Phi_{N}=\\Phi_{av}\\). Transport is expressed in Sv. a) Northern sinking pattern. b) Southern sinking patterns. Figure 4: Quasi-static and transient changes in the value of the parameter \\(\\Phi_{N}\\) Figure 5: Stability graph of the system in the space \\((\\Phi_{S},\\Phi_{N})\\). The thick black line delimitates the bistability region \\(\\Gamma\\). Along the diagonal, solid lines represent the bistable states having antisymmetric circulation patters, while the dashed line represent represent the stable symmetric circulation patters. Figure 6: Critical forcings for the collapse of the northern sinking pattern in the space \\((\\Phi_{S},\\Phi_{N})\\). Various temporal patterns of the forcings are considered. Figure 7: Critical forcings for the collapse of the northern sinking pattern in the space \\((\\Phi_{S},\\Phi_{N})\\). Various temporal patterns of the forcings are considered. Figure 8: Dependence of maximum value of the THC on the value of the vertical diffusivity for \\(\\overline{T}=15.0^{\\circ}C\\), \\(\\Delta\\overline{T}=23.5^{\\circ}C\\), and \\(\\Phi_{N}=\\Phi_{S}=\\Phi_{av}\\)
We reconsider the problem of the stability of the thermohaline circulation as described by a two-dimensional Boussinesq model with mixed boundary conditions. We determine how the stability properties of the system depend on the intensity of the hydrological cycle. We define a two-dimensional parameters' space descriptive of the hydrology of the system and determine, by considering suitable quasi-static perturbations, a bounded region where multiple equilibria of the system are realized. We then focus on how the response of the system to finite-amplitude surface freshwater forcings depends on their rate of increase. We show that it is possible to define a robust separation between slow and fast regimes of forcing. Such separation is obtained by singling out an estimate of the critical growth rate for the anomalous forcing, which can be related to the characteristic advective time scale of the system.
Summarize the following text.
arxiv-format/0408044v3.md
# Eigenwavelets of the Wave Equation Gerald Kaiser Signals & Waves, Austin, TX www.wavelets.com [email protected] ###### ## 1 Extension of wave functions to complex spacetime The ideas to be presented here affirm that complex analysis resonates deeply in \"real\" physical and geometric settings, and so they are close in spirit to the work of Carlos Berenstein (see [1, 1, 2] for example), to whom this volume is dedicated. Acoustic and electromagnetic wavelets were first constructed in [10]. It was shown that solutions of homogeneous (_i.e.,_ sourceless) scalar and vector wave equations in Minkowski space \\(\\,\\mathbb{R}^{3,1}\\) extend naturally to complex space-time, and the wavelets were defined as the Riesz duals of evaluation maps acting on spaces of such holomorphic solutions. The sourceless wavelets then split naturally into retarded and advanced parts emitted and absorbed, respectively, by sources located on branch cuts needed to make these parts single-valued. Later work [10, 11] was aimed at the construction of realizable source distributions which, when synthesized, would act as antennas radiating and receiving the wavelets. Two difficulties with this approach have been (a)that the computed sources are quite singular, consisting of multiple surface layers that may be difficult to realize in practice, and (b) in the electromagnetic case the sources appeared to require a nonvanishing magnetic charge distribution, which cannot be realized as no magnetic monopoles have been observed in Nature. In this paper we resolve the first difficulty by replacing the spheroidal surface supporting the sources in [K3, K4] by a spheroidal shell. It is shown in [K4a] that the second difficulty can be overcome using Hertz potentials, which give a charge-current distribution due solely to bound electric charges confined to the shell. Although our constructions generalize to other dimensions, we shall concentrate here on the physical case of the Minkowski space \\(\\,\\mathbb{R}^{3,1}\\). Let \\[x=(\\boldsymbol{r},t),\\ y=(\\boldsymbol{a},b)\\in\\,\\mathbb{R}^{3,1} \\tag{1}\\] be real spacetime vectors and define the complex causal tube \\[\\mathcal{T}=\\{x-iy\\in\\mathbb{C}^{4}:\\ y\\ \\text{is timelike, {\\it i.e.,}}\\ |b|>| \\boldsymbol{a}|\\}. \\tag{2}\\] It was shown in [K94, K3] that solutions of the homogeneous wave equation \\[\\square f_{0}(x)\\equiv(\\partial_{t}^{2}-\\Delta)f_{0}(\\boldsymbol{r},t)=0 \\tag{3}\\] extend naturally to analytic functions \\(\\tilde{f}_{0}(x-iy)\\) in \\(\\mathcal{T}\\) in the sense that \\[\\lim_{y\\to+0}\\left\\{\\tilde{f}_{0}(x-iy)-\\tilde{f}_{0}(x+iy)\\right\\}=f_{0}(x), \\tag{4}\\] where \\(y\\to+0\\) means that \\(y\\) approaches the origin within the future cone, _i.e.,_ with \\(b>|\\boldsymbol{a}|\\). This kind of extension to complex domains is familiar in hyperfunction theory; see [K88, KS99] for example. We now show that even when the wave function has a source, _i.e.,_ \\[\\square f(x)=4\\pi g(x), \\tag{5}\\] it extends analytically to \\(\\mathcal{T}\\) outside a spacetime region determined by the source. It will suffice to do this for the retarded propagator \\[G(x)=\\frac{\\delta(t-r)}{r}, \\tag{6}\\] which is the unique causal fundamental solution: \\[\\square G(x)=4\\pi\\delta(t)\\delta(\\boldsymbol{r})=4\\pi\\delta(x),\\quad G( \\boldsymbol{r},t)=0\\ \\forall t<0. 
\\tag{7}\\] If the source \\(g\\) is supported in a compact spacetime region \\(W\\), the unique causal solution of (5) is given by \\[f(x)=\\int_{W}dx^{\\prime}\\ G(x-x^{\\prime})g(x^{\\prime}). \\tag{8}\\]Assume for the moment that \\(G(x)\\) has been extended to \\(\\tilde{G}(x-iy)\\). Then we define the source of \\(\\tilde{G}\\) as the distribution \\(\\bar{\\delta}\\) in real spacetime given by \\[4\\pi\\tilde{\\delta}(x-iy)\\equiv\\Box_{x}\\tilde{G}(x-iy), \\tag{9}\\] where \\(\\Box_{x}\\) means that the wave operator acts only on \\(x\\), in a distributional sense, so that the imaginary spacetime vector \\(y\\) is regarded as an auxiliary parameter. The extended solution is now defined as \\[\\tilde{f}(x-iy)=\\int_{W}dx^{\\prime}\\ \\tilde{G}(x-x^{\\prime}-iy)g(x^{\\prime}) \\tag{10}\\] and it satisfies the wave equation \\[\\Box_{x}\\tilde{f}(x-iy)=4\\pi\\tilde{g}(x-iy)\\] with the extended source \\[\\tilde{g}(x-iy)=\\int_{W}dx^{\\prime}\\ \\tilde{\\delta}(x-x^{\\prime}-iy)g(x^{ \\prime}). \\tag{11}\\] Formally, the extended delta function \\(\\tilde{\\delta}(x-iy)\\) is a 'point source' at the imaginary spacetime point \\(iy\\) as seen by a real observer at \\(x\\). Actually, it will be seen to be a distribution in \\(x\\) with compact spatial (but not temporal) support localized around the spatial origin (\\(\\vec{r}=\\vec{0}\\)) and depending on the choice of a branch cut needed to make \\(\\tilde{G}\\) single-valued. This branch cut is precisely the region where \\(\\tilde{G}\\) fails to be analytic, and the integral (10) determines a region \\(\\tilde{W}\\) containing \\(W\\) where \\(\\tilde{f}\\) fails to be analytic. A general solution \\(f_{1}(x)\\) of (5) is obtained by adding a sourceless wave \\(f_{0}(x)\\) to (8). Since \\(\\tilde{f}_{0}\\) is analytic in \\(\\mathcal{T}\\), \\(\\tilde{f}_{1}\\) is analytic in \\(\\mathcal{T}\\) outside of \\(\\tilde{W}\\). It therefore suffices to concentrate on the propagators as claimed. In the rest of the paper we construct extended propagators, study their properties, and compute their sources. ## 2 Extended propagators In accordance with (1), we use the following notation for complex space and time variables: \\[\\vec{\\vec{r}} = \\vec{r}-i\\vec{a}\\in\\mathbb{C}^{3},\\qquad\\tilde{t}=t-ib\\in\\ \\mathbb{C}\\] \\[\\tilde{x} = x-iy=(\\vec{\\vec{r}},\\tilde{t})\\in\\mathcal{T}\\ \\Leftrightarrow\\ |b|>|\\vec{a}|.\\] As above, we interpret \\(i\\vec{a}\\) formally as an imaginary spatial source point, so that \\(\\vec{\\vec{r}}\\) is the vector from the imaginary source point \\(i\\vec{a}\\) to a real observer at \\(\\vec{r}\\). To extend the propagator (6), begin by replacing the one-dimensional delta function with the Cauchy kernel,\\[\\delta(t)\\rightarrow\\tilde{\\delta}(\\tilde{t})=\\frac{1}{2\\pi i\\tilde{t}}\\,,\\quad \\tilde{t}=t-ib, \\tag{12}\\] which indeed satisfies a condition of type (4): \\[\\lim_{b\\rightarrow+0}\\left\\{\\tilde{\\delta}(t-ib)-\\tilde{\\delta}(t+ib)\\right\\}= \\delta(t). \\tag{13}\\] To complete the extension of \\(G(\\vec{r},t)\\), we must also extend the Euclidean distance \\(r(\\vec{r})=|\\vec{r}|\\). Define the complex distance from the source to the observer as \\[\\tilde{r}(\\vec{\\vec{r}})=\\sqrt{\\vec{\\vec{r}}\\cdot\\vec{\\vec{r}}}=\\sqrt{r^{2}-a^ {2}-2i\\vec{r}\\cdot\\vec{a}},\\ \\ \\mbox{where}\\ \\ r=|\\vec{r}|,\\ a=|\\vec{a}|. \\tag{14}\\] \\(\\tilde{r}(\\vec{\\vec{r}})\\) is an analytic continuation to \\(\\mathbb{C}^{3}\\) of \\(r(\\vec{r})\\). 
Being a complex square root, it has branch points wherever \\(\\vec{\\vec{r}}\\cdot\\vec{\\vec{r}}=0\\). For fixed \\(\\vec{a}\ eq\\vec{0}\\), these form a circle of radius \\(a\\) in the plane orthogonal to \\(\\vec{a}\\),1 Footnote 1: In \\(\\mathbb{R}^{n}\\), \\(\\mathcal{C}\\) would be a sphere of codimension 2 orthogonal to \\(\\vec{a}\\). \\[\\mathcal{C}\\equiv\\{\\vec{r}\\in\\mathbb{R}^{3}:\\tilde{r}=0\\}=\\{\\vec{r}:\\ r=a,\\ \\vec{r}\\cdot\\vec{a}=0\\}. \\tag{15}\\] To be consistent with the notation \\(\\vec{\\vec{r}}=\\vec{r}-i\\vec{a}\\), we write \\[\\tilde{r}=p-iq. \\tag{16}\\] Comparison with (14) gives the following relations between \\((p,q)\\) and the spherical and cylindrical coordinates with axis along \\(\\vec{a}\\): \\[p^{2}-q^{2}=r^{2}-a^{2},\\qquad pq=\\vec{a}\\cdot\\vec{r}=ar\\cos\\theta=az \\tag{17}\\] and \\[a^{2}\\rho^{2} = a^{2}(r^{2}-z^{2})=a^{2}(a^{2}+p^{2}-q^{2})-p^{2}q^{2} \\tag{18}\\] \\[= (a^{2}+p^{2})(a^{2}-q^{2}).\\] It follows that the real and imaginary parts of \\(\\tilde{r}\\) are bounded by \\(r\\) and \\(a\\), respectively: \\[p^{2}\\leq r^{2},\\ \\ \\mbox{\\it i.e.,}\\ \\ \\ |\\mathop{\\rm Re} \ olimits\\tilde{r}|\\leq|\\mathop{\\rm Re}\ olimits\\tilde{\\vec{r}}|\\] \\[q^{2}\\leq a^{2},\\ \\ \\mbox{\\it i.e.,}\\ \\ \\ |\\mathop{\\rm Im} \ olimits\\tilde{r}|\\leq|\\mathop{\\rm Im}\ olimits\\tilde{\\vec{r}}|, \\tag{19}\\] with equalities attained only when \\(\\vec{r}\\) is parallel or antiparallel to \\(\\vec{a}\\). Since \\(\\vec{a}\\) will be a fixed nonzero vector throughout, we will usually regard \\(\\tilde{r},p,q\\) as functions of \\(\\vec{r}\\) only, suppressing the dependence on \\(\\vec{a}\\). Note that \\(\\mathbb{R}^{3}-\\mathcal{C}\\) is multiply connected since a closed loop that threads \\(\\mathcal{C}\\) cannot be shrunk continuously to a point without intersecting \\(\\mathcal{C}\\). In particular, if we continue \\(\\tilde{r}\\) analytically around a simple closed loop, we obtain the value \\(-\\tilde{r}\\) instead of \\(\\tilde{r}\\) upon returning to the starting point. Thus \\(\\tilde{r}\\) is a double-valuedfunction on \\(\\mathbb{R}^{3}\\). To make it single-valued, we choose a branch cut that must be crossed to close the loop. Instead of returning to the starting point as \\(-\\tilde{r}\\), the sign reversal now takes place upon crossing the cut. To give an extension of the positive distance, the branch must be chosen so that \\[\\vec{a}\\to\\vec{0}\\ \\Rightarrow\\ \\tilde{r}\\to+r, \\tag{20}\\] and the simplest such choice is obtained by requiring \\[\\mathrm{Re}\\ \\tilde{r}=p\\geq 0. \\tag{21}\\] The resulting branch cut consists of the disk spanning the circle \\(\\mathcal{C}\\), \\[\\mathcal{D}\\equiv\\{\\vec{r}\\in\\mathbb{R}^{3}:p=0\\}=\\{\\vec{r}:\\ r\\leq a,\\ \\vec{r}\\cdot\\vec{a}=0\\},\\quad\\partial\\mathcal{D}=\\mathcal{C}. \\tag{22}\\] \\(\\mathcal{D}\\) will be called the standard branch cut and \\(\\tilde{r}\\) the standard complex distance. General branch cuts, obtained by deforming \\(\\mathcal{D}\\) while leaving its boundary intact, will be considered in the next section. If the observer is far from \\(\\mathcal{C}\\), it follows from (14) and (20) that \\[r\\gg a\\ \\Rightarrow\\ p\\approx r\\ \\ \\mathrm{and}\\ \\ q\\approx a\\cos\\theta, \\tag{23}\\] Thus, \\((p,q/a)\\) are deformations of the spherical coordinates \\((r,\\cos\\theta)\\) near the source. 
From (17) and (18) it follows that level surfaces of \\(p^{2}\\) (as a function of \\(\\vec{r}\\), keeping \\(\\vec{a}\ eq\\vec{0}\\) fixed) are spheroids \\(\\mathcal{S}_{p}\\) and those of \\(q^{2}\\) are the orthogonal hyperboloids \\(\\mathcal{H}_{q}\\), given by \\[\\mathcal{S}_{p}: \\frac{\\rho^{2}}{p^{2}+a^{2}}+\\frac{z^{2}}{p^{2}}=1,\\quad p\ eq 0 \\tag{24}\\] \\[\\mathcal{H}_{q}: \\frac{\\rho^{2}}{q^{2}-a^{2}}-\\frac{z^{2}}{q^{2}}=1,\\quad 0<q^{2}<a ^{2}. \\tag{25}\\] All these quadrics are confocal with \\(\\mathcal{C}\\) as the common focal set. As \\(p\\to 0\\), \\(\\mathcal{S}_{p}\\) collapses to a double cover of the disk \\(\\mathcal{D}\\). The variables \\((p,q)\\), together with the azimuthal angle \\(\\phi\\) about the \\(\\vec{a}\\)-axis, determine an oblate spheroidal coordinate system, as depicted in Figure 1. We now define the extended propagator as \\[\\tilde{G}(\\vec{\\vec{r}},\\tilde{t})=\\frac{\\tilde{\\delta}(\\tilde{t}-\\tilde{r})} {\\tilde{r}}=\\frac{1}{2\\pi i\\tilde{r}(\\tilde{t}-\\tilde{r})}. \\tag{26}\\] This is our basic wavelet,2 from which the entire wavelet family is obtained by spacetime translations: Footnote 2: In applications, it is better to use time derivatives of \\(\\tilde{G}\\), which have vanishing moments and better temporal decay and propagation properties [K4]. \\[\\tilde{G}_{z}(x)=\\tilde{G}(x-z),\\quad z=x^{\\prime}+iy=(\\vec{r}^{\\prime}+i \\vec{a},t^{\\prime}+ib)\\in\\mathcal{T} \\tag{27}\\]The family \\(\\tilde{G}_{z}\\) may be called eigenwavelets of the wave equation in the sense that they are proper to that equation, though of course they are not eigenfunctions. In fact, \\(\\tilde{G}_{z}(x)\\) is seen [K3] to be a pulsed beam originating from \\(\\vec{r}=\\vec{r}^{\\prime}\\) at \\(t=t^{\\prime}\\) and propagating along the direction of \\(\\vec{a}/b\\), _i.e.,_ along \\(\\vec{a}\\) if \\(y\\) is in the future cone and along \\(-\\vec{a}\\) if \\(y\\) is in the past cone. The pulse has a duration \\(|b|-a\\) along the beam axis. By letting \\(y\\) approach the light cone (\\(a\\rightarrow|b|\\)), the beam can be focused as tightly as desired around its axis, approximating a single ray along \\(y\\). Equation (10) states that the extended causal solution \\(\\tilde{f}(x-iy)\\) is a superposition of eigenwavelets, all with the same \\(y\\). This gives a directional scale analysis of the original solution \\(f(x)\\) which may be called its eigenwavelet transform. The eigenwavelets have the spheroids \\({\\cal S}_{p}\\) as wave fronts and propagate out along the orthogonal hyperboloids \\({\\cal H}_{q}\\) with strength decaying monotonically away from the front beam axis. Hence they have no sidelobes, which makes them potentially useful for applications to communication, radar and related areas. These properties are illustrated in Figures 2 and 3. We may visualize the effects of the extension \\(G(\\vec{r},t)\\rightarrow\\tilde{G}(\\vec{\\vec{r}},\\tilde{t})\\) as follows. The extension \\(t\\rightarrow\\tilde{t}\\) replaces the spherical impulse \\(\\delta(t-r)\\) in (6) by a spherical pulse \\(\\tilde{\\delta}(\\tilde{t}-r)\\) of duration \\(|b|\\). The extension \\(r\\rightarrow\\tilde{r}\\) then deforms this spherical pulse to a pulsed beam in the direction of \\(\\vec{a}/b\\). By (23), \\[r\\gg a\\ \\Rightarrow\\ \\tilde{r}\\approx r-ia\\cos\\theta, \\tag{28}\\] hence the larger we choose \\(a\\), the stronger the dependence of \\(\\tilde{r}\\) on \\(\\cos\\theta\\) in the far zone and the more focused the beam. 
Let us emphasize that \\(\\tilde{G}\\) depends on the complex spatial vector \\(\\vec{\\vec{r}}\\in\\mathbb{C}^{3}\\) only through the complex distance \\(\\tilde{r}\\) by writing \\[\\Psi(\\tilde{r},\\tilde{t})=\\tilde{G}(\\vec{\\vec{r}},\\tilde{t})=\\frac{1}{2\\pi i \\tilde{r}(\\tilde{t}-\\tilde{r})}. \\tag{29}\\] Figure 1: The level surfaces of \\(p,q\\) and \\(\\phi\\) form an oblate spheroidal coordinate system. Due to the factor \\(\\tilde{r}\\) in the denominator, \\(\\Psi\\) is discontinuous across \\({\\cal D}\\) and singular on \\({\\cal C}\\). \\({\\cal D}\\) generalizes the point singularity of \\(G\\) at \\(\\mathbf{r}=\\mathbf{0}\\) and will be the spatial support of the source (9). To avoid further singularities, the factor \\[\\tilde{t}-\\tilde{r}=(t-p)-i(b-q)\\] must not vanish for any \\(r\\). By (19), \\[b-q\ eq 0\\ \\forall\\mathbf{r}\\ \\Leftrightarrow\\ a<|b|, \\tag{30}\\] so a necessary and sufficient condition for \\(\\Psi(\\tilde{r},\\tilde{t})\\) to be analytic whenever \\(\\mathbf{r}\ otin{\\cal D}\\) is that \\((\\mathbf{\\tilde{r}},\\tilde{t})\\in{\\cal T}\\). Recalling that the tightness of the beam is controlled by the size of \\(a\\), (30) means that the beam cannot become tighter than a single ray and, in fact, fails to be analytic along the ray in the limit \\(a=|b|\\). The volume element in \\(\\mathbb{R}^{3}\\) in oblate spheroidal coordinates is \\[dV=\\frac{1}{a}(p^{2}+q^{2})dp\\,dq\\,d\\phi=\\frac{1}{a}|\\tilde{r}|^{2}dp\\,dq\\,d\\phi, \\tag{31}\\] hence \\(\\Psi\\) is locally integrable and square integrable. A differentiation gives \\[4\\pi\\tilde{\\delta}(x-iy)\\equiv\\Box_{x}\\tilde{G}(x-iy)=0\\qquad(x-iy\\in{\\cal T},\\ \\mathbf{r}\ otin{\\cal D}). \\tag{32}\\] Therefore \\(\\tilde{\\delta}(x-iy)\\), with \\(y\\) a fixed timelike vector, is a distribution in \\(x=(\\mathbf{r},t)\\) with spatial support in \\({\\cal D}\\). (The temporal support is noncompact; in fact, \\(\\tilde{\\delta}(x-iy)\\) decays as \\(1/\\tilde{t}\\) due to the Cauchy kernel.) Figure 2: Time-lapse plots of \\(|\\tilde{G}(x-iy)|\\) in the far zone, showing the evolution of a single pulse with propagation vector \\(y=(0,0,a,b)\\). Clockwise from upper left: \\(b/a=1.5,\\ 1.1,\\ 1.01,\\ 1.0001\\). As \\(b/a\\to 1\\), \\(y\\) approaches the light cone and the pulsed beam becomes more and more focused around the ray \\(y\\). We have taken the slice \\(x_{2}=0\\), so that the disk \\({\\cal D}\\) becomes the interval \\([-a,a]\\) on the \\(x_{1}\\)-axis and the pulse propgates in the \\(x_{3}\\) direction of the \\(x_{1}\\)-\\(x_{3}\\) plane. The source \\(\\tilde{\\delta}(x-iy)\\) was computed explicitly in [K3] and turns out to be quite singular. It consists of a single layer and a double layer on \\({\\cal D}\\), both of which diverge on the boundary \\({\\cal C}\\) where \\(\\Psi\\) is singular. We will compute regularized versions of \\(\\Psi\\) and \\(\\tilde{\\delta}\\) by using the freedom to deform the branch cut to eliminate the singularity on \\({\\cal C}\\). ## 3 Regularization by branch cut deformation A general branch cut \\({\\cal B}\\) is a membrane obtained by a continuous deformation of the disk \\({\\cal D}\\) leaving its boundary intact, \\[\\partial{\\cal B}={\\cal C}. \\tag{33}\\] \\({\\cal B}\\) inherits an orientation from \\({\\cal D}\\), which in turn is oriented by \\(a\\). Let \\(V_{\\cal B}\\) be the compact volume swept out in the deformation from \\({\\cal D}\\) to \\({\\cal B}\\). 
Let us define the complex distance \\(\\tilde{r}_{\\cal B}\\) with branch cut \\({\\cal B}\\) in terms of \\(\\tilde{r}=\\tilde{r}_{\\cal D}\\) by Figure 3: Near-zone graphs of \\(|\\tilde{G}(x-iy)|^{2}\\) with \\(y=(0,0,1,1.01)\\) immediately after launch, evolving in the \\(x_{1}\\)-\\(x_{3}\\) plane with \\(x_{2}=0\\) as in Fig. 1. Clockwise from upper left: \\(t=0.1,1,2,3\\). The ellipsoidal wave fronts and hyperbolic flow lines are visible. The top of the peak is cut off to show the behavior near the base. The two spikes represent the branch circle, whose slice with \\(x_{2}=0\\) consists of the points \\((\\pm 1,0,0)\\). \\[\\tilde{r}_{\\mbox{\\tiny B}}=\\begin{cases}\\tilde{r}&\\mbox{if $\\mathbf{r}\ otin V_{ \\mbox{\\scriptsize\\mbox{\\scriptsize\\mbox{\\scriptsize\\mbox{\\scriptsize\\mbox{\\scriptsize \\mbox{\\scriptsize\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{}}}}}}} {\\mbox{\\mbox{{\\mbox{{\\mbox{\\mbox{{\\mbox{}}}}}}}}}}}}}}}\\\\ -\\tilde{r}&\\mbox{if $\\mathbf{r}\\in V_{\\mbox{\\scriptsize\\mbox{\\mbox{ \\scriptsize\\mbox{\\mbox{\\scriptsize\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{ }}}}}}}}}}}}}}}}\\,.\\end{cases} \\tag{34}\\] I claim that \\(\\tilde{r}_{\\mbox{\\tiny B}}\\) is continuous except for a sign reversal across \\(\\mbox{\\scriptsize\\mbox{\\mbox{\\scriptsize\\mbox{\\mbox{\\scriptsize{\\mbox{\\mbox{ \\scriptsize\\mbox{\\mbox{\\mbox{\\mbox{\\scriptsize\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{ \\mbox{\\mbox{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{{ }}}}}}}}}}}}}}}}}}}}\\), generalizing the sign reversal of \\(\\tilde{r}\\) across \\(\\mbox{\\scriptsize\\mbox{\\mbox{\\mbox{\\mbox{\\scriptsize{\\mbox{\\mbox{\\mbox{ \\mbox{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{{ }}}}}}}}}}}}}}}}}}}}}\\). This can be seen most simply if \\(\\mbox{\\scriptsize\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{ \\mbox{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{ }}}}}}}}}}}}}}}}}}}}}}\\) does not intersect the interior of \\(\\mbox{\\scriptsize\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{{ \\mbox{\\mbox{\\mbox{{\\mbox{{\\mbox{{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{{\\mbox{{ }}}}}}}}}}}}}}}}}}}}}}}\\), so that they have only the boundary in common. Then \\(V_{\\mbox{\\scriptsize\\mbox{\\mbox{\\scriptsize{\\mbox{\\scriptsize{\\mbox{\\mbox{ \\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{{ \\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{{\\mbox{{ }}}}}}}}}}}}}}}}}}}}}}}}}\\) is either all on the 'positive' or all on the 'negative' side of \\(\\mbox{\\scriptsize\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{ \\mbox{{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{ \\mbox{{\\mbox{{ }}}}}}}}}}}}}}}}}}}}}}}\\). 
If \\(V_{\\mbox{\\scriptsize\\mbox{\\mbox{\\scriptsize{\\mbox{\\mbox{\\scriptsize{\\mbox{ \\mbox{{\\mbox{{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{\\mbox{{ \\mbox{{\\mbox{{\\mbox{{ }}}}}}}}}}}}}}}}}}}}}}\\)\\) is 'positive,' then its boundary is \\[\\partial V_{\\mbox{\\scriptsize\\mbox{\\scriptsize{\\mbox{\\scriptsize{\\mbox{\\scriptsize{ \\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{ \\mbox{\\mbox{\\mbox{{\\mbox{{\\mbox}}}}}}}}}}}}}}}}}}}}}}}=\\mbox{ \\scriptsize\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{\\mbox{{\\mbox{\\mbox{ \\mbox{\\mbox{{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{\\mbox{{\\mbox{\\\\[\\Psi_{A}(\\tilde{r},\\tilde{t})=\\frac{1}{2}\\left\\{\\Psi(\\tilde{r}_{+},\\tilde{t})+ \\Psi(\\tilde{r}_{-},\\tilde{t})\\right\\}=\\tilde{G}_{A}(x-iy). \\tag{39}\\] Let \\(V_{\\alpha}^{\\pm}\\) be the interiors of the upper and lower hemispheroids. By (34), \\[\\mathbf{r}\\in V_{\\alpha}^{+}\\ \\Rightarrow\\ \\tilde{r}_{+}=- \\tilde{r},\\ \\ \\tilde{r}_{-}=\\tilde{r} \\tag{40}\\] \\[\\mathbf{r}\\in V_{\\alpha}^{-}\\ \\Rightarrow\\ \\tilde{r}_{+}= \\tilde{r},\\ \\ \\tilde{r}_{-}=-\\tilde{r}. \\tag{41}\\] Hence, in both \\(V_{\\alpha}^{\\pm}\\) we have \\[\\Psi_{A}(\\tilde{r},\\tilde{t})=\\frac{1}{4\\pi i\\tilde{r}(\\tilde{t}-\\tilde{r})}- \\frac{1}{4\\pi i\\tilde{r}(\\tilde{t}+\\tilde{r})}=\\frac{1}{2\\pi i(\\tilde{t}^{2}- \\tilde{r}^{2})}\\,, \\tag{42}\\] which is independent of the choice of branch. This shows that the discontinuities across the aprons cancel in the average \\(\\Psi_{A}\\). Furthermore, by (30) we have \\[|b|>a\\ \\Rightarrow\\ \\tilde{t}^{2}-\\tilde{r}^{2}=(\\tilde{t}-\\tilde{r})( \\tilde{t}+\\tilde{r})\ eq 0,\\] showing that the singularities on \\({\\cal C}\\) cancel as well. That is, \\(\\Psi_{A}\\) is analytic at all interior points of the spheroid\\({\\cal S}_{\\alpha}\\). In the exterior of \\({\\cal S}_{\\alpha}\\) we have \\(\\tilde{r}_{\\pm}=\\tilde{r}\\) and hence \\(\\Psi_{A}=\\Psi\\). Since \\({\\cal D}\\) is contained in \\({\\cal S}_{\\alpha}\\) and \\(\\Psi\\) is analytic outside of \\({\\cal D}\\), we conclude that \\(\\Psi_{A}(\\tilde{r},\\tilde{t})\\) fails to be analytic only when \\(\\mathbf{r}\\in{\\cal S}_{\\alpha}\\). Denoting the interior field by \\(\\Psi_{1}\\) and the exterior field by \\(\\Psi_{2}\\), we have \\[\\Psi_{1}(\\tilde{r},\\tilde{t})=\\frac{1}{2}\\left\\{\\Psi(\\tilde{r}, \\tilde{t})+\\Psi(-\\tilde{r},\\tilde{t})\\right\\} \\tag{43}\\] \\[\\Psi_{2}(\\tilde{r},\\tilde{t})=\\Psi(\\tilde{r},\\tilde{t}).\\] Thus \\(\\Psi_{A}\\) is analytic except for a bounded jump discontinuity across \\({\\cal S}_{\\alpha}\\) given by \\[\\Psi_{J}(\\tilde{r},\\tilde{t})\\equiv \\Psi_{2}(\\tilde{r},\\tilde{t})-\\Psi_{1}(\\tilde{r},\\tilde{t})= \\frac{1}{2}\\left\\{\\Psi(\\tilde{r},\\tilde{t})-\\Psi(-\\tilde{r},\\tilde{t})\\right\\} =\\tilde{G}_{J}(x-iy). \\tag{44}\\]It follows by the same argument as in (32) that the source distribution \\[4\\pi\\bar{\\delta}_{A}(x-iy)\\equiv\\square_{x}\\Psi_{A}(\\tilde{r},\\tilde{t}) \\tag{45}\\] is supported spatially on \\({\\cal S}_{\\alpha}\\). Because \\(\\bar{\\delta}_{A}\\) is obtained by twice differentiating a discontinuous function, it consists of a combination of single and double layers on \\({\\cal S}_{\\alpha}\\). But the jump discontinuity in \\(\\Psi_{A}\\) is bounded (unlike that in \\(\\Psi\\), which diverges on \\({\\cal C}\\)), and so are these layers; see [K4]. 
The above arguments remain valid if instead of \\({\\cal B}_{\\alpha}^{\\pm}\\) we use any two branch cuts whose common interior \\(V\\) contains the branch circle \\({\\cal C}\\). In that case, the averaged propagator is analytic in \\({\\cal T}\\) except for a finite discontinuity when \\(r\\) crosses the boundary \\(\\partial V\\), and its source distribution is supported spatially on \\(\\partial V\\). However, the above choice has the advantage that \\(\\partial V={\\cal S}_{\\alpha}\\) are wave fronts, hence all parts of the surface radiate simultaneously and coherently. ## 4 Extended Huygens sources Let \\(H\\) be the Heaviside step function. Since \\(0\\leq p<\\alpha\\) in the interior of \\({\\cal S}_{\\alpha}\\) and \\(p>\\alpha\\) in the exterior, we have \\[\\Psi_{A}(\\tilde{r},\\tilde{t})=H(\\alpha-p)\\Psi_{1}(\\tilde{r},\\tilde{t})+H(p- \\alpha)\\Psi_{2}(\\tilde{r},\\tilde{t}) \\tag{46}\\] where the interior and exterior fields are given by (43). This can be used to compute the source distribution \\(\\bar{\\delta}_{A}\\) defined in (45), and the result is sum of terms with factors \\(\\delta(p-\\alpha)\\) and \\(\\delta^{\\prime}(p-\\alpha)\\). The former are interpreted as single layers on \\({\\cal S}_{\\alpha}\\), and the latter as double layers. An interesting practical question is whether the wavelets \\(\\Psi_{A}\\), interpreted as acoustic pulsed beams, can be realized by manufacturing their sources. A similar question can be posed for their electromagnetic counterparts, which solve Maxwell's equations; see [K4]. It is doubtful whether an acoustic source can be produced including double layers, and the problem becomes even more difficult in the electromagnetic case because the current density involves yet another derivative, hence a still higher layer [K4a]. The multilayered structure is unavoidable as long as we insist on surface sources. We now propose a method for constructing solutions of the wave equation where the transition occurs in a shell instead of a surface. It will be simpler to present this method initially in a somewhat more general context. Given a function \\(p(\\mathbf{r},t)\\) on \\(\\,\\mathbb{R}^{n,1}\\) and two regular values \\(p_{1}<p_{2}\\) in its range, define two time-dependent surfaces and volumes in \\(\\mathbb{R}^{n}\\) by \\[S_{1}(t) = \\{\\mathbf{r}:p(\\mathbf{r},t)=p_{1}\\},\\ S_{2}(t )= \\{\\mathbf{r}:p(\\mathbf{r},t)=p_{2}\\}\\] \\[V_{1}(t) = \\{\\mathbf{r}:p(\\mathbf{r},t)<p_{1}\\},\\ V_{2}(t )= \\{\\mathbf{r}:p(\\mathbf{r},t)>p_{2}\\}.\\] Let \\(f_{1},f_{2}\\) be solutions of the wave equation in \\(\\,\\mathbb{R}^{n,1}\\) with sources \\(g_{1},g_{2}\\):\\[\\Box f_{k}(\\vec{r},t)=g_{k},\\quad k=1,2. \\tag{47}\\] We want to construct an interpolated solution \\(f(\\vec{r},t)\\) such that \\[f(\\vec{r},t)=f_{k}(\\vec{r},t)\\ \\forall\\vec{r}\\in V_{k}(t) \\tag{48}\\] and compute its source. This can be done by choosing functions \\(h_{k}(\\vec{r},t)\\) with \\[h_{1}(\\vec{r},t)=\\begin{cases}1,&\\vec{r}\\in V_{1}(t)\\\\ 0,&\\vec{r}\\in V_{2}(t)\\end{cases},\\qquad h_{2}(\\vec{r},t)=1-h_{1}(\\vec{r},t) \\tag{49}\\] and letting \\[f=h_{1}f_{1}+h_{2}f_{2}\\equiv h_{k}f_{k} \\tag{50}\\] where the (Einstein) summation convention is used. 
The source of \\(f\\) is found to consists of two parts, \\[g=\\Box f=g_{{}_{I}}+g_{{}_{T}}, \\tag{51}\\] where \\[g_{{}_{I}}=h_{k}g_{k} \\tag{52}\\] is an interpolated source and \\[g_{{}_{T}}=2\\dot{h}_{k}\\dot{f}_{k}-2\ abla h_{k}\\cdot\ abla f_{k}+(\\Box h_{k} )f_{k}\\qquad(\\dot{f}\\equiv\\partial_{t}f) \\tag{53}\\] is a transitional source which, by (49), is supported on the transition shell \\[V_{T}(t)=\\{\\vec{r}:p_{1}\\leq p(\\vec{r},t)\\leq p_{2}\\} \\tag{54}\\] and depends only on the jump field \\(f_{{}_{J}}=f_{2}-f_{1}\\): \\[g_{{}_{T}}=2\\dot{h}_{2}\\dot{f}_{{}_{J}}-2\ abla h_{2}\\cdot\ abla f_{{}_{J}}+( \\Box h_{2})f_{{}_{J}}. \\tag{55}\\] Now suppose that \\(V_{1}(t)\\) and \\(V_{T}(t)\\) are compact and we are given only one source \\(g_{2}\\), supported in \\(V_{1}(t)\\). Letting \\(f_{2}\\) be its causal field, our objective is to find an equivalent source \\(g\\) supported in \\(V_{T}(t)\\) whose causal field \\(f\\) is identical with \\(f_{2}\\) in \\(V_{2}(t)\\). It suffices to choose any solution \\(f_{1}\\) whose source \\(g_{1}\\) is supported in \\(V_{2}(t)\\), since the interpolated source (52) then vanishes and hence \\(g=g_{{}_{T}}\\). \\(f_{1}\\) is a sourceless internal field in \\(V_{1}(t)\\), and the source \\(g_{{}_{T}}\\) so constructed on \\(V_{T}(t)\\) generalizes the idea of a Huygens source on a surface surrounding the support of \\(g_{2}\\). We may recover the latter by assuming that \\(p\\) is time-independent (hence so are \\(S_{k}\\) and \\(V_{k}\\)) and choosing \\(h_{k}(\\vec{r})\\) so that \\[\\lim_{p_{1}\\to p_{2}}\ abla h_{2}(\\vec{r})=\\delta(p(\\vec{r})-p_{2})\\vec{n}( \\vec{r})\\]where \\(\\boldsymbol{n}(\\boldsymbol{r})\\) is a field of orthogonal vectors on \\(S_{2}\\) pointing into \\(V_{2}\\). The corresponding scheme in the electromagnetic case reduces to the usual boundary conditions on an interface between two media [K4a]. Returning to \\(n=3\\) with \\(p=\\,\\mathrm{Re}\\ \\tilde{r}\\), let \\(f_{k}=\\Psi_{k}\\) as in (43) and \\(h_{k}\\) be time-independent (_e.g.,_ functions of \\(p\\) only). A smoothed version of \\(\\Psi_{A}\\) (39) is \\[\\Psi_{A}^{\\,\\mathrm{sm}}=h_{1}\\Psi_{1}+h_{2}\\Psi_{2}. \\tag{56}\\] Since \\(\\Psi_{k}\\) are sourceless in \\(V_{T}\\), (51) gives the smoothed version of (45) as \\[4\\pi\\bar{\\delta}_{A}^{\\,\\mathrm{sm}}=\\square_{x}\\Psi_{A}^{\\,\\mathrm{sm}}=-2 \ abla h_{2}\\cdot\ abla\\Psi_{J}-(\\Delta h_{2})\\Psi_{J} \\tag{57}\\] where \\[\\Psi_{J}=\\Psi_{2}-\\Psi_{1}=\\frac{1}{2}\\left\\{\\Psi(\\tilde{r},\\tilde{t})-\\Psi(- \\tilde{r},\\tilde{t})\\right\\}=\\frac{\\tilde{t}}{2\\pi i\\tilde{r}(\\tilde{t}^{2}- \\tilde{r}^{2})}\\] is the jump field from \\(V_{1}\\) to \\(V_{2}\\) as in (44), but no longer restricted to a single spheroid \\(\\mathcal{S}_{\\alpha}\\). If we now let \\(p_{1}\\to p_{2}=\\alpha\\) and \\[h_{1}=H(\\alpha-p),\\quad h_{2}=H(p-\\alpha),\\] then the transition becomes abrupt on \\(\\mathcal{S}_{\\alpha}\\) and \\(\\Psi_{A}^{\\,\\mathrm{sm}}\\) becomes \\(\\Psi_{A}\\) (39). Since \\[\ abla h_{2} =\\delta(p-\\alpha)\ abla p\\] \\[\\Delta h_{2} =\\delta^{\\prime}(p-\\alpha)|\ abla p|^{2}+\\delta(p-\\alpha)\\Delta p,\\] equation (57) becomes \\[4\\pi\\bar{\\delta}_{A}=-2\\delta(p-\\alpha)\ abla p\\cdot\ abla\\Psi_{J}-\\delta(p- \\alpha)\\Delta p\\Psi_{J}-\\delta^{\\prime}(p-\\alpha)|\ abla p|^{2}\\Psi_{J}\\] displaying the aforementioned single and double layer structure on \\(\\mathcal{S}_{\\alpha}\\). 
To get an explicit expression, use [K4, Appendix] \\[\ abla p=\\frac{p\\boldsymbol{r}+q\\boldsymbol{a}}{p^{2}+q^{2}}, \\Delta p=\\frac{2p}{p^{2}+q^{2}}\\] \\[|\ abla p|^{2}=\\frac{p^{2}+a^{2}}{p^{2}+q^{2}}, \ abla p\\cdot\ abla q=0\\] and \\[\ abla p\\cdot\ abla\\Psi_{J}=\\Psi_{J}^{\\prime}\ abla p\\cdot\ abla\\tilde{r}= \\Psi_{J}^{\\prime}|\ abla p|^{2}\\] where \\(\\Psi_{J}^{\\prime}\\) is the complex derivative of \\(\\Psi(\\tilde{r},\\tilde{t})\\) with respect to \\(\\tilde{r}\\) (keeping in mind that \\(\\Psi(\\pm\\tilde{r},\\tilde{t})\\) are analytic in \\(\\tilde{r}\\) for \\(p>0\\)), \\[\\Psi_{J}^{\\prime}=\\frac{\\partial\\Psi_{J}}{\\partial\\tilde{r}}=-\\frac{\\tilde{t}} {2\\pi i\\tilde{r}^{2}(\\tilde{t}^{2}-\\tilde{r}^{2})^{2}}.\\] ## 5 Conclusions Although I have concentrated on the wave equation in four-dimensional Minkowski space \\(\\,\\mathbb{R}^{3,1}\\), similar considerations apply in \\(\\,\\mathbb{R}^{n,1}\\). In fact, the awkward extension of the propagator, using the Cauchy kernel in time but the complex distance in space, becomes much more natural when \\(\\tilde{G}(\\vec{\\boldsymbol{r}},\\tilde{t})\\) is viewed as the retarded part of the analytic continuation of the fundamental solution \\(G_{E}(\\boldsymbol{R})\\) of Laplace's equation in Euclidean \\(\\,\\mathbb{R}^{n+1}\\)[K0, K3], based on the complex distance \\[\\tilde{R}=\\sqrt{\\vec{\\boldsymbol{R}}\\cdot\\vec{\\boldsymbol{R}}},\\qquad\\vec{ \\boldsymbol{R}}\\in\\,\\mathbb{C}^{n+1},\\] whose branch points form a sphere \\(S^{n-1}\\) in \\(\\,\\mathbb{R}^{n+1}\\) of codimension 2 and radius \\(|\\,\\mathrm{Im}\\,\\,\\vec{\\boldsymbol{R}}|\\). The extended delta function \\(\\tilde{\\delta}_{{}_{E}}(\\vec{\\boldsymbol{R}})\\),3 defined by applying the Laplacian in \\(\\boldsymbol{R}\\) to the extension \\(\\tilde{G}_{E}(\\vec{\\boldsymbol{R}})\\), is supported on \\(S^{n-1}\\) for odd \\(n\\geq 3\\), but a branch cut, consisting of a'membrane' bounded by \\(S^{n-1}\\), is needed in all other cases.4 Given any test function \\(f\\) in \\(\\,\\mathbb{R}^{n+1}\\), the convolution Footnote 3: The subscript distinguishes \\(\\tilde{\\delta}_{{}_{E}}(\\vec{\\boldsymbol{R}})\\) from the Minkowskian \\(\\tilde{\\delta}(\\tilde{x})\\) in (9). Footnote 4: This is because \\(G_{E}(\\boldsymbol{R})=c_{n}/R^{n-1}\\) for \\(n\\geq 2\\) and \\(G_{E}=c_{1}\\log R\\) for \\(n=1\\). \\[\\tilde{f}(\\vec{\\boldsymbol{R}})=\\int_{\\mathbb{R}^{n+1}}\\tilde{\\delta}_{{}_{E}} (\\vec{\\boldsymbol{R}}-\\boldsymbol{R}^{\\prime})f(\\boldsymbol{R}^{\\prime})\\,dV( \\boldsymbol{R}^{\\prime}) \\tag{58}\\] defines an extension of \\(f\\) to \\(\\,\\mathbb{C}^{n+1}\\), non-holomorphic in general, whose restriction to the Minkowski subspace \\(\\,\\mathbb{R}^{n,1}\\), obtained by letting \\(\\vec{\\boldsymbol{R}}=(\\boldsymbol{r},it)\\), is a solution of the following initial-value problem for the wave equation: \\[(\\partial_{t}^{2}-\\Delta_{\\boldsymbol{r}})\\tilde{f}(\\boldsymbol{r },it)=0 \\tag{59}\\] \\[\\tilde{f}(\\boldsymbol{r},0)=f(\\boldsymbol{r},0)\\] (60) \\[(\\partial_{t}-i\\partial_{b})\\tilde{f}(\\boldsymbol{r},b+it)\\mid_{ b=t=0}=0. \\tag{61}\\] For odd \\(n\\geq 3\\), the proof of (59) is based on the fact that \\(\\tilde{\\delta}_{{}_{E}}\\) is distributed uniformly on \\(S^{n-1}\\) and hence \\(\\tilde{f}\\) is a spherical mean of \\(f\\)[J55]. This relates the support of \\(\\tilde{\\delta}_{{}_{E}}\\) for odd \\(n\\geq 3\\) to Huygens principle. 
The other cases can be treated by applying a distributional version of Hadamard's method of descent. Equation (61) states that \\(\\tilde{f}(\\boldsymbol{r},b+it)\\) satisfies the Cauchy-Riemann equation in its last variable; but since this holds only at one point, it does not imply analyticity -- as it cannot since \\(f(\\boldsymbol{r},b)\\) need not have any analytic continuation in \\(b\\). If one exists, it is indeed given by \\(\\tilde{f}(\\boldsymbol{r},b+it)\\). This generalizes an old theorem by Paul Garabedian [G64, pp 191-202]. ## Acknowledgements I thank Dr. Arje Nachman for his sustained support of my research, most recently through AFOSR Grant #FA9550-04-1-0139. ## References * [BG91] C A Berenstein and R Gay, Complex Variables: An Introduction. Springer-Verlag, New York, 1991 * [BG95] C A Berenstein and R Gay, Complex Analysis and Special Topics in Harmonic Analysis. Springer-Verlag, New York, 1995 * [B98] C A Berenstein, Integral geometry, Radon transforms and complex analysis, Springer-Verlag, Lecture Notes in Math. **1684,** pp 1-33, 1998 * [G64] P R Garabedian, Partial Differential Equations. Chelsea, New York, 1964; AMS Chelsea, Providence, 1998 * [J55] F John, Plane Waves and Spherical Means. Interscience, New York, 1955 * [K88] A Kaneko, Introduction to Hyperfunctions. Kluwer, 1988 * [KS99] G Kato and D C Struppa, Fundamentals of Algabraic Microlocal Analysis. Marcel Dekker, 1999 * [K94] G Kaiser, A Friendly Guide to Wavelets. Birkhauser, Boston, 1994 (sixth printing, 1999) * [K0] G Kaiser, Complex-distance potential theory and hyperbolic equations, in Clifford Analysis, J Ryan and W Sprossig (editors) Birkhauser, Boston, 2000. [http://arxiv.org/abs/math-ph/9908031](http://arxiv.org/abs/math-ph/9908031) * [K3] G Kaiser, Physical wavelets and their sources: Real physics in complex space-time. Topical Review, Journal of Physics A: Mathematical and General Vol. 36 No. 30 (2003) R29-R338. * [K4] G Kaiser, Making electromagnetic wavelets. J. Phys. A: Math. Gen. 37:5929-5947, 2004. [http://arxiv.org/abs/math-ph/math-ph/0402006](http://arxiv.org/abs/math-ph/math-ph/0402006) * [K4a] G Kaiser, Making electromagnetic wavelets II: Spheroidal shell antennas. Preprint, August 2004. [http://arxiv.org/abs/math-ph/0408055](http://arxiv.org/abs/math-ph/0408055)
We study a class of localized solutions of the wave equation, called eigenwavelets, obtained by extending its fundamental solutions to complex space-time in the sense of hyperfunctions. The imaginary spacetime variables \\(y\\), which form a timelike vector, act as scale parameters generalizing the scale variable of wavelets in one dimension. They determine the shape of the wavelets in spacetime, making them pulsed beams that can be focused as tightly as desired around a single ray by letting \\(y\\) approach the light cone. Furthermore, the absence of any sidelobes makes them especially attractive for communications, remote sensing and other applications using acoustic waves. (A similar set of 'electromagnetic eigenwavelets' exists for Maxwell's equations.) I review the basic ideas in Minkowski space \\(\\,\\mathbb{R}^{3,1}\\), then compute sources whose realization should make it possible to radiate and absorb such wavelets. This motivates an extension of Huygens' principle allowing equivalent sources to be represented on shells instead of surfaces surrounding a bounded source.
Give a concise overview of the text below.
arxiv-format/0409220v1.md
# Solar System Science with SKA B.J. Butler[MC]NRAO, Socorro, NM, USA, [email protected], D.B. Campbell[MC]Cornell University, Ithaca, NY, USA, I. de Pater[MC]University of California at Berkeley, Berkeley, CA, USA, D.E. Gary[MC]New Jersey Institute of Technology, Newark, NJ, USA ## 1 Introduction Radio wavelength observations of solar system bodies are an important tool for planetary scientists. Such observations can be used to probe regions of these bodies which are inaccessible to all other remote sensing techniques. For solid surfaces, depths of up to meters into the subsurface are probed (the rough rule of thumb is that depths to \\(\\sim\\)10 wavelengths are sampled). For giant planet atmospheres, depths of up to 10's of bars are probed. Probing these depths yields unique insights into the bodies, their composition, physical state, dynamics, and history. The ability to resolve this emission is important in such studies. The VLA has been the state-of-the-art instrument in this respect for the past 20 years, and its power is evidenced by the body of literature in planetary science utilizing its data. With its upgrade (to the EVLA), it will remain in this position in the near future. However, even with that upgrade, there are still things beyond its capabilities. For these studies, the SKA is the only answer. We investigate the capabilities of SKA for solar system studies below, including studies of the Sun. We also include observations of extrasolar giant planets. Such investigations have appeared before (for example, in the EVLA science cases in a general sense, and more specifically in de Pater (1999)), and we build on those previous expositions here. ## 2 Instrumental Capabilities For solar system work the most interesting frequencies in most cases are the higher ones, since the sources are mostly blackbodies to first order (see discussion below for exceptions). We are very interested in the emission at longer wavelengths, of course, but the resolution and source detectability are maximized at the higher frequencies. To frame the discussion below, we need to know what those resolutions and sensitivities are. We take our information from the most recently released SKA specifications (Jones 2004). The current specifications for SKA give a maximum baseline of 3000 km. Given that maximum baseline length, Table 1 shows the resolution of SKA at three values of the maximum baseline, assuming we can taper to the appropriate lengthif desired. In subsequent discussion we will translate these resolutions to physical dimensions at the distances of solar system bodies. The specification calls for \\(A/T\\) of 5000 at 200 MHz; 20000 from 500 MHz to 5 GHz; 15000 at 15 GHz; and 10000 at 25 GHz. The specification also calls for 75% of the collecting area to be within 300 km. Let us assume that 90% of the collecting area is within 1000 km. The bandwidth specification is 25% of the center frequency, up to a maximum of 4 GHz, with two independently tunable passbands and in each polarization (i.e., 16 GHz total bandwidth at the highest frequencies). Given these numbers, we can then calculate the expected flux density and brightness temperature noise values, as shown in Table 2. ## 3 Giant Planets Observations of the giant planets in the frequency range of SKA are sensitive to both thermal and nonthermal emissions. 
These emissions are received simultaneously, and can be distinguished from each other by examination of their different spatial, polarization, time (e.g., for lightning), and spectral characteristics. Given the sensitivity and resolution of SKA (see Table 3), detailed images of both of these types of emission will be possible. We note, however, the difficulty in making images with a spatial dynamic range of \(>\) 1000 (take the case of Jupiter, with a diameter of 140000 km and a resolution of \(\sim\)100 km) - this will be challenging, not only in the measurements (good short spacing coverage - down to spacings of order meters - is required), but in the imaging itself.

\begin{table} \begin{tabular}{c c c c} \hline \hline \(\nu\) (GHz) & \(\theta_{300}\) & \(\theta_{1000}\) & \(\theta_{3000}\) \\ \hline 0.5 & 410 & 120 & 40 \\ 1.5 & 140 & 40 & 14 \\ 5 & 40 & 12 & 4 \\ 25 & 8 & 3 & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: Resolution in masec for SKA.

\begin{table} \begin{tabular}{c c c c} \hline \hline & Distance & \multicolumn{2}{c}{resolution (km) \({}^{*}\)} \\ Body & (AU) & \(\nu\) = 2 GHz & \(\nu\) = 20 GHz \\ \hline Jupiter & 5 & 120 & 10 \\ Saturn & 9 & 210 & 20 \\ Uranus & 19 & 420 & 40 \\ Neptune & 30 & 690 & 70 \\ \hline \hline \end{tabular} \({}^{*}\) assuming maximum baseline of 1000 km \end{table} Table 3: SKA linear resolution for giant planets.

### Nonthermal emission

Nonthermal emissions from the giant planets at frequencies between 0.15 and 20 GHz are limited to synchrotron radiation and atmospheric lightning. Both topics have been discussed before in connection with SKA by de Pater (1999). We review and update these discussions here.

#### 3.1.1 Synchrotron radiation

Synchrotron radiation results from energetic electrons (\(\sim\) 1-100 MeV) trapped in the magnetic fields of the giant planets. At present, synchrotron emission has only been detected from Jupiter, where radiation at wavelengths longer than about 6 cm is dominated by this form of emission (Berge & Gulkis 1976). Saturn has no detectable synchrotron radiation because the extensive ring system, which is almost aligned with the magnetic equatorial plane, absorbs energetic particles (McDonald, Schardt, & Trainor 1980). Both Uranus and Neptune have relatively weak magnetic fields, with surface magnetic field strengths \(\sim\)20-30 times weaker than Jupiter's. Because the magnetic axes make large angles (50-60\({}^{\circ}\)) with the rotational axes of the planets, the orientation of the field of Uranus with respect to the solar wind is in fact not too dissimilar from that of Earth (because its rotational pole is nearly in the ecliptic), while the magnetic axis of Neptune is pointed towards the Sun once each rotation period. These profound changes in magnetic field topology have large effects on the motion of the local plasma in the magnetosphere of Neptune. It is unclear if there is a trapped population of high energy electrons in the radiation belt of either planet, a necessary condition for the presence of synchrotron radiation. Before the Voyager encounter with the planet, de Pater & Goertz (1989) postulated the presence of synchrotron radiation from Neptune. Based on the calculations in their paper, the measured magnetic field strengths, and 20-cm VLA observations (see, e.g., de Pater, Romani, & Atreya 1991), we would estimate any synchrotron radiation from the two planets not to exceed \(\sim\)0.1 mJy.
This, or even a contribution one or two orders of magnitude smaller, is trivial to detect with the SKA. It would be worthwhile for the SKA to search for potential synchrotron emissions off the disks of Uranus and Neptune (and SKA can easily distinguish the synchrotron emission from that from the disk based on the spatial separation), since this information would provide a wealth of information on the inner radiation belts of these planets. Jupiter's synchrotron radiation has been imaged at frequencies between 74 MHz and 22 GHz (see, e.g., de Pater, 1991; de Pater, Schulz, & Brecht 1997; Bolton et al. 2002; de Pater & Butler 2003; de Pater & Dunn 2003). A VLA image of the planet's radio emission at \\(\\lambda=20\\) cm is shown in Figure 1a; the spatial distribution of the synchrotron radiation is very similar at all frequencies (de Pater & Dunn 2003). Because the radio emission is optically thin, and Jupiter rotates in 10 hours, one can use tomographic techniques to recover the 3D radio emissivity, assuming the emissions are stable over 10 hours. An example is shown in Figure 1b (Sault et al. 1997; Leblanc et al. 1997; de Pater & Sault 1998). The combination of 2D and 3D images is ideal to deduce the particle distribution and magnetic field topology from the data (Dulk et al. 1997; de Pater & Sault 1998; Dunn, de Pater, & Sault 2003). The shape of Jupiter's radio spectrum is determined by the intrinsic spectrum of the synchrotron radiating electrons, the spatial distribution of the electrons and Jupiter's magnetic field. Spectra from two different years (1994 and 1998) are shown in Figure 2 (de Pater et al. 2003; de Pater & Dunn 2003). The spectrum is relatively flat shortwards of 1-2 GHz, and drops off more steeply at higher frequencies. As shown, there are large variations over time in the spectrum shortwards of 1-2 GHz, and perhaps also at the high frequencies, where the only two existing datapoints at 15 GHz differ by a factor of \\(\\sim\\)3. Changes in the radio spectrum most likely reflect a change in either the spatial or intrinsic energy distribution of the electrons. The large change in spectral shape between 1994 and 1998 has been attributed to pitch angle scattering by plasma waves, Coulomb scattering and perhaps energy degradation by dust in Jupiter's inner radiation belts, processes which affect in particular the low energy distribution of the electrons. With SKA we may begin investigating the cause of such variability through its imaging capabilities at high angular resolution, and simultaneous good u-v coverage at short spacings. As shown by de Pater (1999), this is crucial for intercomparison at different frequencies. With such images we can determine the spatial distribution of the energy spectrum of electrons, which is tightly coupled to the (still unknown) origin and mode of transport (including source/loss terms) of the high energy electrons in Jupiter's inner radiation belts. \\begin{table} \\begin{tabular}{c c c c c c c} \\hline \\hline \\(\ u\\) (GHz) & \\(\\Delta F_{300}\\) & \\(\\Delta T_{B_{300}}\\) & \\(\\Delta F_{1000}\\) & \\(\\Delta T_{B_{1000}}\\) & \\(\\Delta F_{3000}\\) & \\(\\Delta T_{B_{3000}}\\) \\\\ \\hline 0.5 & 97 & 2.3 & 81 & 22 & 73 & 170 \\\\ 1.5 & 56 & 1.3 & 47 & 12 & 42 & 100 \\\\ 5 & 31 & 0.7 & 26 & 6.8 & 23 & 55 \\\\ 25 & 34 & 0.8 & 29 & 7.6 & 26 & 62 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Sensitivities for SKA in nJy and K in 1 hour of observing. 
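As a brief aside on the numbers in Tables 1 and 3 (and Table 4 below): the resolution entries follow from the simple diffraction limit \(\theta\approx\lambda/B\), scaled to the distance of each body. The short Python sketch below reproduces their scale; it is purely illustrative (not part of the original text), uses the nominal distances quoted in the tables, and adopts the 1000 km baseline of the table footnotes for the linear resolutions.

```python
# Minimal sketch: diffraction-limited resolution of SKA and the corresponding
# linear resolution at the giant planets (cf. Tables 1 and 3). Distances are
# the nominal opposition values used in the text.

C = 2.998e8              # speed of light, m/s
AU = 1.496e11            # astronomical unit, m
RAD_TO_MAS = 2.06265e8   # radians -> milliarcseconds

def theta_mas(freq_hz, baseline_m):
    """Fringe spacing lambda/B in milliarcseconds."""
    return (C / freq_hz) / baseline_m * RAD_TO_MAS

def linear_res_km(freq_hz, baseline_m, dist_au):
    """Projected resolution at a body's distance, in km."""
    return (C / freq_hz) / baseline_m * dist_au * AU / 1e3

# Table 1: angular resolution for 300, 1000 and 3000 km maximum baselines
for f_ghz in (0.5, 1.5, 5.0, 25.0):
    row = [theta_mas(f_ghz * 1e9, b * 1e3) for b in (300, 1000, 3000)]
    print(f"{f_ghz:4.1f} GHz:", "  ".join(f"{r:6.0f} mas" for r in row))

# Table 3: linear resolution for the giant planets on a 1000 km baseline
for body, d_au in (("Jupiter", 5), ("Saturn", 9), ("Uranus", 19), ("Neptune", 30)):
    res = [linear_res_km(f * 1e9, 1000e3, d_au) for f in (2, 20)]
    print(f"{body:8s}: {res[0]:5.0f} km at 2 GHz, {res[1]:4.0f} km at 20 GHz")
```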
#### 3.1.2 Lightning Lightning appears to be a common phenomenon in planetary atmospheres. It has been observed on Earth, Jupiter, and possibly Venus (Desch et al., 2002). Electrostatic discharges on Saturn and Uranus have been detected by spacecraft at radio wavelengths, and are probably caused by lightning. The basic mechanism for lightning generation in planetary atmospheres is believed to be collisional charging of cloud droplets followed by gravitational separation of oppositely charged small and large particles, so that a vertical potential gradient develops. The amount of charges that can be separated this way is limited; once the resulting electric field becomes strong to ionize the intervening medium, a rapid 'lightning stroke' or discharge occurs, releasing the energy stored in the electric field. For this process to work, the electric field must be large enough, roughly of the order of 30 V per electron mean free path in the gas, so that an electron gains sufficient energy while traversing the medium to cause a collisional ionization. When that condition is met, a free electron will cause an ionization at each collision with a gas molecule, producing an exponential cascade (Gibbard et al., 1999). In Earth's atmosphere, lightning is almost always associated with precipitation, although significant large scale electrical discharges also occur occasionally in connection with volcanic eruptions and nuclear explosions. By analogy, lightning on other planets is only expected in atmospheres where both convection and condensation take place. Moreover, the condensed species, such as water droplets, must be able to undergo collisional charge exchange. It is possible that lightning on other planets is triggered by active volcanism (such as possibly on Venus or Io). We believe that SKA would be an ideal instrument to search for lightning on other planets; the use of multiple beams would facilitate discrimination against lightning in our own atmosphere, and simultaneous observations at different frequencies Figure 1: Radio images of Jupiter’s synchrotron emission. a) (left) Image made from VLA data taken at a frequency of 1450 MHz. Both the thermal (confined to Jupiter’s disk) and nonthermal emissions are visible. The resolution is \\(\\sim\\)0.3 \\(R_{J}\\), roughly the size of the high latitude emission regions. Magnetic field lines from a magnetic field model are superposed, shown every 15\\({}^{\\circ}\\) of longitude. After de Pater, Schulz, & Brecht (1997). b) (right) Three-dimensional reconstruction of the June 1994 data, as seen from Earth. The planet is added as a white sphere in this visualization. After de Pater & Sault (1998). would contribute spectral information. For such experiments one needs high time resolution (as for pulsars) and the ability to observe over a wide frequency range simultaneously, including in particular the very low frequencies (\\(<\\) 300 MHz). ### Thermal emission The atmospheres of the giant planets all emit thermal (blackbody) radiation. At radio wavelengths most of the atmospheric opacity has been attributed to ammonia gas, which has a broad absorption band near 22 GHz. Other sources of opacity are collision induced absorption by hydrogen, H\\({}_{2}\\)S, PH\\({}_{3}\\), H\\({}_{2}\\)O gases, and possibly clouds. Since the overall opacity is dominated by ammonia gas, it decreases approximately with \\(\ u^{-2}\\) for \\(\ u<22\\) GHz. One therefore probes deeper warmer layers in a planet's atmosphere at lower frequencies. 
Spectra of all four giant planets have been used to extract abundances of absorbing gases, in particular NH\\({}_{3}\\), and for Uranus and Neptune, H\\({}_{2}\\)S (H\\({}_{2}\\)S has been indirectly inferred for Jupiter and Saturn) (see, e.g., Briggs & Sackett, 1989; de Pater, Romani, & Atreya, 1991; de Pater & Mitchell, 1993; DeBoer & Steffes, 1996). The thermal emission from all four giant planets has been imaged with the VLA. To construct high signal-to-noise images, the observations need to be integrated over several hours, so that the maps are smeared in longitude and only reveal brightness variations in latitude. The observed variations have typically been attributed to spatial variations in ammonia gas, as caused by a combination of atmospheric dynamics and condensation at higher altitudes. Recently, Sault, Engel, & de Pater (2004) developed an algorithm to construct longitude-resolved images; they applied this to Jupiter, and their maps reveal, for the first time, hot spots at radio wavelengths which are strikingly similar to those seen in the infrared (Figure 3). At radio wavelengths the hot spots indicate a relative absence of NH\\({}_{3}\\) gas, whereas in the infrared they suggest a lack of cloud particles. The authors showed that the NH\\({}_{3}\\) abundance in hot spots was depleted by a factor of 2 relative to the average NH\\({}_{3}\\) abundance in the belt region, or a factor of 4 compared to zones. Ammonia must be depleted down to pressure levels of \\(\\sim\\)5 bar in the hot spots, the approximate altitude of the water cloud. The algorithm of Sault, Engel, & de Pater (2004) only works on short wavelength data of Jupiter, where the synchrotron radiation is minimal. Even the longitudinally smeared images are important in deducing the state of the deep atmospheres of the giant planets, as attested by numerous publications on the giant planets. Here we discuss specifically the case of Uranus, where radio images made with the VLA since 1981 at 2 and 6 cm have shown changes in the deep atmosphere which appear to be related to the changing insolation as the two poles rotate in and out of sunlight over the 40 year uranian year. Since the first images were made, the south pole has appeared brighter than equato Figure 2: The radio spectrum of Jupiter’s synchrotron emission as measured in September 1998 (lower curve) and June 1994 (upper curve), with high frequency data points from March 1991 (VLA) and January 2001 (Cassini; Bolton et al., 2002). Superposed are model calculations that match the data (Adapted from de Pater & Dunn, 2003). rial regions. In the last decade, however, the contrast between the two regions and the latitude at which the transition occurs has changed (Hofstadter & Butler, 2003). Figure 4 shows an image from the VLA made from data taken in the summer of 2003, along with an image at near-infrared wavelengths (1.6 \\(\\mu\\)m) taken with the adaptive optics system on the Keck telescope in October 2003 (Hammel et al., 2004). The VLA image clearly shows that the south pole is brightest, but it also shows enhanced brightness in the far-north (to the right on the image). At near-infrared wavelengths Uranus is visible in reflected sunlight, and hence the bright regions are indicative of clouds/hazes at high (upper troposphere) altitudes, presumably indicative of rising gas (with methane condensing out). We note that the bright band around the south pole is at the lower edge of the VLA-bright south polar region. 
It appears as if air is rising (with condensibles forming clouds) along the northern edge of the south polar region and descending over the pole, where the low radio opacity is indicative of dry air. With a sensitivity of SKA 2 orders of magnitude better than that of the VLA, and excellent instantaneous UV coverage, images of a planet's thermal emission can be obtained within minutes, rather than hours. This would enable direct mapping of hot spots at a variety of frequencies, including low frequencies where both thermal and nonthermal radiation is received. We can thus obtain spectra of hot spots, which allow us to derive the altitude distribution of absorbing gases, something that hitherto could only be obtained via _in situ_ probes. Equally exciting is the prospect of constructing complete 3D maps of the ammonia abundance (or total opacity, to be precise) at pressure levels between 0.5 and \\(\\sim\\) 20-50 bars (these levels vary some from planet to planet). Will ammonia, and other sources of opacity, be homogeneous in a planet's deep atmosphere (i.e., at pressure levels \\(\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}\\) 10 bar)? Could there be giant thunderstorms rising up from deep down, bringing up concentrations of ammonia and other gases from a planet's deep atmosphere, i.e., reflecting the true abundance at deep levels? Such scenarios have been theorized for Jupiter, but never proven (Showman & de Pater, 2004). A cautionary note here: although excellent images at multiple wavelengths yield, in principle, information on a giant planet's deep atmosphere, detailed modeling will be frustrated in part because of a lack of accurate laboratory data on gases and clouds that absorb at microwave frequencies, such as NH\\({}_{3}\\) and H\\({}_{2}\\)O. This severely limits the precision at which one can separate contributions from different gases. Planetary scientists are in particular eager to deduce the water abundance in a planet's deep atmosphere (e.g., Jupiter). The potential of deriving the water abundance in the deep atmosphere of Jupiter from microwave observations was reviewed by de Pater et al. (2004), while Janssen et al. (2004) investigated the potential of using limb darkening measurements on a spinning spacecraft. These studies show that it might be feasible to extract limits on the water abundance in the deep atmosphere, but only if the absorption profile of water and ammonia gas is accurately known. ### Rings Planetary rings emit thermal radiation, but this contribution is very small compared to the planet's thermal emission reflected from the rings. Although all 4 giant planets have rings, radio emissions have only been detected from Saturn's rings. Other rings are too tenuous to reflect detectable amounts of radio emissions (Jupiter's synchrotron radiation, Figure 3: Longitude-resolved image of Jupiter at 2 cm (Sault, Engel, & de Pater, 2004). though, does reflect the presence of its ring via absorption of energetic electrons). Several groups have gathered and analyzed VLA data of Saturn's rings over the past decades (see, e.g., Grossman, Muhleman, & Berge, 1989; van der Tak et al., 1999; Dunn, Molnar, & Fix, 2002). These maps, at frequencies \\(>\\) a few GHz, are usually integrated over several hours, and reveal the classical A, B, and C rings including the Cassini Division. Asymmetries, such as wakes, have been detected in several maps; research is ongoing as to correlations between observed asymmetries with wavelength and ring inclination angle. 
With the high sensitivity, angular resolution, and simultaneous coverage of short u-v spacings, maps of Saturn's rings can be improved considerably. This would allow higher angular resolution and less longitudinal smearing, allowing searches for longitudinal inhomogeneities. In addition, it may become feasible to detect the uranian \(\epsilon\) ring and perhaps even the main ring of Jupiter during ring plane crossings. We note that the detection of the Jupiter ring is made difficult by its being so faint and so close to an extremely bright Jupiter.

Figure 4: Two panels comparing VLA (left, Hofstadter & Butler, 2004) and Keck (right, Hammel et al., 2004) images of Uranus from the summer of 2003. In the radio image, red is brighter, cycling to lower brightness through orange, yellow, green, blue, and black. Note the edge of the radio bright region in the south (to the left in the images) corresponds to a prominent band in the infrared. The radio bright region in the north has no corresponding band. The faint line across the planet on the right-hand side of the infrared image is the ring system.

## 4 Terrestrial Planets

Radio wavelength observations of the terrestrial planets (Mercury, Venus, the Moon, Mars) are important tools for determining atmospheric, surface and subsurface properties. For surface and subsurface studies, such observations can help determine temperature, layering, thermal and electrical properties, and texture. For atmospheric studies, such observations can help determine temperature, composition, and dynamics. Given the sensitivity and resolution of SKA (see Table 4), detailed images of both of these types of emission will be possible. We note, however, similarly to the giant planet case above, the difficulty in making images with a spatial dynamic range of \(>10000\) (take the case of Venus, with a diameter of 12000 km and a resolution of \(\sim\)1 km). The Moon is a special case: mosaicing will likely be required, the emission is bright and complicated, and it is in the near field of SKA (in fact, many of the planets are formally in the near field, but the Moon is an extreme case). The VLA has been used to image the Moon (Margot et al., 1997), and near field imaging techniques are being advanced (Cornwell, 2004), but imaging of the Moon will be a challenge for SKA.

### Surface and subsurface

The depth to which temperature variations penetrate into the subsurface is characterized by the thermal skin depth, the depth at which the magnitude of the diurnal temperature variation has decreased by a factor of 1/e: \(l_{t}=\sqrt{kP/(\pi C_{p}\rho)}\), where \(k\) is the thermal conductivity, \(P\) is the rotational period, \(\rho\) is the mass density, and \(C_{p}\) is the heat capacity. For the terrestrial planets, using thermal properties of lunar soils and the proper rotation rates, the skin depths are of order a few cm (Earth and Mars) to a few 10's of cm (Moon, Mercury, and Venus, because of their slow rotation). The 1/e depth to which a radio wavelength observation at wavelength \(\lambda\) probes in the subsurface is given by: \(l_{r}=\lambda/(2\pi\sqrt{\epsilon_{r}}\tan\Delta)\), where \(\epsilon_{r}\) is the real part of the dielectric constant, and \(\tan\Delta\) is the "loss tangent" of the material - the ratio of the imaginary to the real part of the dielectric constant. For all of the terrestrial planets, given reasonable regolith dielectric constants and loss tangents, this is roughly 10 wavelengths.
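A short numerical sketch of these two depth scales is given below. The regolith parameters (thermal conductivity, density, heat capacity, dielectric constant, loss tangent) are nominal, lunar-soil-like values assumed purely for illustration; they are not taken from the text, and real regoliths span a considerable range.

```python
import math

# Illustrative evaluation of the thermal skin depth l_t and the radio 1/e
# depth l_r defined above. All material parameters below are assumed,
# lunar-regolith-like values, not values from the text.

def thermal_skin_depth(k, period_s, rho, cp):
    """l_t = sqrt(k P / (pi C_p rho)), in metres."""
    return math.sqrt(k * period_s / (math.pi * cp * rho))

def radio_depth(wavelength_m, eps_r, tan_delta):
    """l_r = lambda / (2 pi sqrt(eps_r) tan(Delta)), in metres."""
    return wavelength_m / (2.0 * math.pi * math.sqrt(eps_r) * tan_delta)

k_th  = 0.01    # thermal conductivity, W m^-1 K^-1  (assumed)
rho   = 1500.0  # bulk density, kg m^-3              (assumed)
cp    = 700.0   # specific heat, J kg^-1 K^-1        (assumed)
eps_r = 2.5     # real part of dielectric constant   (assumed)
tan_d = 0.01    # loss tangent                       (assumed)

day = 86400.0
# Diurnal (solar-day) periods drive the near-surface temperature wave
for body, period in (("Mars", 1.03 * day), ("Moon", 29.5 * day), ("Mercury", 176.0 * day)):
    lt_cm = 100.0 * thermal_skin_depth(k_th, period, rho, cp)
    print(f"{body:8s}: thermal skin depth ~ {lt_cm:4.0f} cm")

for lam_cm in (1.0, 6.0, 20.0):
    lr = radio_depth(lam_cm / 100.0, eps_r, tan_d)
    print(f"lambda = {lam_cm:4.1f} cm: radio 1/e depth ~ {lr:5.2f} m "
          f"(~{lr / (lam_cm / 100.0):.0f} wavelengths)")
```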
So, the wavelengths of SKA are well matched to probing both above and below the thermal skin depths of the terrestrial planets. The thermal emission from Mercury has been mapped with the VLA and BIMA by Mitchel & de Pater (1994), who determined that not only was the subsurface probably layered, but that the regolith is likely relatively basalt free. Figure 5 shows a VLA observation, compared with the detailed model of Mitchell & de Pater. Observations with SKA will further determine our knowledge of these subsurface properties. Furthermore, given the 1 km resolution, mapping of the near-surface temperatures of the polar cold spots (inferred from the presence of odd radar scattering behavior - Harmon, Perillat, & Slade, 2001) will be possible, a valuable constraint on their composition. Finally, given accurate enough (well calibrated, on an absolute scale) measurements, constraints on the presence or absence of an internal dynamo may be placed. The question of the long wavelength emission from Venus could be addressed by SKA observations. Recent observations have verified that the emission from Venus at long wavelengths (\\(\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}\\) 6 cm) are well below predicted - by up to 200 K (Butler & Sault, 2003). Figure 6 shows this graphically. There is currently no explanation for this depression. Resolved images at long wavelengths (say 500 MHz, where the resolution of SKA is of order 100 km at the distance of Venus using only the 300 km baselines and less, and the brightness temperature sensitivity is about 3 K in 1 hour) will help in determining whether this is a global depression, or limited to particular regions on the planet. Although NASA has been sending multiple spacecraft to Mars, there are still uses for Earth-based radio wavelength observations. To our knowledge, there is currently no planned microwave mapper for a Mars mission, other than the deep sounding very long wavelength radar mappers (MARSIS, for example). So observations in the meter-to-cm wavelength range are still important for deducing the properties of the important near-surface layers of the planet. Observations of the seasonal caps as they form and subsequently recede would provide valuable constraints on their structure. Observations of the odd \"stealth\" region (Edgett et al., 1997) would help constrain its composition and structure, and in combination with imagery constrain its em \\begin{table} \\begin{tabular}{c c c c} \\hline \\hline & Distance & \\multicolumn{2}{c}{resolution (km) \\({}^{*}\\)} \\\\ Body & (AU) & \\(\ u=\\)2 GHz & \\(\ u=\\)20 GHz \\\\ \\hline Moon & 0.002 & 0.015 & 0.004 \\\\ Venus & 0.3 & 2 & 0.7 \\\\ Mercury, Mars & 0.6 & 4 & 1.3 \\\\ \\hline \\hline \\end{tabular} \\({}^{*}\\) assuming maximum baseline of 1000 km \\end{table} Table 4: SKA linear resolution for terrestrial planets. placement history. ### Atmosphere The Moon and Mercury have no atmosphere to speak of, but Venus and Mars will both benefit from SKA observations of their atmospheres. Short wavelength observations of the venusian atmosphere (\\(\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}\\) 3 cm) probe the lower atmosphere, below the cloud layer (\\(\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}\\) 40 km). Given the abundance of sulfur-bearing molecules in the atmosphere, and their high microwave opacity, such observations can be used to determine the abundances and spatial distribution of these molecules. Jenkins et al. 
(2002) have mapped Venus with the VLA at 1.3 and 2 cm, determining that the below-cloud abundance of SO\\({}_{2}\\) is lower than that inferred from infrared observations, and that polar regions have a higher abundance of H\\({}_{2}\\)SO\\({}_{4}\\) vapor than equatorial regions, supporting the hypothesis of Hadley cell circulation. VLA observations are hampered both by sensitivity and spatial dynamic range. The EVLA will solve part of the sensitivity problem, but will not solve the instantaneous spatial dynamic range problem - only the SKA can do both. Given SKA observations, cloud features (including at very small scales), and temporal variation of composition (which could be used as as proxy to infer active volcanism, since it is thought that significant amounts of sulfur-bearing molecules would be released in such events) could be sensed and monitored. Observations of the water in the Mars atmosphere with the VLA have provided important Figure 5: Image of Mercury at 1.3 cm made from data taken at the VLA (Mitchel & de Pater 1994). The left panel shows the image, where red is brighter (hotter), cycling to lower brightness through orange, yellow, green, blue, and purple to white. The right panel shows this image after subtraction of a detailed model. The solid line is the terminator, the circle is Caloris basin. The model does well except at the terminator and in polar regions, most likely because of unmodelled topography and surface roughness. constraints on atmospheric conditions and the climate of the planet (Clancy et al., 1992). The 22 GHz H\\({}_{2}\\)O line is measured, and emission is seen along the limb, where pathlengths are long (this fact is key - the resolution of the atmosphere along the limb is critical). Figure 7 shows an image of this. For added sensitivity in these kinds of observations (needed to improve the deduction of temperature and water abundance in the atmosphere), only the SKA will help. ## 5 Large Icy Bodies In addition to their odd radar scattering properties (see the Radar section below), the Galilean satellites Europa and Ganymede exhibit unusually low microwave emission (de Pater, Brown, & Dickel, 1984; Muhleman et al., 1986; Muhleman & Berge, 1991). Observations with SKA will determine the deeper subsurface properties of the Galilean satellites, Titan, the larger uranian satellites, and even Triton, Pluto, and Charon. For example, given a resolution of 40 km at 20 GHz (appropriate for the mean distance to Uranus), maps of hundreds of pixels could be made of the uranian moons Titania, Oberon, Umbriel, Ariel, and Miranda. Pushing to 3000 km baselines, maps of tens of pixels could even be made of the newly discovered large KBOs Quaoar and Sedna (Quaoar is estimated to be \\(\\sim\\)40 masec in diameter, Sedna about half that, (Brown & Trujillo, 2004; Brown, Trujillo, & Rabinowitz, 2004)). These bodies are some 10's of K in physical temperature, probably with an emissivity of \\(\\sim\\)0.9 (by analog with the icy satellites), so with a brightness temperature sensitivity of a few K in a few masec beam, SKA should have no problem making such maps with an SNR of the order of 10's in each pixel. SKA will be unique in its ability to make such maps of these bodies - optical images will come nowhere near this resolution unless space-based interferometers become a reality. ## 6 Small Bodies Perhaps the most interesting solar system science with SKA will involve the smaller bodies in the solar system. 
Because of their small size, their emission is weak, and they have therefore Figure 6: Microwave brightness temperature spectrum of Venus, from Butler & Sault (2003). The depression of the measured emission compared to models at long wavelengths, up to 200 K, is evident. Figure 7: Map of water vapor in the Mars atmosphere made from data taken at the VLA in 1991. The colored background is the thermal emission from the surface. The contours are the H\\({}_{2}\\)O emission, seen only along the limb. From Clancy et al. (1992). not been studied very extensively, particularly at longer wavelengths. Such bodies include the smaller satellites, asteroids, Kuiper Belt Objects (KBOs), and comets. These bodies are all important probes of solar system formation, and will yield clues as to the physical and chemical state of the protoplanetary and early planetary environment, both in the inner and outer parts of the solar system. ### Small satellites It is sometimes hypothesized that Phobos and/or Deimos are captured asteroids because of surface spectral reflectivity properties. This is inconsistent, however, with their current dynamical state and low internal density (see, e.g., the discussions in Burns 1992; Rivkin et al. 2002). The two moons could also have been formed via impact of a large asteroid into Mars, which could also have helped in forming the north-south dichotomy on the planet (Craddock 1994). Determination of the properties of the surface and near-surface could help unravel this mystery. These bodies are \\(\\sim\\)10 km in diameter, so at opposition will be \\(\\sim\\)30 masec in apparent diameter, so SKA will be able to map them with a few 10's of pixels on the moons. This will provide some of these important properties and their variation as a function of location on the moons (notably regolith depth and thermal and electrical properties). As another example, consider the eight outer small jovian satellites, about which little is known, either physically or chemically. All eight of them, with diameters of from 15 to 180 km (Himalia), could be resolved by SKA at 20 GHz, determining their shapes as well as their surface and subsurface properties. We note, however, that the imaging of these small satellites can be challenging, as they are often in close proximity to a very bright primary which may have complex brightness structure. As such, even with the specification that SKA must have a dynamic range of \\(10^{6}\\), it will not be trivial to make images of these small, relatively weak satellites. ### Main Belt Asteroids The larger of the main belt asteroids are the only remaining rocky protoplanets (bodies of order a few hundred to 1000 km in diameter), the others having been dispersed or catastrophically disrupted, leaving the comminuted remnants comprising the asteroid belt today (Davis et al. 1979). They have experienced divergent evolutionary paths, probably as a consequence of forming on either side of an early solar system dew line beyond which water was a significant component of the forming bodies. Vesta is thought to have accreted dry, consequently experiencing melting, core formation, and volcanism covering its surface with basalt (Drake 2001). Ceres and Pallas, thermally buffered by water never exceeding 400K, experienced aqueous alteration processes evidenced by clay minerals on their surfaces (Rivkin 1997). 
These three large MBAs all reach apparent sizes of nearly \(1^{\prime\prime}\) at opposition, so maps with hundreds of pixels across them can be made, with high SNR (the brightness temperature is of order 200 K, while the brightness temperature sensitivity is of order a few K). Such maps will directly probe regolith depth and properties across the asteroids, yielding important constraints on formation hypotheses. SKA will also be able to detect and map the smaller MBAs. Given the distances of the MBAs from the Sun, they typically have surface/sub-surface brightness temperatures (the brightness temperature is just the physical temperature multiplied by the emissivity) of \(\sim\) 200 K. Given a typical distance (at opposition) of 1.5 AU, this gives diameters of 2, 20, and 200 masec for MBAs of 1, 10, and 100 km radius, with flux densities of 0.3, 30, and 3000 \(\mu\)Jy at \(\lambda\) = 1 cm. So the larger MBAs will be trivial to detect and map, but the smallest of them will be somewhat more difficult to observe (though not beyond the sensitivity of SKA - see the discussion above on Instrumental Capabilities). There are more than 1500 MBAs with diameter \(>\) 20 km just in the IRAS survey (Tedesco et al. 2002).

### Near Earth Asteroids

In addition to being important remnants of solar system formation, NEAs are potential hazards to us here on Earth (Morbidelli et al. 2002). As such, their characterization is important (Cellino, Zappala, & Tedesco 2002). SKA will easily detect and image such asteroids. As they pass near the Earth, they are typically at a brightness temperature of 300 K, and pass at a distance of a few lunar distances (\(\sim\)0.005 AU). This distance gives diameters of 6, 60, and 600 masec for NEAs of 10, 100, and 1000 m radius, with flux densities of 0.005, 0.5, and 50 mJy at \(\lambda=1\) cm. Again, these will be easily detected and mapped. ALMA will also be an important instrument for observing these bodies (Butler & Gurwell 2001), but it is the combination of the data from ALMA and SKA that allows a complete picture of the surface and subsurface properties to be formed.

### Kuiper Belt Objects

In general KBOs are detected at optical/near-IR wavelengths in reflected sunlight. Since the albedo of comet Halley was measured (by spacecraft) to be 0.04, comets/KBOs are usually assumed to have a similar albedo (which most likely is not true). This assumed albedo is then used to derive an estimate of the size based on the magnitude of the reflected sunlight. Resolved images, and hence size estimates, only exist for the largest KBOs (see, e.g., Brown & Trujillo 2004). The only other possibility (ignoring occultation experiments - Cooray 2003) to determine the size is via the use of radiometry, where observations of both the reflected sunlight and longer wavelength observations of thermal emission are used to derive both the albedo and radius of the object. This technique has been used, for example, for asteroids in the IRAS sample (Tedesco et al. 2002). Although more than 100 KBOs have been found to date, only two have been detected in direct thermal emission, at wavelengths around 1 mm (Jewitt, Aussel, & Evans 2001; Margot et al. 2002a) - the emission is simply too weak. ALMA will be an extremely important telescope for observing KBOs (Butler & Gurwell 2001), but will just barely be able to resolve the largest KBOs (with a resolution of 12 masec at 350 GHz in its most spread out configuration).
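The flux density and angular size estimates quoted in this section follow from treating each body as a uniform blackbody disk at the quoted brightness temperature (Rayleigh-Jeans limit at \(\lambda\) = 1 cm). A minimal sketch is below; the radii, temperatures and distances are the illustrative values from the text, while the KBO case uses an assumed \(\sim\)40 K brightness temperature and 43 AU distance for a Quaoar-sized body.

```python
import math

# Flux density and angular size of a uniform-brightness blackbody disk,
# in the Rayleigh-Jeans limit. Radii, brightness temperatures and distances
# are the illustrative values used in the text (KBO values are assumed).

K_B = 1.381e-23   # Boltzmann constant, J/K
AU  = 1.496e11    # m
MAS = 4.8481e-9   # one milliarcsecond, in radians

def disk(radius_m, dist_au, t_b, wavelength_m=0.01):
    """Return (angular diameter in mas, flux density in Jy)."""
    diam_rad = 2.0 * radius_m / (dist_au * AU)
    omega = math.pi * diam_rad**2 / 4.0                 # solid angle, sr
    s_nu = 2.0 * K_B * t_b / wavelength_m**2 * omega    # W m^-2 Hz^-1
    return diam_rad / MAS, s_nu / 1e-26                 # mas, Jy

# Main belt asteroids: T_B ~ 200 K at ~1.5 AU
for r_km in (1, 10, 100):
    d, s = disk(r_km * 1e3, 1.5, 200.0)
    print(f"MBA r = {r_km:3d} km : {d:6.1f} mas, {1e6 * s:8.1f} uJy")

# Near Earth asteroids: T_B ~ 300 K at ~0.005 AU
for r_m in (10, 100, 1000):
    d, s = disk(float(r_m), 0.005, 300.0)
    print(f"NEA r = {r_m:4d} m  : {d:6.1f} mas, {1e3 * s:8.3f} mJy")

# A Quaoar-sized KBO: r ~ 600 km, T_B ~ 40 K, ~43 AU (assumed values)
d, s = disk(600e3, 43.0, 40.0)
print(f"KBO r = 600 km : {d:6.1f} mas, {1e6 * s:8.2f} uJy")
```

The resulting values agree with the \(\sim\)0.3-3000 \(\mu\)Jy (MBA) and \(\sim\)0.005-50 mJy (NEA) figures quoted above to within the rounding used in the text, and place a large KBO at the few-10's-of-\(\mu\)Jy level at 1 cm.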
SKA, with a resolution of a few masec and a brightness temperature sensitivity of a few K, will resolve all of the larger of the KBOs (larger than 100 km or so), and will easily detect KBOs with radii of 10's of km. Combined observations with ALMA and SKA will give a complete picture of the surface, near-surface, and deeper subsurface of these bodies. ### Comets In addition to holding information on solar system formation, comets are also potentially the bodies which delivered the building blocks of life (both simple and complex organic molecules) to Earth. As such, they are important astronomical targets, as we would like to understand their current properties and how that constrains their history. #### 6.5.1 Nucleus Long wavelengths (cm) are nearly unique in their ability to probe right to the surfaces of active comets. Once comets come in to the inner solar system, they generally produce so much dust and gas that the nucleus is obscured to optical, IR, and even mm wavelengths. At cm wavelengths, however, one can probe right to (and into) the nucleus of all but the most productive comets. For example, comet Hale-Bopp was detected with the VLA at X-band (Fernandez 2002). Given nucleus sizes of a few to a few 10's of km, and distances of a few tenths to 1 AU, the flux densities from cometary nuclei should be from about 1 \\(\\mu\\)Jy to 1 mJy at 25 GHz - easy to detect with SKA. Multi-wavelength observations should tell us not only what the surface and near-surface density is, but if (and how) it varies with depth. These nuclei should be roughly 10-100 masec in apparent size, so can be resolved at the high frequencies of SKA. With resolved images, in principle it would be possible to determine which areas were covered with active (volatile) material, i.e., ice, and which were covered with rocky material, and for the rocky material whether it was dust (regolith) or solid rock. #### 6.5.2 Ice and dust grain halo Large particles (rocks and ice cubes) are clearly shed from cometary nuclei as they become active, as shown by radar observations (Harmon et al. 1999). The properties of these activity byproducts are important as they contain information on the physical structure and composition of the comets from which they are ejected. Observations at the highest SKA frequencies should be sensitive to emission from these large particles (even though they also probe down through them to the nucleus), and can thus be used to make images of these particles - telling us what the distribution (both spatially, and the size distribution of the particles) and total mass is, and how it varies with time. #### 6.5.3 Coma Observations of cometary comae will tell us just what the composition of the comets is - both the gas to dust ratio, and the relative ratios of the volatile species. Historically, observations of cometary comae at cm wavelengths have been limited to OH, but with the sensitivity of SKA, other molecules such as formaldehyde (detected in comet Halley with the VLA - Snyder, Palmer, & de Pater, 1989) and CH should be observable. The advantage of long wavelength transitions is that we observe rotational transitions of the molecules, which are much easier to understand and accurately characterize in the statistical equilibrium and radiative transfer models (necessary to turn the observed intensities into molecular abundances). 
Millimeter wavelength observations of cometary molecules have proven fertile ground (see, e.g., Biver et al., 2002), but the cm transitions of molecules are also important as they probe the most populous energy states, and some unique molecules which do not have observable transitions in the mm-submm wavelengths. The volatile component of comets is \\(\\sim\\)80% water ice, with the bulk of the rest CO\\({}_{2}\\). All other species are present only in small quantities. As the comet approaches the Sun, the water starts to sublimate, and along with the liberated dust forms the coma and tails. At 1 AU heliocentric distance, the typical escape velocity of the water molecules is 1 km/s, and the lifetime against dissociation is about 80000 sec, which leads to a water coma of radius 80000 km. Although there have been some claims of direct detection of the 22 GHz water line in comets, a very sensitive search for this emission from Hale-Bopp detected no such emission (Graham et al., 2000). With SKA, such observations should be possible and will likely be attempted. The problem is that the resolution is too high - with such a large coma, most of the emission will be resolved out. Most of the water dissociates into H and OH. The hydroxyl has a lifetime of \\(1.6\\times 10^{5}\\) sec at 1 AU heliocentric distance, implying a large OH coma - of order \\(10^{\\prime}\\) apparent size at 1 AU geocentric distance. The OH is pumped into disequilibrium by solar radiation, and acts as a maser. As such, the emission can be quite bright, and is regularly observed at cm wavelengths by single dishes as it amplifies the galactic or cosmic background (Schloerb & Gerard, 1985). Since the spatial scale is so large, however, VLA observations of cometary OH have been limited to observations of only a few comets - Halley, Wilson, SL-2, and Hale-Bopp (de Pater, Palmer, & Snyder, 1991; Butler & Palmer, 1997). Figure 8 shows a VLA image of the OH emission from comet Halley. Though scant, these observations have helped demonstrate that the OH in cometary comae is irregularly distributed, likely due to quenching of the population inversion from collisions in the inner coma. Similar to the case for water above, SKA will resolve out most of this emission. It will certainly provide valuable observations of the distribution of OH in the inner coma, but not much better than is possible with the VLA currently. The real power of the SKA will be in observations of background sources amplified by the OH in the coma. The technique is described in Butler et al. (1997) and was demonstrated successfully for Hale-Bopp (Butler & Palmer, 1997). Figure 9 shows example spectra. As a comet moves relative to a background source, the OH abundance along a chord through the coma is probed. Since the SKA will be sensitive to very weak background sources, many such sources should be available for tracking at any time, providing a nearly full 2-D map of the coma at high resolution (each chord is sampled along a pencil beam through the coma with diameter corresponding to the resolution of the interferometer at the distance of the comet). Combined with single-dish observations,this should provide a very accurate picture of the OH in cometary comae. Among the five most common elements in cometary comae, the chemistry involving nitrogen is one of the least well understood (along with sulfur). 
In addition, ammonia is particularly important in terms of organic precursor molecules, and can also be used as a good thermometer for the location where comets formed - whether the nitrogen is in N\\({}_{2}\\) or NH\\({}_{3}\\) depends on the temperature of the local medium, among other things (Charnley & Rodgers, 2002). Ammonia has a rich microwave spectrum which has been extensively observed in interstellar molecular clouds (see, e.g., Ho & Townes, 1983). Observations of ammonia in cometary comae are therefore potentially very valuable in terms of determining current chemistry and formation history. Recently, comets Hyakutake and Hale-Bopp were observed in NH\\({}_{3}\\)(Bird et al., 1997; Hirota et al., 1999; Butler et al., 2002). Observations of cometary NH\\({}_{3}\\) will suffer from the same problem as the H\\({}_{2}\\)O and OH - the NH\\({}_{3}\\) coma is large (although about a factor of 10 smaller than the water coma). However, if the individual elements are relatively small, and have any reasonable single dish capability, the NH\\({}_{3}\\) may still be detected. ## 7 Radar Radar observations of solar system bodies contribute significantly to our understanding of the solar system. Radar has the potential to deliver information on the spin and orbit state, and the surface and subsurface electrical properties and texture of these bodies. The two most powerful current planetary radars are the 13 cm wavelength system on the 305 m Arecibo telescope and the 3.5 cm system on the 70 m Goldstone antenna. A radar that made use of the SKA for both transmitting and receiving the echo would have a sensitivity many hundreds of times greater than the Arecibo system, the most sensitive of the two current systems. However, while, in theory, it would be possible to transmit with all, or a substantial fraction of, the SKA antennas, the additional complexity of controlling transmitters at each antenna, providing adequate power and solving atmospheric phase problems makes this option potentially prohibitively expensive. Used with the Arecibo antenna as a transmitting site, an Arecibo/SKA radar system would have 30 to 40 times the sensitivity of the current Arecibo planetary radar accounting for integration time and possible use of a shorter wavelength than 13 cm. If it were combined with a specially built transmitting station (100 m antenna equivalent size, 5 MW of transmitted power, 3 cm wavelength) the SKA would have 150 to 200 times the sensitivity of the current Arecibo system. This sensitivity would open up new areas of solar system studies especially those related to small bodies and the satellites of the outer planets. Figure 9: Spectra of the OH emission from comet Hale-Bopp made as the comet occulted background sources. From Butler & Palmer (1997). Imaging with the current planetary radar systems is achieved by either measuring echo power as a function of the target body's delay dispersion and rotationally induced Doppler shift - delay-Doppler mapping - or by using a radio astronomy synthesis interferometer system to spatially resolve a radar illuminated target body. Delay-Doppler mapping of nearby objects such a Near Earth asteroids (NEAs) can achieve resolutions as high as 15 m (Figure 10) but such images suffer from ambiguity (aliasing) problems due to two or more locations on the body having the same distance and velocity relative to the radar system. 
Synthesis imaging of radar illuminated targets provides unambiguous plane-of-sky images but, to date, the spatial resolution has been considerably less than can be achieved by delay-Doppler imaging. A noted example of the synthesis imaging technique was the discovery of water ice at the poles of Mercury by using the Goldstone transmitter in combination with the Very Large Array (VLA), another is the discovery of the so-called \"Stealth\" region on Mars by that same combined radar (Figure 11). As discussed below, using the SKA as a synthesis instrument will not provide adequate spatial resolution for studies of Near Earth Objects (NEOs) but it will resolve them, mitigating the effects of ambiguities in delay-Doppler imaging. ### Terrestrial planets At the distances of the closest approaches of Mercury, Venus and Mars to the Earth, the spatial resolution of a 3,000 km baseline SKA at 10 GHz will be approximately 1 km, 0.5 km and 0.7 km, respectively. The SKA-based radar system would be capable of imaging the surface of Mercury at 1 km resolution with a 1.0-sigma sensitivity limit corresponding to a radar cross section per unit area of about -30 db, good enough to map to very high incidence angles. For Mars, the equivalent spatial resolution for the same sensitivity limit would be \\(<\\) 1 km. The very high absorption in the Venus atmosphere at 10 GHz would reduce the echo strength and, hence, limit the achievable resolution. However, short wavelength observations would complement the longer 13 cm imagery from Magellan, provide additional information about the electrical properties of the surface via studies of the polarization properties of the echo (Haldemann et al., 1997; Carter, Campbell, & Campbell, 2004), and monitor the surface for signs of current volcanic activity. For both Mercury and Mars, radar images at 1 km resolution would poten Figure 8: Images of the OH emission from comet Halley made with the VLA at low (left) and high (right) resolution. From de Pater, Palmer, & Snyder (1991). tially be of great interest for studying regolith properties on Mercury and probing the dust that covers much of the surface of Mars. For the polar ice deposits on Mercury the sensitivity would allow sub-km resolution, significantly better than the 2 km Arecibo delay-Doppler imagery of Harmon, Perillat, & Slade (2001). However, this will require the capability to perform delay-Doppler imaging within the SKA's synthesized beam areas. ### Icy Satellites Radar is uniquely suited to the study of icy surfaces in the solar system and a SKA based system would provide images (or at least detections) of these bodies in the parameters responsible for their unusual radar scattering properties. As shown by recent Arecibo radar observations of Iapetus, the third largest moon of Saturn, the radar reflection properties of icy bodies can be used to infer surface chemistry in that pure ice surfaces can be distinguished from ones which incorporate impurities such as ammonia that suppresses the low loss volume scattering properties of the ice (Black et al., 2004) The unusual radar scattering properties of the Galilean satellites have been known for some time (Campbell et al., 1978; Ostro et al., 1992). As such, they are inviting targets for a SKA radar. 
At a distance to the jovian system of 4.2 AU, the smallest spatial size of the SKAs synthesized beam would be about 6 km while, given the very high backscatter cross sections of the icy Galilean satellites, signal-to-noise considerations would allow imaging with about 5 km resolution, a good match to the size of the synthesized beam. Depending on the prospects for NASA's proposed Jupiter Icy Moons Mission (JIMO) and its instrument payload, radar images of the icy moons at resolutions of a few km would provide unique information about the regoliths/upper surface layers of the icy satellites. Past radar observations of Titan have been instrumental in shaping our ideas of what resides on the surface there - the existence of a deep, global methane/ethane ocean was disproved (Muhleman et al., 1990), but recent Arecibo radar observations have provided Figure 11: Images made with the combined Goldstone+VLA radar instrument. Mars (left) observations done in October 1988. Mercury (right) observations done in August 1991. In both images, areas of brighter radar reflectivity are red, cycling to lower reflectivity through orange, yellow, green, light blue, blue, purple, and black. After (Muhleman et al., 1991; Butler, Muhleman, & Slade, 1993; Muhleman, Grossman, & Butler, 1995). evidence for the possible presence of small lakes or seas (Campbell et al., 2003) The Cassini mission, just arriving in the saturnian system, will make radar reflectivity measurements of Titan, but they will not be global, nor will the resolution be as fine as desired. At a distance of 8.0 AU, the spatial resolution of a 3,000 km baseline SKA at 10 GHz will be approximately 12 km - global radar imagery at this scale would be a powerful tool for studying the surface and subsurface of this enigmatic body. Given the extreme sensitivity of the SKA for radar observations, it would even be possible to make detections of Triton and Pluto. At the distances of these bodies, it will probably not be possible to make resolved images of them (although theoretically it is possible, given the SKA resolution) we can still at least measure the bulk properties of their surfaces and make crude hemispherical maps. A SKA system could also be used to investigate the radar scattering properties of some of the smaller satellites of Jupiter, Saturn and Uranus. It will be possible to investigate the radio wavelength scattering properties of most of the satellites of Jupiter, satellites of Saturn with larger than 50 to 100 km and the five large satellites of Uranus. ### Small bodies #### 7.3.1 Primary scientific objectives While spacecraft have imaged a small number of asteroids and comets, Earth based planetary radars will be the dominant means for the foreseeable future for obtaining astrometry, and determining the dynamical state and physical properties of small bodies in the inner solar system. Internal structure and collisional histories, important for solar system formation theories, can be deduced from measurements of asteroid sizes and shapes and from detailed imagery of their surfaces. Variations in the reflection properties of main belt asteroids with distance from the sun could pinpoint the transition region from rocky to icy bodies, again important for theories of solar system formation. There is also considerable uncertainty as to the size distribution of comets that a SKA based radar system could resolve. Bernstein et al. 
(2004) have pointed out that there is a significant shortage of KBOs at small sizes if comets have nuclei that are in the 10 km range as currently thought. #### 7.3.2 Near Earth Asteroids Astrometry and characterization would be major objectives of an SKA based radar system. NEAs are of great interest due to their potential hazard to the Earth, as objectives for future manned space missions to utilize their resources and as clues to the early history of the solar system. Astrometry and measurements of their sizes and spin vectors will greatly reduce the uncertainties in projecting their future orbits including non-gravitational influences such as the Yarkovsky effect (Figure 12). Measurements of the shapes, sizes and densities will provide insights as into the internal structure of NEOs, important both for understanding their history and also for designing mitigation methods should an Figure 10: A shape model for the 0.5 km NEA 6489 Golevka derived from Arecibo delay-Doppler images (Hudson et al., 2000; Chesley et al., 2003). The colors indicate the relative size of local gravitational slopes. NEO pose a significant threat to Earth. Unambiguous surface imagery at resolutions of a few meters will give insights into their collisional histories while the polarization properties of the reflected echo can be used to detect the presence of regoliths. Shapes, sizes and surface structure are currently obtained from multiple aspect angle delay-Doppler images (Figure 10 and Hudson, 1993). A radar equipped SKA will have the capability to image NEOs out to about 0.3 AU from Earth allowing large numbers to be imaged at resolutions of less than 20 m. The current Arecibo 13 cm radar system has the capability to image NEOs with about 20 m resolution to distances of approximately 0.05 AU. With over 100 times Arecibo's current sensitivity, an SKA based radar system could achieve similar resolutions at 0.15 to 0.20 AU and much higher resolutions for closer objects. The synthesized beam of the SKA (assuming 3,000 km baseline and 10 GHz frequency) has a spatial resolution at 0.2 AU of about 300 m, very much larger than the achievable resolution based on the sensitivity but small enough to mitigate the effects of delay-Doppler ambiguities allowing improved shape modeling and surface imagery. Doppler discrimination in the synthesis imagery will provide the plane-of-sky direction of the rotation vector (de Pater et al., 1994) and polarization properties will elucidate regolith properties. Because of their implications for both the composition and internal structure of asteroids, measurements of densities would be a major objective of SKA observations of NEAs. The discovery of binary NEAs (Figure 13; Margot et al., 2002b) provided the first opportunity for direct measurements of densities for the 10-20% of NEAs that are estimated to be in binary configurations. However, while they provide important information about NEA densities, the primary and secondary components of these binaries are a particular class of NEAs (Margot et al., 2002b) and are not fully representative of the general population. An alternative method of estimating densities for NEAs is the measurement of the Yarkovsky effect via long term astrometric observations (Vokrouhlicky et al., 2004). The size of the effect is dependent on the spin rate, the thermal inertia of the surface and the mass. 
The first two of these can be measured or estimated, allowing the mass to be estimated and, hence, the density if the asteroid's volume is known via a shape model.

Figure 12: Prediction error ellipses for the location in time delay (distance) and Doppler shift (line-of-sight velocity) of the 0.5 km NEA 6489 Golevka for an Arecibo observation in 2003, based on not including (SUM1) and including (SUM2) the non-gravitational force known as the Yarkovsky effect. The actual measurement, indicated by "Arecibo astrometry", clearly shows that the Yarkovsky effect is important in modifying the orbits of small bodies (from Chesley et al., 2003).

Figure 13: A composite Arecibo delay-Doppler image of the binary near Earth asteroid 2000 DP107 showing the primary body with the location of the secondary on the dates shown in 2000. The diameters of the two bodies are about 800 m and 300 m, and the orbital radius and period are 2.6 km and 1.76 days, respectively, giving a density for the primary of approximately 1.7 g cm\({}^{-3}\) (Margot et al., 2002b). Figure courtesy of J.L. Margot.

#### 7.3.3 Main Belt Asteroids

A SKA based radar system would have a unique ability to measure the properties of small bodies out to the far edge of the main asteroid belt: sizes, shapes, albedoes and orbital parameters. The current Arecibo radar system has only been able to obtain a shape model for one MBA, Kleopatra (Figure 14; Ostro et al., 2000), and to measure the radio wavelength reflection properties of a relatively small number of asteroids near the inner edge of the belt (Magri et al., 2001), plus those for a few of the very largest MBAs such as Ceres and Vesta (M. Nolan, private communication). Main belt issues that a SKA based radar could address are:
1) The size distribution of MBAs would provide valuable constraints on material strength and, hence, on collisional evolution models;
2) Measurement of the proportion of MBAs that are in binary systems would provide information about the collisional evolution of the main belt, and detection of these systems would also provide masses and densities for a large number of MBAs;
3) Astrometry would also provide masses and densities via measurements of the gravitational perturbation from nearby passes of two bodies and also, for small bodies, from measurements of the Yarkovsky effect;
4) Radar albedo measurements would determine whether there is a switch within the main belt from rocky to icy objects and, if so, whether it is gradual or abrupt.

Figure 14: Shape models for the metallic main belt asteroid 216 Kleopatra derived from Arecibo delay-Doppler radar images. The model shows Kleopatra to be 217 km x 94 km x 81 km. It may be the remains of a collision of two former pieces of an ancient asteroid's disrupted core. The color coding indicates the gravitational slopes (Ostro et al., 2000).

#### 7.3.4 Comets

Spacecraft flybys have provided reasonably detailed information about three comets, Halley, Borrelly and Wild, and over the next 1-2 decades, prior to the completion of the SKA, a small number of additional comets will be studied from spacecraft such as the already launched Deep Impact and Rosetta missions and from potential new missions such as a successor to the failed Contour mission. Direct measurements of the sizes of three comets have indicated that cometary nuclei have very low optical albedo, and this has led to an upward revision of the size estimates of comets based on measurements of their absolute magnitudes. However, the very small sample means that the distribution of cometary albedoes is very uncertain and, hence, there is still considerable uncertainty as to the size distribution of comets. An SKA based radar system could resolve this issue, which has ramifications related to the assumed source of short period comets in the Kuiper belt. Bernstein et al. (2004) have pointed out that there is a significant shortage of KBOs at small sizes if comets have nuclei that are in the 10 km range as currently thought. A SKA based radar would be able to image cometary nuclei out to about 1 AU, obtaining sizes, shapes, rotation vectors, and actual nucleus surface images. For objects at larger distances, size estimates will be obtained from range dispersion and from rotation periods derived from radar light curves combined with measurements of Doppler broadening. Over time these measurements would be the major source of cometary size estimates.

### Technical issues

For many bodies, unambiguous plane-of-sky synthesis imagery is superior to delay-Doppler imagery. Consequently, for both imaging and astrometric observations, a SKA based radar would need to have the capability to do both traditional radar delay-Doppler observations and synthesis imaging of radar illuminated objects. For both range-Doppler imaging and astrometric observations of near earth objects, delay (range) resolutions of 20 ns or better will be required. At 10 GHz the angular resolution of even the proposed central compact array will be smaller than the angular size of some NEOs, requiring that for delay-Doppler and astrometric observations the SKA will still need to be used in an imaging mode. While adding to the complexity of the observations, the small spatial extent of the synthesized beam will greatly assist in mitigating the ambiguities inherent in delay-Doppler imaging. Delay-Doppler processing will require access to the complex outputs of the correlator at a 20 ns or better sampling rate. This requirement may not be dissimilar from that required for pulsar observations, but it will have the added complexity that the NEOs will be in the near field of the SKA.

## 8 Extrasolar Giant Planets

The detection of extrasolar giant planets is one of the most exciting discoveries of astronomy in the past decade. Despite the power of the radial velocity technique used to find these planets, it is biased toward finding planets which are near their primary and orbiting edge-on. To augment those planets found by radial velocity searches, detections using astrometry, which are most sensitive to planets orbiting face-on, are needed. Many researchers are eagerly searching for ways to directly detect these planets, so they can be properly characterized (only the orbit and a lower limit to the mass are known for most extrasolar planets). Below we will discuss potential contributions for SKA.

### Indirect detection by astrometry

The orbit of any planet around its central star causes that star to undergo a reflexive circular motion around the star-planet barycenter. By taking advantage of the incredibly high resolution of SKA, we may be able to detect this motion.
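Before developing the astrometric signal itself, the radar performance numbers quoted in the preceding sections can be checked with a few lines of arithmetic. The sketch below (in Python) uses only values stated above -- a 3,000 km maximum baseline at 10 GHz, a 0.2 AU target distance, and 20 ns delay sampling -- and is illustrative rather than a description of any planned SKA processing system.

```python
# Quick cross-check of the radar numbers quoted in the text above.
C = 2.998e8       # speed of light [m/s]
AU = 1.496e11     # astronomical unit [m]

freq = 10e9       # radar frequency [Hz]
b_max = 3.0e6     # maximum baseline [m] (3,000 km)
dist = 0.2 * AU   # target distance [m]
dt = 20e-9        # delay sampling interval [s]

wavelength = C / freq                 # ~3 cm
beam = wavelength / b_max             # synthesized beam [rad]
spot = beam * dist                    # plane-of-sky resolution at 0.2 AU [m]
range_depth = C * dt / 2.0            # range resolution per delay sample [m]

print(f"synthesized beam       ~ {beam:.1e} rad")
print(f"spot size at 0.2 AU    ~ {spot:.0f} m")      # ~300 m, as quoted above
print(f"range depth per sample ~ {range_depth:.1f} m")
```

The ~300 m synthesized spot is indeed far coarser than the few-meter delay resolution, which is the sense in which the beam serves mainly to break delay-Doppler ambiguities rather than to provide the imaging resolution itself.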
Making the usual approximation that the planet mass is small compared to the stellar mass, the stellar orbit projected on the sky is an ellipse with angular semi-major axis \\(\\theta_{r}\\) (in arcsec) given by: \\[\\theta_{r}=\\frac{m_{p}}{M_{*}}\\frac{a_{AU}}{D_{pc}}\\quad, \\tag{1}\\] where \\(m_{p}\\) is the mass of the planet, \\(M_{*}\\) is the mass of the star, \\(a_{AU}\\) is the orbital distance of the planet (in AU), and \\(D_{pc}\\) is the distance to the system (in parsecs). The astrometric resolution of SKA, or the angular scale over which changes can be discriminated (\\(\\Phi\\)), is proportional to the intrinsic resolution of SKA, and inversely proportional to the signal to noise with which the stellar flux density is detected (SNR\\({}_{*}\\)): \\[\\Phi=\\frac{\\theta_{HPBW}}{2\\cdot\\mathrm{SNR}_{*}}\\quad. \\tag{2}\\] This relationship provides the key to high precision astrometry: the astrometric accuracy increases both as the intrinsic resolution improves and also as the signal to noise ratio is increased. Astrometry at radio wavelengths routinely achieves absolute astrometric resolutions 100 times finer than the intrinsic resolution, and can achieve up to 1000 times the intrinsic resolution with special care. As long as the phase stability specifications for SKA will allow such astrometric accuracy to be achieved for wide angle astrometry, such accuracies can be reached. When the astrometric resolution is less than the reflexive orbital motion, that is, when \\(\\Phi\\ \\lower 2.15pt\\hbox{$\\buildrel<\\over{\\sim}$}\\ \\theta_{r}\\), SKA will detect that motion. We use the approximation that \\(\\theta_{HPBW}\\sim\\lambda/B_{max}\\), so that detection will occur when: \\[{\\rm SNR}_{*}\\ \\lower 2.15pt\\hbox{$\\buildrel>\\over{\\sim}$}\\ 10^{5}\\,{ \\lambda\\over B_{max}}\\left({m_{p}\\over M_{*}}\\,{a_{AU}\\over D_{pc}}\\right)^{-1}\\ \\ . \\tag{3}\\] The factor of \\(2\\times 10^{5}\\) enters in to convert from radians to arcseconds. Note, however, that astrometric detection of a planet requires that curvature in the apparent stellar motion be measured, since linear terms in the reflex motion are indistinguishable from ordinary stellar proper motion. This implies that at the very minimum, one needs three observations spaced in time over roughly half of the orbital period of the observed system. A detection of a planetary system with astrometry would thus require some type of periodic monitoring. We use the technique described in Butler, Wootten, & Brown (2003) to calculate the expected flux density from stars, and whether we can detect their wobble from the presence of giant planets. If all of the detectable stars for SKA (roughly 4300), had planetary companions, how many of them could be detected (via astrometry) with SKA? We assume that the planets are in orbits with semimajor axis of 5 AU. We consider 3 masses of planetary companions: 5 times jovian, jovian, and neptunian. We assume integration times of 5 minutes, at 22 GHz. From the Hipparcos catalog (Perryman et al., 1997), there are \\(\\sim\\) 1000 stars around which a 5*jovian companion could be detected, \\(\\sim\\) 620 stars around which a jovian companion could be detected, and \\(\\sim\\) 40 stars around which a neptunian companion could be detected. Virtually none of these stars are solar-type. 
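As a concrete illustration of eqs. (1)-(3), the short Python sketch below evaluates the reflex wobble and the stellar signal-to-noise ratio needed to resolve it. The 22 GHz frequency matches the integration assumption above, while the 3,000 km baseline and the example system (a solar-mass star at 10 pc with a Jupiter analog at 5 AU) are assumed values chosen only for illustration.

```python
# Reflex astrometric signal (eq. 1) and the required stellar SNR (eqs. 2-3).
M_JUP_IN_MSUN = 9.54e-4     # Jupiter mass in solar masses
RAD_TO_ARCSEC = 206265.0
C = 2.998e8                 # speed of light [m/s]

def reflex_arcsec(m_p_mjup, m_star_msun, a_au, d_pc):
    """Angular semi-major axis of the stellar wobble (eq. 1), in arcsec."""
    return (m_p_mjup * M_JUP_IN_MSUN / m_star_msun) * a_au / d_pc

def required_snr(theta_r_arcsec, freq_hz, b_max_m=3.0e6):
    """SNR at which the astrometric resolution (eq. 2) equals theta_r."""
    theta_hpbw = (C / freq_hz) / b_max_m          # intrinsic beam [rad]
    return theta_hpbw * RAD_TO_ARCSEC / (2.0 * theta_r_arcsec)

theta_r = reflex_arcsec(m_p_mjup=1.0, m_star_msun=1.0, a_au=5.0, d_pc=10.0)
print(f"reflex semi-major axis : {theta_r * 1e3:.2f} mas")   # ~0.5 mas
print(f"required SNR at 22 GHz : {required_snr(theta_r, 22e9):.1f}")
```

For this example the required SNR is of order unity, so the practical limit is simply detecting the star's flux density at all, which is what drives the catalog-based counts quoted in the surrounding text.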
From the Gliese catalog (Gliese & Jahreiss, 1988), there are \(\sim\) 1430 stars around which a 5*jovian companion could be detected, \(\sim\) 400 stars around which a jovian companion could be detected, and \(\sim\) 60 stars around which a neptunian companion could be detected. Of these, \(\sim\) 130 are solar analogs.

### Direct detection of gyro-cyclotron emission

Detection of the thermal and synchrotron emission from Jupiter, taken to the distances of the stars, is beyond even the sensitivity of SKA unless prohibitively large amounts of integration time are spent. However, Jupiter experiences extremely energetic bursts at long wavelengths. If extrasolar giant planets exhibit the same bursting behavior, SKA might be used to detect this emission. If such a detection occurred, it would provide information on the rotation period, the strength of the magnetic field, an estimate of the plasma density in the magnetosphere, and possibly the existence of satellites. The presence of a magnetic field is also potentially interesting for astrobiology, since such a field could shield the planet from the harsh stellar environment. Some experiments have already been done to try to detect this emission (Bastian, Dulk, & Leblanc, 2000).

These bursts come from keV electrons in the magnetosphere of the planet. The solar wind deposits these electrons, which can subsequently develop an anisotropy in their energy distribution, becoming unstable. When deposited in the auroral zones of the planet, emission results at the gyrofrequency of the magnetic field at the location of the electron (\(f_{g}=2.8B_{gauss}\) MHz, for magnetic field strength \(B_{gauss}\) in G). This kind of emission occurs on Earth, Saturn, Jupiter, Uranus, and Neptune in our solar system. The emission can be initiated or modulated by the presence of a satellite (Io, in the case of Jupiter). If we scaled the mean flux density of Jupiter at 30 MHz (\(\sim\) 50000 Jy at 4.5 AU) out to 10 pc, the resultant emission would only be 0.2 \(\mu\)Jy. This is very difficult to detect, given the expected sensitivity of SKA at the lowest frequencies. However, the emission is variable (over two orders of magnitude), some EGPs may have intrinsically more radiated power, and if the emission is beamed, there is a significant increase in the expected flux density.

The details of the expected radiated power from this emission mechanism are outlined in Farrell, Desch, & Zarka (1999) and Zarka et al. (2001). We summarize the discussion here. For those planets that emit long wavelength radio waves, there is a very good correlation between the radiated power and the input kinetic power from the solar wind. Given expressions for the solar wind input power and conversion factor, and a prediction of the magnetic moment of a giant planet, we can write the expected radiated power as: \[P_{rad}\sim 400\left(\frac{\omega}{\omega_{j}}\right)^{\frac{4}{5}}\left(\frac{M}{M_{j}}\right)^{\frac{4}{3}}\left(\frac{d}{d_{j}}\right)^{-\frac{8}{5}}\ \mathrm{[GW]}\, \tag{4}\] where \(\omega\), \(M\), and \(d\) are the rotational rate, mass, and distance to primary of the planet, and the subscripted \(j\) quantities are those values for Jupiter. The expected received flux density can then be easily calculated, assuming isotropic radiation. The frequency at which the power is emitted is limited at the high end by the maximum gyrofrequency of the plasma: \(f_{g}\sim 2.8B_{gauss}\) [MHz], for magnetic field strength \(B_{gauss}\) in G.
This usually limits such emissions to the 10's of MHz (Jupiter's cutoff is \\(\\sim\\)40 MHz), but in some cases (for the larger EGPs), can extend into the 100's of MHz. For this reason, these kinds of experiments might be better done with LOFAR, but there is still a possibility of seeing some of them at the lower end of the SKA frequencies. If we take the current list of EGPs and use the above formalism to calculate the expected flux density, we can determine which are the best candidates to try to observe gyrocyclotron emission from. In this exercise, we exclude those planets with cutoff frequencies \\(<\\) 10 MHz (Earth's ionospheric cutoff frequency), and those in the galactic plane (because of confusion and higher background temperature). Table 5 shows the top four candidates, from which it can be seen that the maximum predicted emission is of order a few mJy (note that Farrell, Desch, & Zarka (1999) found similar values despite using slightly different scaling laws). But, again, this is the mean emission, so bursts would be much stronger, and beaming could improve the situation dramatically. Given the multi-beaming capability of SKA, it would be productive to attempt monitoring of some of the best candidates for these kinds of outbursts in an attempt to catch one. ## 9 The Sun The Sun is a challenging object for aperture synthesis, especially over a wide frequency range, due to its very wide range of spatial scales (of order 1 degree down to 1\\({}^{\\prime\\prime}\\)), its lack of fine spatial structure below about 1\\({}^{\\prime\\prime}\\), its great brightness (quiet Sun flux density can be 10\\({}^{6}\\)-10\\({}^{7}\\) Jy) and variability (flux density may change by 4-5 orders of magnitude in seconds), and its variety of relevant emission mechanisms (at least three--bremsstrahlung, gyroemission, and plasma emission--occur regularly, and others may occur during bursts). The key to physical interpretation of solar radio emission is the analysis of the brightness temperature spectrum, and because of the solar variability this spectrum must be obtained over relatively short times (less than 1 s for bursts, and of order 10 min for slowly varying quiescent emission). This means that broad parts of the RF spectrum must be observed simultaneously, or else rapid frequency switching must be possible. The Sun produces only circularly polarized emission-any linear component is destroyed due to extreme Faraday rotation during passage through the corona. High precision and sensitivity in circular polarization measurements will be extremely useful in diagnostics of the magnetic field strength and direction. Through long experience with the VLA and other instruments, it has been found that only antenna spacings less than about 6 km are useful, which corresponds to a synthesized beam of 10\\({}^{\\prime\\prime}\\)/\\(\ u_{\\mathrm{GHz}}\\). This empirical finding agrees with expectations for scattering in the solar atmosphere (Bastian 1994). \\begin{table} \\begin{tabular}{c c c} \\hline \\hline Star & \\(f_{g}\\) (MHz) & \\(F_{r}\\) (mJy) \\\\ \\hline \\(\\tau\\) Bootes & 42 & 4.8 \\\\ Gliese 86 & 44 & 2.3 \\\\ HD 114762 & 202 & 0.28 \\\\ 70 Vir & 94 & 0.13 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: Four best candidates for EGP gyrocyclotron emission detection. Given the specifications of the SKA, some unique solar science can be addressed in niche areas, but only if the system takes account of the demands placed on the instrument as mentioned above. 
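The scattering-limited spacing quoted above maps directly onto a frequency-dependent beam size; the short sketch below simply restates that relation numerically. The 6 km spacing is from the text, while the list of frequencies is an arbitrary sampling chosen for illustration.

```python
# Scattering-limited solar imaging: a ~6 km maximum useful antenna spacing
# implies a synthesized beam of roughly 10"/nu_GHz, as quoted in the text.
RAD_TO_ARCSEC = 206265.0
C = 2.998e8          # speed of light [m/s]
B_USEFUL = 6.0e3     # maximum useful spacing for solar work [m]

for nu_ghz in (0.3, 1.0, 3.0, 10.0):
    beam = (C / (nu_ghz * 1e9)) / B_USEFUL * RAD_TO_ARCSEC
    print(f"{nu_ghz:5.1f} GHz: beam ~ {beam:6.1f} arcsec "
          f'(10"/nu_GHz rule: {10.0 / nu_ghz:6.1f})')
```

The few-percent difference between the two columns just reflects rounding in the 10"/\(\nu_{\mathrm{GHz}}\) rule of thumb.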
For flares, the system should be designed with an ALC/AGC time constant significantly less than 1 s, should allow for rapid insertion of attenuation, and should allow for rapid frequency switching. There will be little use for the beamforming (phased array) mode, since even very low sidelobes washing over the Sun will dominate the signal, and there is no way to predict where a small beam should be placed to catch a flare. In synthesis mode, the main advantage of SKA will be in its high sensitivity to low surface brightness variations. The following solar science could be addressed: ### Solar bursts and activity The Frequency Agile Solar Radiotelescope (FASR) will be designed to do the best possible flare-related science, and it is hard to identify unique science to be addressed by SKA in this area. However, if SKA is placed at a significantly different longitude than FASR, it can cover the Sun at other times and produce useful results. To cover the full Sun, small antennas (of order 2 m above 3 GHz, and 6 m below 3 GHz) are required. Larger antenna sizes, while restricting the field of view, can also be useful when pointed at the most flare-likely active region. ### Quiet sun magnetic fields The magnetic geometry of the low solar atmosphere governs the coupling between the chromosphere/photosphere and the corona. Hence, it plays an important role in coronal heating, solar activity, and the basic structure of the solar atmosphere. One can uniquely measure the magnetic field through bremsstrahlung emission of the chromosphere and corona, which is circularly polarized due to the temperature gradient in the solar atmosphere. At \\(\ u\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}10\\) GHz, bremsstrahlung is often swamped by gyroemission, but it dominates at higher frequencies over much of the Sun. By measuring the percent polarization \\(P\\%\\) and the local brightness temperature spectral slope \\(n=-\\partial\\log T_{b}/\\partial\\log\ u\\), one can deduce the longitudinal magnetic field \\(B_{\\ell}=(107/n\\lambda)P\\%\\), with \\(B_{\\ell}\\) in G (Gelfreikh 2004). To reach a useful range of field strengths, say 10 G, the polarization must be measured to a precision of about 0.1-0.2% (since \\(n\\) is typically between 1 and 2). Both FASR and EVLA will address this science area, but FASR's small (2 m) antennas mean that the complex solar surface will have to be imaged over the entire disk with high polarization precision, while EVLA's relatively small number of baselines will make imaging at the required precision difficult. If SKA has relatively large antennas (20 m) and high polarization precision, it will be able to add significantly to this important measurement. ### Coronal Mass Ejections Coronal Mass Ejections (CMEs) are an important type of solar activity that dominates conditions in the interplanetary median and the Sun's influence on the Earth. Understanding CME initiation and development in the low solar atmosphere is critical to efforts to understand and predict the occurrence of CMEs. It is expected that CMEs can be imaged through their bremsstrahlung emission, but such emission will be of low contrast with the background solar emission. Bastian & Gary (1997) determined that the best contrast should occur near 1 GHz. Although one of the FASR goals is to observe CMEs, the nearly filled aperture and high sensitivity of SKA to low contrast surface brightness variations can make it very sensitive to CMEs. 
In addition to following the temporal development of the CME morphology, SKA spectral diagnostics can constrain the temperature, density, and perhaps magnetic field within the CME and surrounding structures. ## Acknowledgements Comments from Jean-Luc Margot, Mike Nolan, and Steve Ostro were appreciated. ## References * [1] Bastian, T.S. 1994, ApJ, 426, 774 * [2] Bastian, T.S., & D.E. Gary 1997, JGR, 102, 14031 * [3] Bastian, T.S., G.A. Dulk, & Y. Leblanc 2000, ApJ, 545, 1058* () Bernstein, G., et al., 2004, ApJ, accepted * () Berge, G.L., & S. Gulkis 1976, In: Jupiter, ed T. Gehrels, UofA Press * () Bird, M.K., W.K. Huchtmeier, P. Gensheimer, T.L. Wilson, P. Janardhan, and C. Lemme 1997, A&A, 325, L5 * () Biver, N., et al., 2002, EM&P, 90, 323 * () Black, G.J., D.B. Campbell, L.M. Carter, & S.J. Ostro 2004, Science, 304, 553 * () Bolton, S.J., et al., 2002, Nature, 415, 987 * () Briggs, F.H., & P.D. Sackett 1989, Icarus, 80, 77 * () Brown, M.E., & C.A. Trujillo 2004, AJ, 127, 2413 * () Brown, M.E., C.A. Trujillo, & D. Rabinowitz 2004, ApJ, in press * () Burns, J.A. 1992, In: Mars, ed H.H. Kieffer, B.M. Jakosky, C.W. Snyder, & M.S. Matthews, UofA Press * () Butler, B.J., D.O. Muhleman, & M.A. Slade 1993, JGR, 98, 15003 * () Butler, B.J., A.J. Beasley, J.M. Wrobel, & P. Palmer 1997, AJ, 113, 1429 * () Butler, B.J., & P. Palmer 1997, BAAS, 29, 1040 * () Butler, B.J., & M.A. Gurwell 2001, In: Science with the Atacama Large Millimeter Array (ALMA), ed A. Wootten, ASP Conference Series, 235 * () Butler, B.J., A. Wootten, P. Palmer, D. Despois, D. Bockelee-Morvan, J. Crovisier, & D. Yeomans 2002, ACM meeting, Berlin * () Butler, B.J., A. Wootten, & B. Brown 2003, ALMA Memo 475 * () Butler, B.J., & R.J. Sault 2003, IAUSS, 1E, 17B * () Campbell, D.B., J.F. Chandler, S.J. Ostro, G.H. Pettensil, & I.I. Shapiro 1978, Icarus, 34, 254 * () Campbell, D.B., G.J. Black, L.M. Carter, & S.J. Ostro 2003, Science, 302, 431 * () Carter, L.M., D.B. Campbell, & B.A. Campbell 2004, JGR, 109, E06009 * () Cellino, A., V. Zappala, & E.F. Tedesco 2002, M&PS, 37, 1965 * () Charnley, S.B., & S.D. Rodgers 2002, ApJ, 569, L133 * () Chesley, S.R., et al., 2003, Science, 302, 1739 * () Clancy, R.T., A.W. Grossman, & D.O. Muhleman 1992, Icarus, 100, 48 * () Cooray, A. 2003, ApJ, 589, L97 * () Cornwell, T., 2004, EVLA Memo 75 * () Craddock, R.A. 1994, LPSC XXV, 293 * () Davis, D., C. Chapman, R. Greenberg, S. Weidenschilling, & A.W. Harris 1979, In: Asteroids, ed T. Gehrels, UofA Press * () de Pater, I., R.A. Brown, & J.R. Dickel 1984, Icarus, 57, 93 * () de Pater, I., & C.K. Goertz 1989, GRL, 16, 97 * () de Pater, I. 1991, AJ, 102, 795 * () de Pater, I., P.N. Romani, & S.K. Atreya 1991, Icarus, 91, 220 * () de Pater, I., P. Palmer, & L.E. Snyder 1991, In: Comets in the Post-Halley Era, ed R.L. Newburn, Jr. et al., Kluwer * () de Pater, I., & D.L. Mitchell 1993, JGR, 98, 5471 * () de Pater, I., P. Palmer, D.L. Mitchell, S.J. Ostro, & D.K. Yeomans 1994, Icarus, 111, 489 * () de Pater, I., M. Schulz, & S.H. Brecht 1997, JGR, 102, 22043 * () de Pater, I., & R.J. Sault 1998, JGR, 103, 19973 * () de Pater, I. 1999, In: Perspectives in Radio Astronomy: Science with Large Antenna Arrays, ed M.P. van Haarlem, ASTRON Press * () de Pater, I., & B.J. Butler 2003, Icarus, 163, 428 * () de Pater, I., & D.E. Dunn 2003, Icarus, 163, 449 * () de Pater, I., et al., 2003, Icarus, 163, 434 * () de Pater, I., D.R. DeBoer, M. Marley, R. Freedman, & R. Young 2004, Icarus, submitted * () DeBoer, D.R., & P.G. 
Steffes 1996, Icarus, 123, 324 * () Desch, S.J., W.J. Borucki, C.T. Russell, & A. Bar-Nun 2002, Rep. Prog. Phys., 65, 955 * () Drake, M. 2001, M&PS, 36, 501 * () Dulk, G.A., Y. Leblanc, R.J. Sault, H.P. Ladreiter, & J.E.P. Connerney 1997, A&A, 319, 282 * () Dunn, D.E., I. de Pater, & R.J. Sault 2003, Icarus, 165, 121 * () Dunn, D.E., L.A. Molnar, & J.D. Fix 2002, Icarus, 160, 132 * () Edgett, K.S., B.J. Butler, J.R. Zimbelman, & V.E. Hamilton 1997, JGR, 102, 21545 * () Farrell, W.M., M.D. Desch, & P. Zarka 1999, JGR, 104, 14025 * () Fernandez, Y. 2002, EM&P, 89, 3 * () Gelfreikh, G.B. 2004, In: Solar and Space Weather Radiophysics, eds D.E. Gary & C.O. Keller, Kluwer, in preparation * () Gibbard, S.G., E.H. Levy, J.I. Lunine, & I. de Pater 1999, Icarus, 139, 227 * () Gliese, W, & H. Jahreiss 1988, In: Star Catalogues: a Centennial Flribute to A.N. Vyssotsky, ed A.G.D. Philip & A.R. Upgren, L. Davis Press * () Graham, A.P., B.J. Butler, L. Kogan, P. Palmer, & V. Strelnitski 2000, AJ, 119, 2465 * () Grossman, A.W., D.O. Muhleman, & G.L. Berge 1989, Science, 245, 1211 * () Haldemann, A.F.C., D.O. Muhleman, B.J. Butler, & M.A. Slade 1997, Icarus, 128, 398 * () Hammel, H.B., I. de Pater, S. Gibbard, & G.W. Lockwood 2004, Icarus, in preparation * () Harmon, J.K., D.B. Campbell, S.J. Ostro, & M.C. Nolan 1999, P&SS, 47, 1409 * () Harmon, J.K., P.J. Perillat, & M.A. Slade 2001, Icarus, 149, 1 * () Hirota, T., S. Yamamoto, K. Kawaguchi, A. Sakamoto, & N. Ukita 1999, ApJ, 520, 895 * () Ho, P.T.P., & C.H. Townes 1983, ARA&A, 21, 239 * () Hofstadter, M.D., & B.J. Butler 2003, Icarus, 165, 168 * () Hofstadter, M.D., & B.J. Butler 2004, Icarus, in preparation * () Hudson, R.S. 1993, Rem. Sens. Rev., 8, 195 * () Hudson, R.S., et al., 2000, Icarus, 48, 37 * () Janssen, M.A., M.D. Hofstadter, S. Gulkis, A.P. Ingersoll, M. Allison, S.J. Bolton, & L.W. Kamp 2004, Icarus, submitted * () Jenkins, J.M., M.A. Kolodner, B.J. Butler, S.H. Suleiman, & P.G. Steffes 2002, Icarus, 158, 312 * () Jewitt, D., H. Aussel, & A. Evans 2001, Nature, 411, 446 * () Jones, D.L. 2004, SKA Memo 45* () Leblanc, Y., G.A. Dulk, R.J. Sault, & R.W. Hunstead 1997, A&A, 319, 274 * () Magri, C., G.J. Consolmagno, S.J. Ostro, L.A.M. Benner, & B.R. Beeney 2001, M&PS, 36, 1697 * () Margot, J.L., D.B. Campbell, B.A. Campbell, & B.J. Butler 1997, LPSC, 28, 871 * () Margot, J.-L., C. Trujillo, M.E. Brown, & F. Bertoldi 2002, BAAS, 34, 871 * () Margot, J.-L., M.C. Nolan, L.A.M. Benner, S.J. Ostro, R.F. Jurgens, J.D. Giorgini, M.A. Slade, & D.B. Campbell 2002, Science, 296, 1445 * () McDonald, F.B., A.W. Schardt, & J.H.T. Trainor 1980, JGR, 85, 5813 * () Mitchell, D.L., & I. de Pater 1994, Icarus, 110, 2 * () Morbidelli, A., R. Jedicke, W.F. Bottke, P. Michel, & E.F. Tedesco 2002, Icarus, 158, 329 * () Muhleman, D.O., G.L. Berge, D. Rudy, & A.E. Neill 1986, AJ, 92, 1428 * () Muhleman, D.O., A.W. Grossman, B.J. Butler, & M.A. Slade 1990, Science, 248, 975 * () Muhleman, D.O., B.J. Butler, A.W. Grossman, & M.A. Slade 1991, Science, 253, 1508 * () Muhleman, D.O., & G.L. Berge 1991, Icarus, 92, 263 * () Muhleman, D.O., A.W. Grossman, & B.J. Butler 1995, ARE&PS, 23, 337 * () Ostro, S.J., et al., 1992, JGR, 97, 18277 * () Ostro, S.J., et al., 2000, Science, 288, 836 * () Perryman M.A.C., et al., 1997, A&A, 323, L49 * () Rivkin, A. 1997, PhD Thesis, UofA * () Rivkin, A.S., R.H. Brown, D.E. Trilling, J.F. Bell III, & J.H. Plassman 2002, Icarus, 156, 64 * () Sault, R.J., T. Oosterloo, G.A. Dulk, & Y. Leblanc 1997, A&A, 324, 1190 * () Sault, R.J., C. 
Engel, & I. de Pater 2004, Icarus, 168, 336 * () Schloerb, F.P., & E. Gerard 1985, AJ, 90, 1117 * () Showman, A., & I. de Pater 2004, Icarus, submitted * () Snyder, L.E., P. Palmer, & I. de Pater 1989, AJ, 97, 246 * () Tedesco, E.F., P.V. Noah, M. Noah, & S.D. Price 2002, AJ, 123, 1056 * () van der Tak, F., I. de Pater, A. Silva, & R. Millan 1999, Icarus, 142, 125 * () Vokrouhlicky, D., D. Capek, S.R. Chesley, & S.J. Ostro 2004, Workshop on Asteroid Dynamics, Arecibo Observatory * () Zarka, P., R.A. Treumann, B.P. Ryabov, & V.B. Ryabov 2001, Ap&SS, 277, 293
Radio wavelength observations of solar system bodies reveal unique information about them, as they probe to regions inaccessible by nearly all other remote sensing techniques and wavelengths. As such, the SKA will be an important telescope for planetary science studies. With its sensitivity, spatial resolution, and spectral flexibility and resolution, it will be used extensively in planetary studies. It will make significant advances possible in studies of the deep atmospheres, magnetospheres and rings of the giant planets, atmospheres, surfaces, and subsurfaces of the terrestrial planets, and properties of small bodies, including comets, asteroids, and KBOs. Further, it will allow unique studies of the Sun. Finally, it will allow for both indirect and direct observations of extrasolar giant planets.
# Coronal Heating Versus Solar Wind Acceleration Steven R. Cranmer Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, USA; _Email: [email protected]_ To be published in the proceedings of _SOHO-15: Coronal Heating,_ 6-9 Sept. 2004, St. Andrews, Scotland, ESA SP-575 ## 1 Introduction The origin of coronal heating is intimately linked to the existence and physical cause of the acceleration of the solar wind. The early history of both \"unsolved problems\" reaches back into the 19th century (e.g., Huftbauer 1991; Parker 1999, 2001; Soon and Yaskell 2004). Parker (1958, 1963) combined existing empirical clues concerning an outflow of particles from the Sun with the earlier discovery of a hot corona to postulate his transonic flow solution. (An explicit closed-form solution to the isothermal Parker solar wind equation was derived by Cranmer 2004.) In Parker's original models, gravity was counteracted solely by the large gas pressure gradient of the million-degree corona, and wind speeds up to \\(\\sim\\)1000 km/s were possible with mean coronal temperatures of order 3-4 million K. _Mariner 2_ confirmed the existence of a continuous supersonic solar wind just a few years after Parker's initially controversial work, and also showed that the wind exists in two relatively distinct states: slow (300-500 km/s) and fast (600-800 km/s). The succeeding decades saw a more comprehensive _in situ_ exploration of the solar wind. Before the late 1970s, though, the slow-speed component of the wind was believed to be the \"quiet\" background state of the plasma; the high-speed streams were seen as occasional disturbances (see Hundhausen 1972). This view was bolstered by increasing evidence that average coronal temperatures (in open magnetic regions feeding the solar wind) probably did not exceed \\(\\sim\\)2 million K, thus making the slow wind easier to explain with Parker's basic theory. However, we know now that this this idea came from the limited perspective of spacecraft that remained in or near the ecliptic plane; it gradually became apparent that the fast wind is indeed the more \"ambient\" steady state (e.g., Feldman et al. 1976; Axford 1977). The polar passes of _Ulysses_ in the 1990s confirmed this revised paradigm (Gosling 1996; Marsden 2001). In the 1970s and 1980s, it became increasingly evident that even the most sophisticated solar wind models could not produce a _fast wind_ without the deposition of heat or momentum in some form into the corona (e.g., Holzer and Leer 1980). It is still unclear what fraction of the fast wind's acceleration comes from the gas pressure gradient (i.e., from coronal heating) and what fraction is directly added to the plasma from some other source (usually believed to be waves). This paper surveys our current understanding of the fast wind with an eye on the relative impact of coronal heating (SS 2) and external momentum deposition (SS 3). A brief review of _SOHO_ results concerning slow wind acceleration--highlighting the similarities and differences between the fast and slow wind--is given in SS 4. Conclusions and a \"wish list\" of key measurements for future missions are given in SS 5. ## 2 Fast Wind: Coronal Heating Much of the SOHO-15 Workshop was devoted to studying the so-called \"basal\" coronal heating problem; i.e., the physical origin of the heat deposited below a heliocentric distance of about 1.5 \\(R_{\\odot}\\). 
At these heights, different combinations of mechanisms (e.g., magnetic reconnection, turbulence, wave dissipation, and plasma instabilities) are probably responsible for the varied appearance of coronal holes, quiet regions, isolated loops, and active regions (Priest et al. 2000; Aschwanden et al. 2001; Cargill and Klimchuk 2004). In the open magnetic flux tubes that feed the fast solar wind, though, additional heating at distances greater than about 2 \\(R_{\\odot}\\) isbelieved to be needed (Leer et al. 1982; Parker 1991). In coronal holes, the plasma at these larger heights is almost completely collisionless. Thus, the ultimate energy dissipation mechanisms at large heights are probably _qualitatively different_ from the smallest-scale collision-dominated mechanisms (i.e., resistivity, viscosity, ion-neutral friction) that act near the base. The necessity for \"extended coronal heating\" in addition to that at the base comes from three general sets of empirical constraints (see also Cranmer 2002a). 1. As summarized above, pressure-driven models of the high-speed wind cannot be made consistent with the relatively low inferred temperatures in coronal holes (especially electron temperatures \\(T_{e}\\) less than about 1.5 \\(\\times\\) 10\\({}^{6}\\) K) without some kind of additional energy deposition. Because electron heat conduction is so much stronger than proton heat conduction, it was realized rather early that one cannot produce the observed _in situ_ property of \\(T_{p}>T_{e}\\) at 1 AU without additional heating (e.g., Hartle and Sturrock 1968). 2. Spacecraft in the interplanetary medium have measured radial gradients in proton and electron temperatures that are substantially shallower than predicted from pure adiabatic expansion, indicating gradual energy addition (e.g., Phillips et al. 1995; Richardson et al. 1995). _Helios_ measurements of radial growth of the proton magnetic moment between the orbits of Mercury and the Earth (Schwartz and Marsch 1983; Marsch 1991) point to specific collisionless processes. 3. _SOHO_ has provided more direct evidence for extended heating. UVCS (the Ultraviolet Coronagraph Spectrometer) measured extremely high heavy ion temperatures, faster bulk ion outflow compared to protons, and strong anisotropies (with \\(T_{\\perp}>T_{\\parallel}\\)) of ion velocity distributions in the extended corona (Kohl et al. 1997, 1998, 1999; Noci et al. 1997; Li et al. 1998; Cranmer et al. 1999b; Giordano et al. 2000). SUMER (Solar Ultraviolet Measurements of Emitted Radiation) has shown that preferential ion heating may begin very near the limb, in regions previously thought to be in collisional equilibrium and thus dominated by more traditional heating mechanisms (e.g., Tu et al. 1998; Peter and Vocks 2003; Moran 2003; L. Dolla, these proceedings). The list of possible physical processes responsible for extended coronal heating is limited both by the nearly collisionless nature of the plasma and by the observed temperatures (\\(T_{\\rm ion}\\gg T_{p}>T_{e}\\)). Most suggested mechanisms involve the transfer of energy from _propagating fluctuations_--such as waves, shocks, or turbulent eddies--to the particles. This broad consensus has arisen because the ultimate source of energy must be solar in origin, and thus it must somehow be transmitted out to the distances where the heating occurs (see, e.g., Hollweg 1978a; Tu and Marsch 1995). 
The _SOHO_ observations discussed above have given rise to a resurgence of interest in collisionless wave-particle resonances (typically the ion cyclotron resonance) as potentially important mechanisms for damping wave energy and preferentially energizing positive ions (e.g., McKenzie et al. 1995; Tu and Marsch 1997, 2001; Hollweg 1999a, 2000; Axford et al. 1999; Cranmer et al. 1999a; Li et al. 1999; Cranmer 2000, 2001, 2002a,b; Galinsky and Shevchenko 2000; Hollweg and Isenberg 2002; Vocks and Marsch 2002; Gary et al. 2003; Marsch et al. 2003; Voitenko and Goossens 2003, 2004; Gary and Nishimura 2004; Gary and Borovsky 2004; Markovskii and Hollweg 2004; see also E. Marsch, these proceedings). There remains some controversy over whether ion cyclotron waves generated only at the coronal base can heat the extended corona, or if a more gradual generation of these waves is needed over a range of heights. If the latter, then there is also uncertainty concerning the origin of such extended wave generation. MHD turbulence has long been proposed as a likely means of transforming fluctuation energy from low frequencies (e.g., periods of a few minutes; believed to be emitted copiously by the Sun) to the high frequencies required by cyclotron resonance theories (e.g., 10\\({}^{2}\\) to 10\\({}^{4}\\) Hz). However, both numerical simulations and analytic descriptions of turbulence indicate that the cascade from large to small length scales occurs most efficiently for modes that do not increase in frequency (for a recent survey, see Oughton et al. 2004). In the corona, the expected type of turbulent cascade would tend to most rapidly increase electron \\(T_{\\parallel}\\), not the ion \\(T_{\\perp}\\) as observed. Cranmer and van Ballegoijen (2003) discussed this issue at length and surveyed possible solutions. Much of the work cited above can be broadly summarized as \"working backwards\" from the measured plasma parameters in the extended corona to deduce the properties of the kinetic-scale fluctuations that would provide the required energy. However, in many models (especially those involving turbulence) the ultimate dissipation at small scales has its origin on much larger scales. It is therefore worthwhile to study the energy input at the largest scales as a constraint on how much deposition will eventually be channeled through the smaller scales. The remainder of this section is thus devoted to presenting an empirically constrained model of low-frequency (10\\({}^{-5}\\) to 1 Hz) Alfven wave heating in a representative open coronal-hole flux tube (Cranmer and van Ballegooijen 2004). This model follows the radial evolution of the power spectrum of non-WKB Alfven waves (i.e., waves propagating both outwards and inwards along the flux tube) and allows the turbulent energy injection rate (and thus the heating rate) to be derived as a function of height. The Alfven waves have their origin in the transverse shaking of strong-field (\\(\\sim\\)1500 G) thin flux tubes in the photosphere, and in the supergranular network these flux tubes merge with one another in the mid-chromosphere to form the bases of flux-tube \"funnels\" that expand outwards into the solar wind (e.g., Hassler et al. 1999; Peter 2001; T. Aiouaz, these proceedings). Figure 1 shows a summary of various results from this model. Figure 1a plots the adopted zero-order \"background\" plasma state (magnetic field strength, wind speed, and Alfven speed) on which the wave perturbations were placed. 
The model extends from the photosphere into the outer heliosphere (truncated at 4 AU for convenience). The magnetic field \\(B_{r}\\) was computed below 1.02 \\(R_{\\odot}\\) with a 2.5D magnetostatic model of expanding granular and supergranular flux tubes (see, e.g., Hasan et al. 2003). Above 1.02 \\(R_{\\odot}\\), the magnetic field was adopted from the solar-minimum model of Banaszkiewicz et al. (1998). The density was specified empirically from VAL/FAL model C (e.g., Fontenla et al. 1993) at low heights, and white-light polarization brightness measurements at large heights. Mass flux conservation was used to compute the outflow speed, normalized by the solar-minimum _Ulysses_ polar mass flux (for more details, see Cranmer and van Ballegooijen 2004). The bottom boundary condition on the power spectrum of transverse fluctuations came from measurements of G-band bright point motions in the photosphere (e.g., Nisenson et al. 2003). The observationally inferred power spectrum was summed from two phases of bright-point motion assumed to be statistically independent: isolated random walks and occasional rapid jumps due to flux-tube merging and fragmenting. Below the mid-chromosphere, where the bright-point flux tubes are isolated and thin, we solved a non-WKB form of the kink-mode wave equations derived by Spruit (1981, 1984). Above the mid-chromosphere, where the flux tubes have merged into a more homogeneous network \"funnel,\" we solved the wind-modified non-WKB wave transport equations of Heinemann and Olbert (1980). These wave equations were solved for each frequency in a grid spanning periods from 3 seconds to 3 days, and the full radially varying power spectrum was integrated to find the kinetic and magnetic Alfven wave amplitudes \\(\\delta V_{\\perp}\\) and \\(\\delta B_{\\perp}\\). Figure 1b shows these amplitudes for both the initial undamped model and another model with turbulent damping (see below). The various observational data points are described in the caption. We note here that the on-disk SUMER nonthermal line widths of Chae et al. (1998) are most probably not transverse Alfven waves, but their agreement with the _magnetic_ fluctuation amplitude in our model may imply some mode coupling between transverse and longitudinal waves. Figure 1c shows the departures from a simple WKB model of purely outward-propagating Alfven waves. Our model contains linear reflection that produces an inward component of the wave energy density \\(U_{+}\\) from the predominantly outward component \\(U_{-}\\) and does not always exhibit the ideal WKB equipartition between kinetic (\\(U_{K}\\)) and magnetic (\\(U_{B}\\)) fluctuations. The total fluctuation energy density is given by \\(U_{K}+U_{B}=U_{+}+U_{-}\\). Note that in Figure 1b the _in situ_ measurements fall well below the undamped wave amplitudes. This heliospheric \"deficit\" of wave power, compared to most prior assumptions about the wave power in the solar atmosphere, is well known (Roberts 1989; Mancuso & Spangler 1999). It seems clear that damping is required in order to agree Figure 1: **(a)** Steady-state plasma conditions along the modeled flux tube: wind speed (solid line), Alfvén speed (dashed line), and magnetic field strength (dotted line). Arrows at the top show the mid-chromospheric “merging height” of thin flux tubes into network funnels, the transition region, and the orbit of the Earth. **(b)** Frequency-integrated wave amplitudes (see plot for line styles). 
Observational data points from left to right: circles (Chae et al. 1998), X’s (Banerjee et al. 1998), gray region (Esser et al. 1999), stars (Armstrong and Woo 1981), struts (Canals et al. 2002), filled rectangles (Bavassano et al. 2000). **(c)** Energy density ratios defined in the plot. with the totality of the measurements, and Cranmer and van Ballegooijen (2004) showed that traditional collisional (i.e., linear viscous) Alfven wave damping is probably negligible in the fast solar wind. However, if a turbulent cascade has time to develop, the waves can be damped by small-scale kinetic/collisionless processes at a rate governed by the large-scale energy injection rate. The most likely place for this damped wave energy to go is into extended heating. In a field-free hydrodynamic fluid, turbulent eddies are isotropic and the energy injection rate follows the Kolmogorov (1941) form. This results in a volumetric heating rate (erg cm\\({}^{-3}\\) s\\({}^{-1}\\)) \\[Q_{\\rm Kolm}\\,\\approx\\,\\frac{\\rho\\,\\langle\\delta V\\rangle^{3}}{\\ell} \\tag{1}\\] where \\(\\rho\\) is the mass density, \\(\\langle\\delta V\\rangle\\) is the r.m.s. fluctuation velocity at the largest scale (called here, possibly imprecisely, the \"outer scale\") and \\(\\ell\\) is a representative outer-scale length (i.e., the size of the largest turbulent eddies). Heating rates of this general form were applied quite early in studies of solar wind heating (Coleman 1968) and have been used more-or-less continuously over the past few decades (e.g., Hollweg 1986; Li et al. 1999; Chen and Li 2004). In a magnetized low-beta plasma, the above Kolmogorov heating rate does not apply because the turbulence is not isotropic. In addition to the well-known MHD anisotropy that allows the cascade to proceed much more efficiency in directions perpendicular to the field than along the field, there is another (possibly more important) departure from isotropy: the outward-propagating Alfven waves (at outer-scale wavelengths) have a much stronger amplitude than inward-propagating waves. The outer-scale energy injection rate depends critically on the disparity between the outward and inward wave energy densities. In terms of Elsasser's (1950) variables (\\(Z_{\\pm}\\equiv\\delta V\\pm\\delta B/\\sqrt{4\\pi\\rho}\\)), where \\(Z_{-}\\) represents outward waves and \\(Z_{+}\\) represents inward waves, the energy injection rate for anisotropic MHD turbulence can be written as \\[Q\\,=\\,\\alpha\\,\\rho\\,\\frac{\\langle Z_{-}\\rangle^{2}\\langle Z_{+}\\rangle+ \\langle Z_{+}\\rangle^{2}\\langle Z_{-}\\rangle}{4\\ell_{\\perp}} \\tag{2}\\] where \\(\\alpha\\) is an order-unity calibration factor and \\(\\ell_{\\perp}\\) is a purely transverse outer-scale correlation length (see, e.g., Hossain et al. 1995; Matthaeus et al. 1999; Dmitruk et al. 2001, 2002). In Figure 2 we plot the anisotropic and equivalent Kolmogorov heating rates per unit mass (\\(Q/\\rho\\)) for outer-scale lengths that expand with the transverse width of the open flux tube (i.e., \\(\\ell_{\\perp}\\propto B_{r}^{-1/2}\\)). The lengths are normalized so that the damping consistent with the anisotropic heating rate matches the _in situ_ amplitudes in Figure 1b. (The resulting normalization yields a value of \\(\\ell_{\\perp}\\) at the chromospheric merging height of about 1100 km, which seems appropriate for motions excited between granules of the same spatial scale.) 
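To make the contrast between eqs. (1) and (2) concrete, the sketch below evaluates both rates for one set of round numbers. All of the input values (density, amplitudes, outer-scale length, and the degree of inward/outward imbalance) are assumptions chosen only to illustrate the algebra; they are not taken from the model described in the text.

```python
# Illustrative comparison of the Kolmogorov rate (eq. 1) with the
# anisotropic Elsasser form (eq. 2), in cgs units.
rho = 1.0e-18      # mass density [g cm^-3] (assumed, coronal-hole-like)
dV = 1.5e7         # rms velocity amplitude <dV> [cm s^-1] (150 km/s, assumed)
z_minus = 2.8e7    # dominant outward Elsasser amplitude <Z_-> [cm s^-1] (assumed)
z_plus = 3.0e6     # weak inward amplitude <Z_+> from reflection [cm s^-1] (assumed)
ell = 7.5e9        # outer-scale correlation length [cm] (assumed)
alpha = 1.0        # order-unity calibration factor in eq. (2)

q_kolm = rho * dV**3 / ell                                                  # eq. (1)
q_aniso = alpha * rho * (z_minus**2 * z_plus + z_plus**2 * z_minus) / (4.0 * ell)  # eq. (2)

print(f"Q_Kolmogorov  ~ {q_kolm:.2e} erg cm^-3 s^-1")
print(f"Q_anisotropic ~ {q_aniso:.2e} erg cm^-3 s^-1")
print(f"ratio         ~ {q_kolm / q_aniso:.1f}")
```

With only a modest inward component the two prescriptions already differ by a factor of several; as the imbalance grows the ratio scales roughly as \(\langle Z_{-}\rangle/\langle Z_{+}\rangle\), which is the origin of the large disparity discussed below.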
We note that this model is completely consistent only above \(r=1.1\,R_{\odot}\), where both the damping and the heating were computed together. Below this height, Cranmer and van Ballegooijen (2004) determined that the turbulence would not have time to develop fully, and thus no damping was applied. The heating rates below 1.1 \(R_{\odot}\) should be considered upper-limit estimates based on the undamped wave amplitudes.

Figure 2: **(a) Heating rates per unit mass for the fully anisotropic MHD turbulence model (solid line) and a model that assumes isotropic Kolmogorov turbulence (dashed line). (b) Comparing the same solid line from above with several sets of empirically constrained heating rates: dashed/light-gray (Wang 1994), dotted/dark-gray (Hansteen and Leer 1995), dash-dotted (Allen et al. 1998).**

Figure 2a shows the comparison between the anisotropic and Kolmogorov heating rates. The curves are substantially different from one another nearly everywhere, which indicates that the inward/outward imbalance generated by non-WKB reflection is probably a very important ingredient in Alfven wave heating models of the solar wind. The differences are small in the photosphere and low chromosphere, where strong reflection leads to nearly equal inward and outward wave power. In the extended corona, though, the Kolmogorov heating rate begins to exceed the anisotropic turbulent heating rate by _as much as a factor of 30_. The isotropic Kolmogorov form assumes the maximal amount of possible mixing between inward and outward modes, which is inconsistent with the relatively weak reflection computed for the corona in our models. Figure 2b compares the derived anisotropic heating rate with various empirically constrained heating rates--usually specified via sums of exponential functions--from a selection of 1D solar wind models. In these models the parameters of the heating functions were varied freely until sufficiently "realistic" solar wind conditions were produced. A selection of the models presented by Wang (1994) and Hansteen and Leer (1995) is shown, and the SW2 model of Allen et al. (1998) is plotted. The order-of-magnitude agreement, especially in the extended corona at \(r\approx\) 1.5-4 \(R_{\odot}\), indicates that MHD turbulence may be a dominant contributor to the extended heating in the fast wind (see also Dmitruk et al. 2002, for similar comparisons).

## 3 Fast Wind: Direct Acceleration

Just as electromagnetic waves carry momentum and exert pressure on matter, acoustic and MHD waves that propagate through an inhomogeneous medium also do work on the fluid via similar radiation stresses. This nondissipative net momentum deposition has been studied for several decades in a solar wind context and is generally called either "wave pressure" or a ponderomotive force (e.g., Bretherton and Garrett 1968; Dewar 1970; Belcher 1971; Alazraki and Couturier 1971; Jacques 1977). Initial computations of the net work done on the bulk fluid have been augmented by calculations of the acceleration imparted to individual ion species (Isenberg & Hollweg 1982; McKenzie 1994; Li et al. 1999; Laming 2004), estimates of the departures from Maxwellian velocity distributions induced by the waves (Goodrich 1978; Hollweg 1978b), and extensions to nonlinearly steepened wave trains (e.g., Koninx 1992).
For non-WKB Alfven waves propagating along a radially oriented (but potentially superradially expanding) flux tube, Heinemann and Olbert (1980) gave the general expression for the wave pressure acceleration \\(a_{\\rm wp}\\), \\[\\rho a_{\\rm wp}\\,=\\,-\\frac{\\partial U_{B}}{\\partial r}+(U_{B}-U_{K})\\,\\frac{ \\partial}{\\partial r}({\\rm ln}\\,B_{r}) \\tag{3}\\] where, as above, \\(U_{B}\\) and \\(U_{K}\\) are the magnetic and kinetic energy densities of the waves, \\[U_{B}=\\frac{\\langle\\delta B_{\\perp}\\rangle^{2}}{8\\pi}\\ \\,\\ \\ \\ \\ U_{K}=\\frac{ \\rho\\langle\\delta V_{\\perp}\\rangle^{2}}{2}\\ . \\tag{4}\\] In the ideal WKB limit (i.e., for purely outward-propagating Alfven waves), \\(U_{B}=U_{K}\\) and only the first term on the right-hand side is present. The above expression also assumes an isotropic pressure (i.e., \\(T_{\\parallel}=T_{\\perp}\\) for the electrons and protons), but for a low-beta plasma, modest departures from gas-pressure isotropy do not substantially alter the wave pressure. Cranmer and van Ballegooijen (2004) provide plots of \\(a_{\\rm wp}\\) versus height for the coronal hole flux tube model discussed in SS 2. We summarize those results briefly by mentioning that the weak degree of reflection in the extended corona (leading to \\(U_{B}\\approx U_{K}\\) above about 1.05 \\(R_{\\odot}\\)) validates the use of the simplified WKB form of the wave-pressure acceleration in most solar wind models. Rather than simply present plots of \\(a_{\\rm wp}(r)\\), here we examine the impact of the \"known\" wave properties on the acceleration region of the solar wind. There are two semi-empirical ways of using the above-described values for \\(\\langle\\delta V_{\\perp}\\rangle\\) and \\(a_{\\rm wp}\\) to put constraints on the temperature of the extended corona. Figure 3 shows coronal temperatures derived from the following two methods: 1. UVCS measurements of the widths of the H I Ly\\(\\alpha\\) resonance line are useful for their sampling of the motions of hydrogen atoms along the line of sight. For the first few solar radii above the surface, efficient charge exchange processes keep the proton and neutral hydrogen temperatures coupled to one another. For off-limb observations of coronal holes, the line of sight samples mainly directions perpendicular to the nearly radial field lines, and the \\(1/e\\) line width \\(V_{1/e}\\) arises from two primary types of motion: \\[V_{1/e}^{2}\\,=\\,\\frac{2k_{\\rm B}T_{p\\perp}}{m_{p}}+\\langle\\delta V_{\\perp} \\rangle^{2}\\] (5) where \\(k_{\\rm B}\\) is Boltzmann's constant and \\(m_{p}\\) is the mass of a proton. The two terms on the right-side represent random \"thermal\" motions and unresolved transverse wave motions. Using observed values of \\(V_{1/e}\\) and the modeled values of \\(\\delta V_{\\perp}\\), we can solve the above equation for \\(T_{p\\perp}\\). Note that the Cranmer et al. (1999b) data points in Figure 3a were derived from \\(V_{1/e}\\) values that have subtracted out the projected component of the outflow speed along the line of sight; the other values are straightforward line widths. 2. If the steady-state density and outflow speed are known in conjunction with the wave-pressure acceleration, the solar wind momentum conservation equation can be solved empirically for the gas pressure term: \\[\\frac{\ abla P}{\\rho}\\,=\\,a_{\\rm wp}-\\frac{GM_{\\odot}}{r^{2}}-u\\frac{du}{dr}\\] (6) (see also Sittler and Guhathakurta 1999, 2002, for similar work). 
To obtain the pressure \\(P\\) as a function of radius, we integrated \\(\ abla P\\) inwards from 1 AU assuming a wide range of possible temperatures at the outer boundary. The resulting coronal \\(P(r)\\) was quite insensitive to the boundary conditions, however, because the gas pressure is so much larger in the corona than at 1 AU. An averaged proton-electron temperature \\(T_{\\rm avg}\\) was derived assuming a fully ionized hydrogen-helium plasma: \\[P\\,=\\,n_{p}k_{\\rm B}T_{\\rm avg}\\left[2+\\frac{n_{\\alpha}}{n_{p}}\\left(2+\\frac{T_ {\\alpha}}{T_{p}}\\right)\\right]\\] (7) where we assumed \\(n_{\\alpha}/n_{p}=\\) 0.05 and we used two extreme values for the alpha-to-proton temperature ratio: 1 and 4. Figure 3a displays the results of both kinds of semi-empirical temperature determination discussed above. The UVCS H I Ly\\(\\alpha\\) line width observations (all in solar-minimum polar coronal holes) exhibit a moderate spread that probably can be attributed to different line-of-sight contributions from polar plumes and low count-rate Poisson statistics (as can be seen from the 1\\(\\sigma\\) error bars plotted for the data points of Cranmer et al. 1999b). The overall agreement between both methods of determining the temperature is an adequate consistency check, but note that the UVCS-derived values are specifically _proton_ temperatures, while the momentum-conservation values are essentially (\\(T_{p}+T_{e}\\))/2. There is evidence from SUMER and CDS observations below \\(r\\approx 1.5\\,R_{\\odot}\\) that \\(T_{e}\\) is substantially less than 1 MK, and if this trend continues above 1.5 \\(R_{\\odot}\\), it would imply that the momentum-conservation values of \\(T_{p}\\) must be _larger_ than plotted. Thus, the rough agreement between the two methods in Figure 3a may imply that \\(T_{p}\\approx T_{e}\\) remains the case several solar radii out into the extended corona, in contrast with earlier conclusions that \\(T_{p}>T_{e}\\). Figure 3b shows the fraction of the total outward acceleration (gas pressure + wave pressure) that comes from wave pressure. This plot quantitatively answers the question that was implicitly posed in the title of this paper; i.e., how do coronal heating and direct acceleration \"compete\" in the fast solar wind? The gas pressure term is decidedly stronger in the first several solar radii (i.e., the primary fast-wind acceleration region), but wave pressure soon reaches a point where it provides roughly half of the acceleration. All of the above discussion of wave-pressure acceleration was focused solely on _Alfven waves,_ but it is not yet clear that these are the only MHD wave modes to exist in the extended corona and solar wind. There is some evidence for both fast-mode and slow-mode magnetosonic waves in the corona, but they have been observed mainly in relatively confined regions such as loops and plumes (Ofman et al. 1999; Nakariakov et al. 2004). Fast and slow modes are believed to be more strongly attenuated by collisional damping processes than Alfven waves before they reach the corona (e.g., Osterbrock 1961; Whang 1997). However, fast-mode waves that propagate _parallel_ to the magnetic field behave essentially the same as Alfven waves (putting aside their kinetic-scale polarization and their preferred cascade directions in \\(k\\)-space), so they may exist at some low level in the corona. It is worthwhile, at least in a preliminary sense, to compare the wave-pressure accelerations expected from Alfven waves and from fast-mode waves. 
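Before turning to that comparison, the two semi-empirical temperature diagnostics described in items 1 and 2 above can be reduced to a few lines of arithmetic. In the sketch below the input values (line width, wave amplitude, pressure, and density) are illustrative assumptions and are not read off the paper's figures.

```python
# The two temperature diagnostics above (eqs. 5 and 7), in cgs units.
K_B = 1.381e-16    # Boltzmann constant [erg/K]
M_P = 1.673e-24    # proton mass [g]

def t_perp_from_linewidth(v_1e, dv_perp):
    """Eq. (5): perpendicular proton temperature from the H I Ly-alpha 1/e
    line width after removing unresolved transverse wave motions [cm/s]."""
    return M_P * (v_1e**2 - dv_perp**2) / (2.0 * K_B)

def t_avg_from_pressure(p, n_p, alpha_frac=0.05, t_ratio=1.0):
    """Eq. (7): mean proton-electron temperature from the gas pressure, for a
    fully ionized H/He plasma with n_alpha/n_p = alpha_frac and
    T_alpha/T_p = t_ratio."""
    return p / (n_p * K_B * (2.0 + alpha_frac * (2.0 + t_ratio)))

# Assumed example values: V_1/e = 230 km/s, dV_perp = 150 km/s,
# and P = 3e-3 dyn cm^-2 at n_p = 1e7 cm^-3.
print(f"T_p,perp ~ {t_perp_from_linewidth(2.3e7, 1.5e7):.2e} K")
print(f"T_avg    ~ {t_avg_from_pressure(3.0e-3, 1.0e7, t_ratio=4.0):.2e} K")
```

Subtracting the unresolved wave contribution in quadrature is what keeps the line-width method from overestimating \(T_{p\perp}\).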
For outward-propagating fast-mode waves with an isotropic distribution of wavevectors, Jacques (1977) derived the especially simple expression \[\rho a_{\rm wp}\,=\,\frac{1}{3}\frac{\partial U_{B}}{\partial r}+\frac{4U_{B}}{r} \tag{8}\] in the limit of zero plasma beta and \(U_{K}=U_{B}\) (note also the opposite sign of the derivative term compared to eq. [3]). An isotropic distribution of wavevectors is likely for fast-mode waves undergoing a turbulent cascade (e.g., Cho and Lazarian 2003). If we assume that Alfven and fast-mode waves have identical amplitudes at \(r=2\,R_{\odot}\), and that they both follow their own linear wave-action conservation equations at heights above and below \(2\;R_{\odot}\), we can compare their respective values of \(a_{\rm wp}\) as a function of height. Figure 4 shows this comparison, and above \(r\approx 3\,R_{\odot}\) the fast-mode acceleration is stronger than that of Alfven waves. This is only an approximate and suggestive result, but it seems to imply that a renewed study of fast-mode waves in the solar wind is warranted (see also Habbal and Leer 1982; Wentzel 1989; Kaghashvili and Esser 2000).

Figure 3: **(a)** _Derived coronal temperatures from UVCS H I Ly\(\alpha\) line widths (symbols), and from empirical momentum conservation. For the latter, the two limiting values for the alpha-to-proton temperature ratio \(T_{\alpha}/T_{p}\) are 1 (solid line) and 4 (dashed line)._ **(b)** _Fraction of total fast-wind acceleration from wave pressure, i.e., \(\rho a_{\rm wp}/(\rho a_{\rm wp}+|\nabla P|)\)._

## 4 Slow Wind: Similarities and Differences

The slow-speed component of the solar wind is believed to originate mainly from the bright helmet streamers seen in coronagraph images. However, since these structures are thought to be mainly closed magnetic loops or arcades, it is uncertain how the plasma expands into a roughly time-steady flow. Does the slow wind flow mainly along the open-field edges of these closed regions, or do the closed fields occasionally open up and release plasma into the heliosphere? _SOHO_ has provided evidence that both processes occur, but an exact census or mass budget of slow-wind source regions has not yet been constructed. (This is a necessary prerequisite for studying slow-wind "heating versus acceleration.")

UVCS has shown, at least for the large quiescent equatorial band at solar minimum, that streamers appear differently in the emission of H I Ly\(\alpha\) and O VI 1032 Å. The Ly\(\alpha\) intensity pattern is similar to that seen in LASCO visible-light images; i.e., the streamer is brightest along its central axis. In O VI, though, there is a darkening in the core whose only interpretation can be a substantial abundance depletion. The solar-minimum equatorial streamers showed an oxygen abundance of 0.3 times the photospheric value along the streamer edges, or "legs," and between 0.01 and 0.1 times the photospheric value in the core (Raymond et al. 1997; Vasquez and Raymond 2004). Low FIP (first ionization potential) elements such as Si and Fe were enhanced by a relative factor of 3 in both cases (Raymond 1999; see also Uzzo et al. 2004). Abundances observed in the legs are consistent with abundances measured in the slow wind _in situ_. This is a strong indication that the majority of the slow wind originates along the open-field edges of streamers.
The extremely low abundances in the streamer core, on the other hand, are evidence for gravitational settling of the heavy elements in long-lived closed regions, a result that was confirmed by SUMER (Feldman et al. 1998, 1999). UVCS measurements have also been used to derive the wind outflow speeds in streamers. Strachan et al. (2002) found zero flow speed at various locations inside in the closed-field core region of an equatorial streamer. Outflow speeds consistent with the slow solar wind were only found along the higher-latitude edges and above the probable location of the magnetic \"cusp\" between about 3.6 and 4.1 \\(R_{\\odot}\\). Frazin et al. (2003) used UVCS to determine that O\\({}^{5+}\\) ions in the legs of a similar streamer have significantly higher kinetic temperatures than hydrogen and exhibit anisotropic velocity distributions with \\(T_{\\perp}>T_{\\parallel}\\), much like coronal holes (see also Parenti et al. 2000; L. Strachan, these proceedings). However, the oxygen ions in the closed-field core exhibit neither this preferential heating nor the temperature anisotropy. The analysis of UVCS data has thus led to evidence that the fast and slow wind share at least some of the same physical processes. Evidence for another kind of slow wind in streamers came from visible-light coronagraph movies. The increased photon sensitivity of LASCO over earlier instruments revealed an almost continual release of low-contrast density inhomogeneities, or \"blobs,\" from the cusps of streamers (Sheeley et al. 1997; see also Tappin et al. 1999). These features are seen to accelerate to speeds of order 300-400 km/s by the time they reach \\(\\sim\\)30 \\(R_{\\odot}\\). Wang et al. (2000) reviewed three proposed scenarios for the production of these blobs: (1) \"streamer evaporation\" as the loop-tops are heated to the point where magnetic tension is overcome by high gas pressure; (2) plasmoid formation as the distended streamer cusp pinches off the gas above an X-type neutral point; and (3) reconnection between one leg of the streamer and an adjacent open field line, transferring some of the trapped plasma from the former to the latter and allowing it to escape. Wang et al. (2000) concluded that all three mechanisms might be acting simultaneously, but the third one seems to be dominant. Because of their low contrast, though (i.e., only about 10% brighter than the rest of the streamer), the blobs themselves cannot comprise a large fraction of the mass flux of the slow solar wind. This is in general agreement with the above abundance results from UVCS. Despite these new observational clues, the overall energy budget in coronal streamers is still not well understood, nor is their temporal MHD stability. Recent models run the gamut from simple, but insightful, analytic studies (Suess and Nerney 2002) to time-dependent multidimensional simulations (e.g., Wiegelmann et al. 2000; Lionello et al. 2001; Ofman 2004). Notably, a two-fluid study by Endeve et al. (2004) showed that the stability of streamers may be closely related to the kinetic partitioning of heat to protons versus electrons. When the bulk of the heating goes to the protons, the modeled streamers become unstable to the ejection of massive plasmoids; when the electrons are heated more strongly, the streamers are stable. It is possible that the observed (small) mass fraction of LASCO blobs can give us an observational \"calibration\" of the relative amounts of heat deposited in the proton and electron populations. 
Figure 4: Comparison of ideal WKB wave-pressure acceleration for Alfvén waves (solid line) and an isotropic distribution of \\(\\beta=0\\) fast-mode waves (dashed line). Wave amplitudes were set equal to one another at \\(r=2\\,R_{\\odot}\\).

## 5 Conclusions and Future Missions

Our understanding of the dominant physics of solar wind acceleration has progressed rapidly in the _SOHO_ era. Unfortunately, the multi-scale _complexity_ of the plasma in the extended corona has also been progressively revealed during this same time period. The solar physics community has benefited from increased interaction with the space physics community, the latter having decades more experience grappling with kinetic-scale plasma physics and MHD turbulence. It has been 5 years since Hollweg (1999b) asserted that the \"Holy Grail\" for theoreticians is the self-consistent modeling of both the full wavenumber spectrum of MHD fluctuations and the spatial dependence of proton, electron, and ion velocity distributions. Much of the recent work cited in this paper, both observational and theoretical, is helping the community get closer to this goal. The remainder of this section highlights several areas where future space missions (and future ground-based observatories such as ATST) can provide key constraints that refine and test theoretical explanations for solar wind acceleration. The plasma parameters of both the major species (protons, electrons, and He\\({}^{2+}\\)) and minor ions are not yet known in the wind's acceleration region with sufficient accuracy. Figure 3a highlights the level of our uncertainty about \\(T_{p}\\) and \\(T_{e}\\) in coronal holes. Progress in identifying some of the most basic aspects of extended heating can be made only by constraining these basic parameters more tightly. In addition, only by better \"filling out\" our knowledge of minor ion properties (as a function of ion charge and mass) can we hope to uniquely identify the ultimate kinetic damping mechanisms of waves and/or turbulence (see Cranmer 2001, 2002b). _Spectroscopy is key_--especially in combination with coronagraph occultation--in order to measure line profiles out into the wind's acceleration region. The full power spectrum of fluctuations (as a function of distance, wavenumber \\(k_{\\parallel}\\) and \\(k_{\\perp}\\), and solar wind type) is a strong driver of solar wind physics, but we still have only indirect constraints on its properties in the corona. The assimilation of multiple data sources, including radio sounding, is crucial (e.g., Spangler 2002, 2003). All previous _in situ_ missions that measured wave power spectra in the solar wind have been \"contaminated\" by the solar rotation, which sweeps new, uncorrelated flux tubes past the spacecraft on time scales of tens of minutes. Cranmer and van Ballegooijen (2004) predicted that much of the measured power with periods longer than about 30 minutes may be due to this effect, and that a spacecraft that could sample the fluctuations in a single flux tube would see intrinsically higher-frequency \"fossil\" fluctuations from the Sun. Solar _co-rotation_ of _in situ_ missions (such as Solar Orbiter) may be key, even if the co-rotation is not exact or long-lived. The origin of waves in jostled photospheric flux-tube motions needs to be pinned down to a much better degree than at present, in order to put firmer empirical constraints on the \"lower boundary condition\" of mechanical energy input into the corona.
Synergy between 3D convection simulations and high-resolution observations is becoming more common (e.g., Sanchez Almeida et al. 2003). Although space missions may one day boost collecting areas rivaling those of ground-based telescopes, in the near future it is the latter that will push the envelope to provide the necessary constraints. Existing sub-arcsecond spatial resolution needs to be matched by sub-second time resolution, so that the kinetic energy power spectra of small-scale flux tubes (e.g., G-band bright point motions) can be measured more accurately. ## Acknowledgments This work is supported by the National Aeronautics and Space Administration (NASA) under grants NAG5-11913, NAG5-10996, NNG04GE77G, and NNG04G-E84G to the Smithsonian Astrophysical Observatory, by Agenzia Spaziale Italiana, and by the Swiss contribution to ESA's PRODEX program. ## References * [1] Alazraki, G., Couturier, P., 1971, A&A, 13, 380 * [2] Allen, L. A., Habbal, S. R., Hu, Y. Q., 1998, JGR, 103, 6551 * [3] Antonucci, E., Dodero, M. A., Giordano, S., 2000, Solar Phys., 197, 115 * [4] Armstrong, J. W., Woo. R., 1981, A&A, 103, 415 * [5] Aschwanden, M. J., Poland, A. I., Rabin, D. M., 2001, Ann. Rev. Astron. Astrophys., 39, 175 * [6] Axford, W. I., 1977, in _Study of Travelling Interplanetary Phenomena_, ed. M. A. Shea, D. F. Smart, S. T. Wu (Reidel), 145 * [7] Axford, W. I., McKenzie, J. F., Sukhorukova, G. V., Banaszkiewicz, M., Czechowski, A., Ratkiewicz, R., 1999, Space Sci. Rev., 87, 25 * [8] Banaszkiewicz, M., Axford, W. I., McKenzie, J. F., 1998, A&A, 337, 940 * [9] Banerjee, D., Teriaca, L., Doyle, J. G., Wilhelm, K., 1998, A&A, 339, 208 * [10] Bavassano, B., Pietropaolo, E., Bruno, R., 2000, JGR, 105, 15959 * [11] Belcher, J. W., 1971, ApJ, 168, 509 * [12] Bretherton, F. P., Garrett, C. J. R., 1968, Proc. Roy. Soc. A, 302, 529 * [13] Canals, A., Breen, A. R., Ofman, L., Moran, P. J., Fallows, R. A., 2002, Ann. Geophys., 20, 1265 * [14] Cargill, P. J., Klimchuk, J. A., 2004, ApJ, 605, 911 * [15] Chae, J., Schuhle, U., Lemaire, P., 1998, ApJ, 505, 957 * [16] Chen, Y., Li, X., 2004, ApJ, 609, L41 * [17] Cho, J., Lazarian, A., 2003, MNRAS, 345, 325 * [18] Coleman, P. J., Jr., 1968, ApJ, 153, 371Cranmer, S. R., 2000, ApJ, 532, 1197 * () Cranmer, S. R., 2001, JGR, 106, 24937 * () Cranmer, S. R., 2002a, Space Sci. Rev., 101, 229 * () Cranmer, S. R., 2002b, in _SOHO-11: From Solar Minimum to Solar Maximum,_ ESA SP-508, 361 (arXiv astro-ph/0209301) * () Cranmer, S. R., 2004, American J. Phys., 72, 1397 (arXiv astro-ph/0406176) * () Cranmer, S. R., Field, G. B., Kohl, J. L., 1999a, ApJ, 518, 937 * () Cranmer, S. R., Kohl, J. L., Noci, G., et al., 1999b, ApJ, 511, 481 * () Cranmer, S. R., van Ballegooijen, A. A., 2003, ApJ, 594, 573 * () Cranmer, S. R., van Ballegooijen, A. A., 2004, ApJ Suppl., submitted * () Dewar, R. L., 1970, Phys. Fluids, 13, 2710 * () Dmitruk, P., Matthaeus, W. H., Milano, L. J., Oughton, S., Zank, G. P., Mullan, D. J., 2002, ApJ, 575, 571 * () Dmitruk, P., Milano, L. J., Matthaeus, W. H., 2001, ApJ, 548, 482 * () Elsasser, W. M., 1950, Phys. Rev., 79, 183 * () Endeve, E., Holzer, T. E., Leer, E., 2004, ApJ, 603, 307 * () Esser, R., Fineschi, S., Dobrzycka, D., Habbal, S. R., Edgar, R. J., Raymond, J. C., Kohl, J. L., Guhathakurta, M., 1999, ApJ, 510, L63 * () Feldman, U., Doschek, G. A., Schuhle, U., Wilhelm, K., 1999, ApJ, 518, 500 * () Feldman, U., Schuhle, U., Widing, K. G., Laming, J. M., 1998, ApJ, 505, 999 * () Feldman, W. C., Asbridge, J. R., Bame, S. J., Gosling, J. 
T., 1976, JGR, 81, 5054 * () Feldman, W. C., Marsch, E., 1997, in _Cosmic Winds and the Heliosphere_, ed. J. R. Jokipii, C. P. Sonett, M. S. Giampapa (Univ. Arizona Press), 617 * () Fontenla, J. M., Avrett, E. H., Loeser, R., 1993, ApJ, 406, 319 * () Frazin, R. A., Cranmer, S. R., Kohl, J. L., 2003, ApJ, 597, 1145 * () Galinsky, V. L., Shevchenko, V. I., 2000, Phys. Rev. Lett., 85, 90 * () Gary, S. P., Borovsky, J. E., 2004, JGR, 109 (A6), A06105, 10.1029/2004JA010399 * () Gary, S. P., Nishimura, K., 2004, JGR, 109 (A2), A02109, 10.1029/2003JA010239 * () Gary, S. P., Yin, L., Winske, D., Ofman, L., Goldstein, B. E., Neugebauer, M., 2003, JGR, 108 (A2), 1068, 10.1029/2002JA009654 * () Giordano, S., Antonucci, E., Noci, G., Romoli, M., Kohl, J. L., 2000, ApJ, 531, L79 * () Goodrich, C. C., 1978, Ph.D. Dissertation, Massachusetts Institute of Technology * () Gosling, J. T., 1996, Ann. Rev. Astron. Astrophys., 34, 35 * () Habbal, S. R., Leer, E., 1982, ApJ, 253, 318 * () Hansteen, V. H., Leer, E., 1995, JGR, 100, 21577 * () Hansteen, V. H., Leer, E., Holzer, T. E., 1997, ApJ, 482, 498 * () Hartle, R. E., Sturrock, P. A., 1968, ApJ, 151, 1155 * () Hasan, S. S., Kalkofen, W., van Ballegooijen, A. A., Ulmschneider, P., 2003, ApJ, 585, 1138 * () Hassler, D. M., Dammasch, I. E., Lemaire, P., Brekke, P., Curdt, W., Mason, H. E., Vial, J.-C., Wilhelm, K., 1999, Science, 283, 810 * () Heinemann, M., Olbert, S., 1980, JGR, 85, 1311 * () Hollweg, J. V., 1978a, Rev. Geophys. Space Phys., 16, 689 * () Hollweg, J. V., 1978b, JGR, 83, 563 * () Hollweg, J. V., 1986, JGR, 91, 4111 * () Hollweg, J. V., 1999a, JGR, 104, 505 * () Hollweg, J. V., 1999b, JGR, 104, 24781 * () Hollweg, J. V., 2000, JGR, 105, 15699 * () Hollweg, J. V., Isenberg, P. A., 2002, JGR, 107 (A7), 1147, 10.1029/2001JA000270 * () Holzer, T. E., Leer, E., 1980, JGR, 85, 4665 * () Hossain, M., Gray, P. C., Pontius, D. H., Jr., Matthaeus, W. H., Oughton, S., 1995, Phys. Fluids, 7, 2886 * () Hufbauer, K., 1991, _Exploring the Sun: Solar Science since Galileo_ (Johns Hopkins Univ. Press) * () Hundhausen, A. J., 1972, _Coronal Expansion and Solar Wind_ (Springer-Verlag) * () Isenberg, P. A., Hollweg, J. V., 1982, JGR, 87, 5023 * () Jacques, S. A., 1977, ApJ, 215, 942 * () Kaghashvili, E. K., Esser, R., 2000, ApJ, 539, 463 * () Kohl, J. L., Esser, R., Cranmer, S. R., et al., 1999, ApJ, 510, L59 * () Kohl, J. L., Noci, G., Antonucci, E., et al., 1997, Solar Phys., 175, 613 * () Kohl, J. L., Noci, G., Antonucci, E., et al., 1998, ApJ, 501, L127 * () Kolmogorov, A. N., 1941, Dokl. Akad. Nauk SSSR, 30, 301 * () Koninx, J. P. M., 1992, Ph.D. Dissertation, Rijksuniversiteit Utrecht * () Laming, J. M., 2004, ApJ, in press (arXiv astro-ph/0405230) * () Leer, E., Holzer, T. E., 1980, JGR, 85, 4631 * () Leer, E., Holzer, T. E., Fla, T., 1982, Space Sci. Rev., 33, 161 * () Li, X., Habbal, S. R., Hollweg, J. V., Esser, R., 1999, JGR, 104, 2521 * () Li, X., Habbal, S. R., Kohl, J. L., Noci, G., 1998, ApJ, 501, L133* (2001) Lionello, R., Linker, J. A., Mikic, Z., 2001, ApJ, 546, 542 * (1999) Mancuso, S., Spangler, S. R., 1999, ApJ, 525, 195 * (2004) Markovskii, S. A., Hollweg, J. V., 2004, ApJ, 609, 1112 * (1991) Marsch, E., 1991, in _Physics of the Inner Heliosphere,_ vol. 2, ed. R. Schwenn, E. Marsch (Springer-Verlag), 45 * (2003) Marsch, E., Axford, W. I., McKenzie, J. F., 2003, in _Dynamic Sun,_ ed. B. N. Dwivedi (Cambridge Univ. Press), 374 * (2001) Marsden, R. G., 2001, Astrophys. Space Sci., 277, 337 * (1999) Matthaeus, W. H., Zank, G. 
P., Oughton, S., Mullan, D. J., Dmitruk, P., 1999, ApJ, 523, L93 * (1994) McKenzie, J. F., 1994, JGR, 99, 4193 * (1995) McKenzie, J. F., Banaszkiewicz, M., Axford, W. I., 1995, A&A, 303, L45 * (2003) Moran, T. G., 2003, ApJ, 598, 657 * (2004) Nakariakov, V. M., Arber, T. D., Ault, C. E., Katsiyannis, A. C., Williams, D. R., Keenan, F. P., 2004, MNRAS, 349, 705 * (2003) Nisenson, P., van Ballegooijen, A. A., de Wijn, A. G., Sutterlin, P., 2003, ApJ, 587, 458 * (1997) Noci, G., et al., 1997, Adv. Space Res., 20 (12), 2219 * (2004) Ofman, L., 2004, Adv. Space Res., 33 (5), 681 * (1999) Ofman, L., Nakariakov, V. M., DeForest, C. E., 1999, ApJ, 514, 441 * (1961) Osterbrock, D. E., 1961, ApJ, 134, 347 * (2004) Oughton, S., Dmitruk, P., Matthaeus, W. H., 2004, Phys. Plasmas, 11, 2214 * (2000) Parenti, S., Bromage, B. J. I., Poletto, G., Noci, G., Raymond, J. C., Bromage, G. E., 2000, A&A, 363, 800 * (1958) Parker, E. N., 1958, ApJ, 128, 664 * (1963) Parker, E. N., 1963, _Interplanetary Dynamical Processes_ (Interscience) * (1991) Parker, E. N., 1991, ApJ, 372, 719 * (1999) Parker, E. N., 1999, in _Solar Wind Nine,_ ed. S. Habbal, R. Esser, J. Hollweg, and P. Isenberg, AIP Conf. Proc. 471, 3 * (2001) Parker, E. N., 2001, JGR, 106, 15797 * (2001) Peter, H., 2001, A&A, 374, 1108 * (2003) Peter, H., Vocks, C., 2003, A&A, 411, L481 * (1995) Phillips, J. L., Feldman, W. C., Gosling, J. T., Scime, E. E., 1995, Adv. Space Res., 16 (9), 95 * (2000) Priest, E. R., Foley, C. R., Heyvaerts, J., Arber, T. D., Mackay, D., Culhane, J. L., Acton, L. W., 2000, ApJ, 539, 1002 * (1999) Raymond, J. C., 1999, Space Sci. Rev., 87, 55 * (1997) Raymond, J. C., Kohl, J. L., Noci, G., et al., 1997, Solar Phys., 175, 645 * (1995) Richardson, J. D., Paularena, K. I., Lazarus, A. J., Belcher, J. W., 1995, GRL, 22, 325 * (1989) Roberts, D. A., 1989, JGR, 94, 6899 * (2003) Sanchez Almeida, J., Emonet, T., Cattaneo, F., 2003, ApJ, 585, 536 * (1983) Schwartz, S. J., Marsch, E., 1983, JGR, 88, 9919 * (1997) Sheeley, N. R., Jr., Wang, Y.-M., Hawley, S. H., et al., 1997, ApJ, 484, 472 * (1999) Sittler, E. C., Jr., Guhathakurta, M., 1999, ApJ, 523, 812 * (2002) Sittler, E. C., Jr., Guhathakurta, M., 2002, ApJ, 564, 1062 * (2004) Soon, W. W.-H., Yaskell, S. H., 2004, _The Maunder Minimum and the Variable Sun-Earth Connection_ (World Scientific) * (2002) Spangler, S. R., 2002, ApJ, 576, 997 * (2003) Spangler, S. R., 2003, Nonlin. Proc. Geophys., 10, 179 * (1981) Spruit, H. C., 1981, A&A, 98, 155 * (1984) Spruit, H. C., 1984, in _Small-scale Dynamical Processes in Quiet Stellar Atmospheres,_ ed. S. L. Keil (NSO), 249 * (2000) Strachan, L., Panasyuk, A. V., Dobrzycka, D., Kohl, J. L., Noci, G., Gibson, S. E., Biesecker, D. A., 2000, JGR, 105, 2345 * (2002) Strachan, L., Suleiman, R., Panasyuk, A. V., Biesecker, D. A., Kohl, J. L., 2002, ApJ, 571, 1008 * (2002) Suess, S.T., Nerney, S. F., 2002, ApJ, 565, 1275 * (1999) Tappin, S. J., Simnett, G. M., Lyons, M. A., 1999, A&A, 350, 302 * (1995) Tu, C.-Y., Marsch, E., 1995, Space Sci. Rev., 73, 1 * (1997) Tu, C.-Y., Marsch, E., 1997, Solar Phys., 171, 363 * (2001) Tu, C.-Y., Marsch, E., 2001, JGR, 106, 8233 * (1998) Tu, C.-Y., Marsch, E., Wilhelm, K., Curdt, W., 1998, ApJ, 503, 475 * (2004) Uzzo, M., Ko, Y.-K., Raymond, J. C., 2004, ApJ, 603, 760 * (2004) Vasquez, A. M., Raymond, J. C., 2004, ApJ, submitted * (2002) Vocks, C., Marsch, E., 2002, ApJ, 568, 1030 * (2003) Voitenko, Y., Goossens, M., 2003, Space Sci. 
Rev., 107, 387 * (2004) Voitenko, Y., Goossens, M., 2004, ApJ, 605, L149 * (1994) Wang, Y.-M., 1994, ApJ, 435, L153 * (2000) Wang, Y.-M., Sheeley, N. R., Jr., Socker, D. G., Howard, R. A., Rich, N. B., 2000, JGR, 105, 25133 * (1989) Wentzel, D. G., 1989, ApJ, 336, 1073 * (1997) Whang, Y. C., 1997, ApJ, 485, 389 * (2000) Wiegelmann, T., Schindler, K., Neukirch, T., 2000, Solar Phys., 191, 391 * (1999) Zangrilli, L., Nicolosi, P., Poletto, G., Noci, G., Romoli, M., Kohl, J. L., 1999, A&A, 342, 592
Parker's initial insights from 1958 provided a key causal link between the heating of the solar corona and the acceleration of the solar wind. However, we still do not know what fraction of the solar wind's mass, momentum, and energy flux is driven by Parker-type gas pressure gradients, and what fraction is driven by, e.g., wave-particle interactions or turbulence. _SOHO_ has been pivotal in bringing these ideas back to the forefront of coronal and solar wind research. This paper reviews our current understanding of coronal heating in the context of the acceleration of the fast and slow solar wind. For the fast solar wind, a recent model of Alfvén wave generation, propagation, and non-WKB reflection is presented and compared with UVCS, SUMER, radio, and _in situ_ observations at the last solar minimum. The derived fractions of energy and momentum addition from thermal and non-thermal processes are found to be consistent with various sets of observational data. For the more chaotic slow solar wind, the relative roles of steady streamer-edge flows (as emphasized by UVCS abundance analysis) versus bright blob structures (seen by LASCO) need to be understood before the relation between streamer heating and slow-wind acceleration can be known with certainty. Finally, this presentation summarizes the need for next-generation remote-sensing observations that can supply the tight constraints needed to unambiguously characterize the dominant physics.

_Keywords:_ coronal heating; MHD waves; solar corona; solar wind; plasma physics; turbulence; UV spectroscopy.
# Introduction Medium effects on the vector meson masses are much interested in the hadron and nuclear physics. [1] Because of its short life time, the reduction of the \\(\\rho\\)-meson mass is expected to be a signal of the hot and dense matter which may be produced in the high-energy heavy ion collisions. [2] On the other hand, the \\(\\omega\\)-meson is important for the nuclear structure. It is reported that the reduction of the effective vector meson mass makes the nuclear matter EOS stiffer. [3] In this paper, we discuss the relation among the effective vector meson masses, the effective vector meson-nucleon coupling and the nuclear EOS, using the generalized mean field theory. [4,5] We show that, if we assume that the \\(\\omega\\) (\\(\\rho\\))-meson mean field is proportional to the baryon (isovector) density, the effective \\(\\omega\\) (\\(\\rho\\))-nucleon coupling also becomes smaller as the effective \\(\\omega\\) (\\(\\rho\\))-meson masses becomes smaller and the EOS becomes softer. [5] We examine the assumption by using the auxiliary field method [6] at finite temperature and/or finite density. It is shown that, in the simple model with four fermion interactions, the value of the \\(\\omega\\)-meson mean field is exactly proportional to the baryon density. **2 Effective meson mass and effective meson-nucleon coupling** In the generalized mean field theory, [4,5] the effective couplings \\(\\hat{g}_{\\sigma,\\omega,\\rho}\\) and the effective meson masses \\(m^{*}_{\\sigma,\\omega,\\rho}\\) are defined by \\[\\hat{g}_{\\sigma} = -\\frac{\\partial\\Sigma_{\\rm v}}{\\partial\\sigma}+\\frac{m^{*}}{E^{* }_{\\rm F}}\\frac{\\partial\\Sigma_{\\rm s}}{\\partial\\sigma},\\ \\hat{g}_{\\omega}=-\\frac{\\partial\\Sigma_{\\rm v}}{\\partial\\omega}+\\frac{m^{*}}{E^ {*}_{\\rm F}}\\frac{\\partial\\Sigma_{\\rm s}}{\\partial\\omega},\\ \\hat{g}_{\\rho}=-\\frac{ \\partial\\Sigma_{\\rm v}}{\\partial\\rho}+\\frac{m^{*}}{E^{*}_{\\rm F}}\\frac{ \\partial\\Sigma_{\\rm s}}{\\partial\\rho},\\] \\[{m^{*}_{\\sigma}}^{2} = \\frac{\\partial^{2}\\epsilon}{\\partial\\sigma^{2}},\\ {m^{*}_{\\omega}}^{2}=-\\frac{ \\partial^{2}\\epsilon}{\\partial\\omega^{2}}\\ {\\rm and}\\ {m^{*}_{\\rho}}^{2}=-\\frac{ \\partial^{2}\\epsilon}{\\partial\\rho^{2}};\\ \\ E^{*}_{\\rm F}=\\sqrt{k^{2}_{\\rm F}+{m^{*}}^{2}}, \\tag{1}\\] where \\(\\sigma\\), \\(\\omega\\), \\(\\rho\\), \\(\\Sigma_{\\rm s}\\), \\(\\Sigma_{\\rm v}\\), \\(k_{\\rm F}\\), \\(m^{*}\\) and \\(\\epsilon\\) are the \\(\\sigma\\)-meson mean field, the \\(\\omega\\)-meson mean field, the \\(\\rho\\)-meson mean field, the scalar self-energy for the nucleon, the vector self-energy for the nucleon, the Fermi momentum, the effective nucleon mass and the energy density for the nuclear matter, respectively. If there is no mixing element in the effective meson-mass matrix, [4,5] the first derivative of the pressure \\(P\\) for the symmetric nuclear matter with respect to the baryon density \\(\\rho_{\\rm B}\\) is given by \\[\\frac{dP}{d\\rho_{\\rm B}} = \\left(\\frac{k^{2}_{\\rm F}}{3\\rho_{\\rm B}E^{*}_{\\rm F}}+\\frac{\\hat {g}^{2}_{\\omega}}{{m^{*}_{\\omega}}^{2}}+\\frac{\\hat{g}^{2}_{\\rho}}{{m^{*}_{\\rho} }^{2}}-\\frac{\\hat{g}^{2}_{\\sigma}}{{m^{*}_{\\sigma}}^{2}}\\frac{{m^{*}}^{2}}{{E ^{*}_{\\rm F}}^{2}}\\right)\\rho_{\\rm B}. 
\\tag{2}\\]If we assume that the value of the \\(\\omega\\)-meson mean field is proportional to the baryon density, \\(\\hat{g}_{\\omega}\\) is related to \\(m_{\\omega}^{*}\\) by the relation \\[\\frac{\\hat{g}_{\\omega}}{g_{\\omega}}=\\frac{{m_{\\omega}^{*}}^{2}}{m_{ \\omega}^{2}}, \\tag{3}\\] where \\(g_{\\omega}\\) and \\(m_{\\omega}\\) are the \\(\\omega\\)-nucleon coupling and the \\(\\omega\\)-meson mass at zero density, respectively. [5] Similarly, if the \\(\\rho\\)-meson mean field is proportional to the isovector density \\(\\rho_{3}\\), the effective coupling \\(\\tilde{g}_{\\rho}\\) for the asymmetric nuclear matter is related with \\(m_{\\rho}^{*}\\) by the relation \\[\\frac{\\tilde{g}_{\\rho}}{g_{\\rho}}=\\frac{{m_{\\rho}^{*}}^{2}}{m_{ \\rho}^{2}}, \\tag{4}\\] where \\(g_{\\rho}\\) and \\(m_{\\rho}\\) are the \\(\\rho\\)-nucleon coupling and the \\(\\rho\\)-meson mass at zero density, respectively. [5] If the values of the vector meson fields are proportional to the corresponding baryonic currents, the mixing elements of the effective meson mass matrix vanish and Eq. (2) holds true. Putting Eq. (3) into Eq. (2), we obtain \\[\\frac{dP}{d\\rho_{\\rm B}} = \\left(\\frac{k_{\\rm F}^{2}}{3\\rho_{\\rm B}E_{\\rm F}^{*}}+\\frac{ \\hat{g}_{\\omega}g_{\\omega}}{{m_{\\omega}}^{2}}+\\frac{\\hat{g}_{\\rho}^{2}}{{m_{ \\rho}^{*}}^{2}}-\\frac{\\hat{g}_{\\sigma}^{2}}{{m_{\\sigma}^{*}}^{2}}\\frac{{m^{*} }^{2}}{E_{\\rm F}^{*}}^{2}\\right)\\rho_{\\rm B}. \\tag{5}\\] If \\(m_{\\omega}^{*}\\) decreases, \\(\\hat{g}_{\\omega}\\) also decreases according to Eq. (3) and \\(\\frac{dP}{d\\rho_{\\rm B}}\\) becomes smaller according to Eq. (5). Therefore, the EOS becomes softer. On the other hand, due to the equation (4), the EOS for the asymmetric nuclear matter becomes softer if the effective \\(\\rho\\)-meson mass becomes smaller. [5] **3 Auxiliary field method** In the previous section, we assume that the value of the \\(\\omega\\)-meson mean field is proportional to the baryon density. In view point of quark (\\(q\\)) physics, the assumption might be natural. [7] In this section, we examine the assumption, using the four fermion interaction model and the auxiliary field method [6] at finite temperature (\\(T\\)) and/or finite density. We start with the following generating function with finite source \\(J_{\\mu}\\) for the vector current \\(\\bar{q}\\gamma_{\\mu}q\\). \\[Z(J) = \\int d\\bar{q}dq\\exp\\left(-\\int_{\\beta V}d^{4}x\\left\\{\\bar{q}( \\partial_{\\mu}\\gamma_{\\mu}-J_{\\mu}\\gamma_{\\mu})q-\\frac{\\lambda}{2}(\\bar{q} \\gamma_{\\mu}q)^{2}\\right\\}\\right)\\!,\\] where \\(\\beta=1/T\\) and \\(V\\) is the three dimensional volume. Inserting the identity for the auxiliary field \\(\\Omega_{\\mu}\\) \\[1 = \\int d\\Omega\\exp\\left\\{-\\frac{1}{2\\lambda}\\int_{\\beta V}d^{4}x \\left(-\\Omega_{\\mu}+\\lambda\\bar{q}\\gamma_{\\mu}q\\right)^{2}\\right\\} \\tag{7}\\] into Eq. 
(6), we obtain \\[Z(J) = \\int d\\bar{q}dqd\\Omega\\exp\\left(\\int_{\\beta V}d^{4}x\\left\\{- \\frac{1}{2\\lambda}\\Omega_{\\mu}^{2}-\\bar{q}(\\partial_{\\mu}\\gamma_{\\mu}-(\\Omega _{\\mu}+J_{\\mu})\\gamma_{\\mu})q\\right\\}\\right) \\tag{8}\\]If we define \\(\\tilde{\\Omega}_{\\mu}=\\Omega_{\\mu}+J_{\\mu}\\), we obtain \\[Z(J)=\\int d\\bar{q}dqd\\tilde{\\Omega}\\exp\\bigg{(}\\int_{\\beta V}d^{4 }x\\left\\{-\\frac{1}{2\\lambda}\\tilde{\\Omega}_{\\mu}^{2}+\\frac{1}{\\lambda}\\tilde{ \\Omega}_{\\mu}J_{\\mu}\\right.\\] \\[\\left.-\\bar{q}(\\partial_{\\mu}\\gamma_{\\mu}-\\tilde{\\Omega}_{\\mu} \\gamma_{\\mu})q-\\frac{1}{2\\lambda}J_{\\mu}^{2}\\right\\}\\bigg{)}. \\tag{9}\\] Differentiating the logarithms of Eqs. (6) and (9) with respect to \\(J_{\\mu}\\), we obtain \\[\\frac{3g_{\\omega}}{\\lambda}\\omega_{\\mu} \\equiv \\frac{1}{\\lambda}<\\Omega_{\\mu}>=\\frac{1}{\\lambda}<\\tilde{\\Omega}_{ \\mu}>-\\frac{1}{\\lambda}J_{\\mu}=<\\bar{q}\\gamma_{\\mu}q> \\tag{10}\\] Putting \\(J_{i}=0\\) (\\(i=1,2,3\\)), \\(\\omega=\\omega_{0}\\) and \\(\\lambda=g_{\\omega}^{2}/m_{\\omega}^{2}\\), we obtain, \\[\\omega=\\frac{g_{\\omega}}{3m_{\\omega}^{2}}<\\bar{q}\\gamma_{0}q>=\\frac{g_{\\omega }}{m_{\\omega}^{2}}\\rho_{\\rm B} \\tag{11}\\] Therefore, the \\(\\omega\\)-meson mean field is proportional to the baryon density \\(\\rho_{\\rm B}\\). **4 Summary and discussions** In summary, we have discussed the relation between the effective vector meson masses and equation of state (EOS) for nuclear matter in the framework of the generalized mean field theory. We have shown that, if we assume that the \\(\\omega\\) (\\(\\rho\\))-meson mean field is proportional to the baryon (isovector) density, the effective \\(\\omega\\) (\\(\\rho\\)) -nucleon coupling also becomes smaller as the effective \\(\\omega\\) (\\(\\rho\\))-meson masses becomes smaller and the EOS becomes softer. We examine the assumption by using the auxiliary field method at finite temperature and/or finite density. In the simple model with four fermion interactions, the mean field of the \\(\\omega\\)-meson, which is composed of quark and anti-quark, is exactly proportional to the baryon density. However, if we use the simplest mean field approximation, the meson-nucleon coupling does not have the density dependence for this simplest model. Therefore, it is needed to generalize our analysis beyond the mean field approximation or to generalize our auxiliary field method to more complex models with many fermion interaction. These works are now in progress. **Acknowledgement** The author would like to thank Prof. T. Kunihiro for useful discussions and suggestions. The author would also like to thank T. Sakaguchi, K. Tuchitani and Y. Horinouchi for the collaboration on the subject discussed in this work. **References** [1] G.E. Brown and M. Rho, Phys. Rev. Lett., **27** (1991) 2720: T. Hatsuda and S. H. Lee, Phys. Rev. **C46** (1992) 46; T. Hatsuda and T. Kunihiro, Phys. Rep. **247** (1994) 221. [2] See, e.g., I. Tserruya, preprint nucl-ex/0204012. [3] F. Weber, Gy. Wolf, T. Maruyama and S. Chiba, preprint nucl-th/0202071; C.H. Hyun, M.H. Kim and S.W. Hong, preprint nucl-th/0308053. [4] K. Tuchitani et al., Int. J. Mod. Phys. **E10** (2001) 245. [5] H. Kouno et al., preprint, nucl-th/0405022; K. Tuchitani et al., preprint, nucl-th/0407004. [6] T. Kashiwa and T. Sakaguchi, Phys. Rev. **D68** (2003) 589, and references therein. [7] T. Kunihiro, private communication.
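As a quick cross-check of the algebra in Section 2 above, the following short Python (sympy) snippet verifies that inserting the relation \\(\\hat{g}_{\\omega}/g_{\\omega}={m^{*}_{\\omega}}^{2}/m_{\\omega}^{2}\\) of Eq. (3) into the \\(\\omega\\)-meson term of Eq. (2) reproduces the corresponding term of Eq. (5). The symbol names are ours, and the snippet is only an illustrative check, not part of the original derivation.

```python
import sympy as sp

# Symbols: vacuum omega-nucleon coupling, vacuum mass, and effective (in-medium) mass
g_omega, m_omega, m_omega_star = sp.symbols("g_omega m_omega m_omega_star", positive=True)

# Eq. (3): the effective coupling scales with the squared effective mass
g_hat_omega = g_omega * m_omega_star**2 / m_omega**2

# Omega-meson contribution to dP/drho_B as it appears in Eq. (2) and in Eq. (5)
term_eq2 = g_hat_omega**2 / m_omega_star**2
term_eq5 = g_hat_omega * g_omega / m_omega**2

# The difference simplifies to zero, confirming the substitution step
assert sp.simplify(term_eq2 - term_eq5) == 0
print("omega term of Eq. (2) equals omega term of Eq. (5) under Eq. (3)")
```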
We discuss the relation between the effective vector meson mass and the equation of state (EOS) for nuclear matter. In the mean field approximation, the EOS becomes softer due to the reduction of the effective \\(\\omega\\)-meson mass, if we assume that the \\(\\omega\\)-meson mean field is proportional to the baryon density. We examine the assumption by using the auxiliary field method at finite temperature and/or density.

SAGA-HE-217-04
**Auxiliary field method at finite temperature and/or finite density**
Hiroaki Kouno
_Department of Physics, Saga University, Saga 840-8502, Japan_
# Probabilistic forecasts of temperature: measuring the utility of the ensemble spread Stephen Jewson _Corresponded address:_ RMS, 10 Eastcheap, London, EC3M 1AJ, UK. Email: [email protected] November 14, 2018 ## 1 Introduction Forecasts of the expected surface air temperature over the next 15 days are readily available from commercial forecast vendors. The best of these forecasts have been proven to be consistently better than climatology and such forecasts are widely used within industry. There is also demand within industry for _probabilistic_ forecasts of temperature i.e. forecasts that predict the whole distribution of temperatures. Such forecasts are much more useful than forecasts of the expectation alone in situations where the ultimate variables being predicted are a non-linear function of temperature, as is commonly the case. Probabilistic forecasts of temperature can be made rather easily from forecasts of the expected temperature using linear regression. The parameters of the regression model are derived using past forecasts and past observations after these forecasts and observations have been converted to standardized anomalies using the climatological mean and standard deviation. Probabilistic forecasts made in this way provide a standard against which forecasts made using more sophisticated methods should be compared, and it turns out that they are hard to beat (our own attempts to beat regression, which have more or less failed, are summarised in Jewson (2004)). Regression-based probabilistic forecasts have a skill that doesn't vary with weather state. It has been shown, however, that the uncertainty around forecasts of the expectation _does_ vary with weather state and that these variations are predictable, to a certain extent, using the spread of ensemble forecasts (see, for example, Kalnay and Dalcher (1987), and many others). What is not clear is whether the level of predicability in the variations of the uncertainty is useful in any material sense or whether the beneficial effect on the final forecast of the temperature distribution is too small to be relevant. How might we investigate this question of how much useful information there is in the ensemble spread? One method that is frequently used to assess the amount of information in the spread from ensemble forecasts is the spread-skill correlation (SSC), defined in a number of different ways (see for example Barker (1991), Whitaker and Loughe (1998) and Hou et al. (2001)). SSC is usually calculated before the ensemble forecast has been calibrated (i.e. before it has been turned into a probabilistic forecast). However, it is the properties of the forecast _after_ calibration that we really care about. In this article we investigate some of the properties of the spread-skill correlation, and in particular how it interacts with the calibration procedure. We will show that, under certain combinations of the definition of the SSC and the calibration procedure, the SSC is the same before and after the calibration, implying that pre-calibration estimates of the SSC can be used to predict post-calibration values. However we also note that even the post-calibration SSC is not a particularly good indicator of the level of useful information that can be derived from the ensemble spread and we describe how it can be possible that the SSC is high but the ensemble spread is effectively useless as a predictor of the future temperature distribution. 
Finally we present some simple measures that improve on the SSC and that can be used to ascertain whether the information in the ensemble spread is really useful or not.

## 2 The linear anomaly correlation

We start by reviewing some of the properties of the linear anomaly correlation (LAC). This will help us understand how to think about the properties of the SSC. The amount of information in a temperature forecast from an NWP model is commonly measured using the LAC between the forecast and an analysis. One of the reasons that the LAC is a useful measure is that it is conserved under linear transformations, and so if the forecast is calibrated using a linear transformation (such as linear regression) then the LAC post-calibration is the same as the LAC pre-calibration. This means that one doesn't actually have to perform the calibration to know what the post-calibration LAC is going to be.

## 3 The spread-skill correlation

In a similar way the SSC is commonly applied to the output from NWP models to assess the ability of the model to capture variations in the uncertainty (see for example Buizza (1997)). Four definitions of SSC are in common use, denoted SSC\\({}_{1}\\) to SSC\\({}_{4}\\) below: the first two are based on the ensemble standard deviation \\(s\\) and the last two on the ensemble variance \\(s^{2}\\).

## 4 Conservation properties of the spread-skill correlation

The conservation properties of the SSC are straightforward and somewhat obvious. They can be derived based on the observation that linear correlations are not affected by linear transformations of either variable. Under the standard deviation based spread regression model the spread-skill correlation defined as either SSC\\({}_{1}\\) or SSC\\({}_{2}\\) will be conserved because these measures base the SSC on \\(s\\) and the calibration of \\(s\\) is simply a linear transformation. The SSC measures based on \\(s^{2}\\) will not, however, be conserved when using standard deviation based spread regression. Alternatively under the variance based spread regression model the spread-skill correlation defined as either SSC\\({}_{3}\\) or SSC\\({}_{4}\\) will be conserved because these measures base the SSC on \\(s^{2}\\) and the calibration of \\(s^{2}\\) is now a linear transformation. However SSC\\({}_{1}\\) and SSC\\({}_{2}\\) will not be conserved under the variance based spread regression model. Together these results suggest that the choice of SSC measure is not arbitrary but should be influenced by whichever of the calibration models works better for the data in hand.

## 5 The offset problem

We have shown that the SSC can be conserved during calibration as long as the definition of SSC is chosen to match the method used for the calibration. There is, however, a problem with the SSC as a measure for the amount of information in a probabilistic forecast. This problem is caused by the spread-skill offset given by \\(\\gamma+\\delta\\overline{s}\\) in equation 5 and by \\(\\gamma^{2}+\\delta^{2}\\overline{s^{2}}\\) in equation 6. When the offset is large relative to the amplitude of the variability of the uncertainty we find ourselves in a situation in which predictions of the variations of the uncertainty are more or less irrelevant, even if they are very good, simply because they don't contribute much as a fraction of the total uncertainty. In such cases the SSC may be large but the ensemble spread could be ignored without reducing the skill of the calibrated forecast: linear regression would work as well as spread regression. We clearly need other measures to assess whether the spread is really useful that take into account the _size_ of the calibrated variations in uncertainty.
Since this question depends crucially on the offset and the offset can only be derived during the calibration procedure it will not be possible to estimate the usefulness of the spread before calibration has taken place. This is a fundamental difference between forecasts of spread and forecasts of the expectation, since, as we have seen, it _is_ possible to estimate the information in a forecast of the expectation before the calibration has taken place. This difference arises because when we predict the mean temperature we are concerned with predicting changes from the normal while when we predict the uncertainty we are only interested in the extent to which our estimate of the uncertainty improves the forecast of the temperature distribution. Thus we are interested in actual values of the uncertainty rather than just departures from normal. ## 6 Other measures of the utility of ensemble spread Because of the offset problem with the SSC we now suggest some alternative methods for measuring the usefulness of the ensemble spread. All of these measures can only be calculated _after_ calibration, as explained above. ### Coefficient of variation of spread Our first measure is the _coefficient of variation of spread_ defined as: \\[\\mathrm{COVS}=\\frac{\\sigma_{\\sigma}}{\\mu_{\\sigma}} \\tag{7}\\] where \\(\\sigma_{\\sigma}\\) is the standard deviation of variations in the uncertainty or the spread, and \\(\\mu_{\\sigma}\\) is the mean level in the uncertainty or the spread. COVS was introduced in Jewson et al. (2003) and measures the size of the variations of the spread relative to the mean spread. Values for the COVS versus lead time for ECMWF ensemble forecasts for London Heathrow are given in that paper. If the post-calibration COVS is small then that implies that the variations in the uncertainty are small relative to the mean uncertainty, and, depending on the level of accuracy required, that it may be reasonable to ignore the variations in the uncertainty completely and model it as constant i.e. that linear regression may be as good as spread regression. ### Spread mean variability ratio The limitation of using the COVS to understand the importance of variations in the ensemble spread is it doesn't take into account the size of the variations in the mean temperature. One can imagine the following two limiting cases: 1. The expected temperature is the same every day but the standard deviation of possible temperatures varies. In this case forecasts of the uncertainty of temperature would be very useful. We call this a '_mean constant spread varies_' world. 2. The expected temperature varies from day to day but the standard deviation of possible temperatures is constant. In this case forecasts of the uncertainty of temperature would not be useful. We call this a '_mean varies spread constant_' world. In order to distinguish between these two scenarios we define the _spread-mean variability ratio_ as: \\[\\mathrm{SMVR}_{1}=\\frac{\\sigma_{\\sigma}}{\\sigma_{\\mu}} \\tag{8}\\] where \\(\\sigma_{\\sigma}\\) is the standard deviation of variations in the uncertainty or the spread and \\(\\sigma_{\\mu}\\) is the standard deviation of variations in the expected temperature. An alternative definition based on variance would be: \\[\\mathrm{SMVR}_{2}=\\frac{\\sigma_{\\sigma}^{2}}{\\sigma_{\\mu}^{2}} \\tag{9}\\] The SMVR measures the size of variations of the spread relative to the size of the variations of the mean. 
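To make the two diagnostics concrete, here is a minimal Python sketch that evaluates the COVS (equation 7) and the SMVR (equations 8 and 9) from time series of the calibrated uncertainty and the calibrated mean. The array names and the synthetic numbers below are illustrative assumptions only, chosen to mimic the large-offset, small-variation situation described above; they are not values from the forecasts analysed in this paper.

```python
import numpy as np

def covs(sigma_hat):
    """Coefficient of variation of spread, equation 7: std of the calibrated
    uncertainty divided by its mean level."""
    return np.std(sigma_hat) / np.mean(sigma_hat)

def smvr(sigma_hat, mu_hat, squared=False):
    """Spread-mean variability ratio, equations 8 and 9: variability of the
    calibrated uncertainty relative to variability of the calibrated mean."""
    ratio = np.std(sigma_hat) / np.std(mu_hat)
    return ratio**2 if squared else ratio

# Synthetic stand-ins for a year of calibrated forecasts at one lead time:
# the mean temperature anomaly varies a lot, while the uncertainty has a
# large constant offset and only small variations around it.
rng = np.random.default_rng(0)
mu_hat = 5.0 * np.sin(np.linspace(0.0, 8.0 * np.pi, 365)) + rng.normal(0.0, 1.0, 365)
sigma_hat = 2.0 + 0.15 * rng.standard_normal(365)

print(f"COVS   = {covs(sigma_hat):.3f}")              # small: spread varies little
print(f"SMVR_1 = {smvr(sigma_hat, mu_hat):.3f}")       # small: mean-varies world
print(f"SMVR_2 = {smvr(sigma_hat, mu_hat, squared=True):.3f}")
```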
Small values of the SMVR imply that we are close to the mean-varies-spread-constant world while large values of SMVR imply that we are close to the mean-constant-spread-varies world. Figure 1 shows the post-calibration SMVR\\({}_{1}\\) for the forecasts used in Jewson et al. (2003). We see that the SMVR is small at all leads, with smallest values at the shortest leads. We thus see that we are much closer to the mean-varies-spread-constant world than we are to the mean-constant-spread-varies world, and hence that predicting variations in the uncertainty is likely to be less useful than it would be in a world in which the SMVR were larger. ### Impact on the log-likelihood The final measure of the utility of forecasts of spread that we present is simply the change in the cost function that is being used to calibrate and evaluate the forecast. We ourselves prefer to evaluate probabilistic forecasts of temperature using the log-likelihood from classical statistics (Fisher (1912), Jewson (2003)) and hence we consider the change in the log-likelihood due to the inclusion of information from the ensemble spread as a measure of how useful that information is. When we evaluated the usefulness of the spread in temperature forecasts derived from the ECMWF ensemble using this method we found that the spread was not very important (Jewson, 2004). One aspect of our comparison of forecasts using log-likelihoods in Jewson (2004) is that we calculated log-likelihood based on the whole distribution of future temperatures. This was deliberate: it is predicting the whole distribution of temperature that we are interested in. However, if instead we were mainly interested in the tails of the distribution then a version of the log-likelihood based only on the tails would be more appropriate and the ensemble spread would perhaps be more useful. ## 7 Summary We have considered how to measure the importance of variations in the ensemble spread when making probabilistic temperature forecasts. First we have considered the interaction between measures of the spread-skill correlation (SSC) and the methods used to calibrate the forecast. We find that certain definitions of SSC are conserved through the calibration process for certain calibration algorithms, implying that the choice of SSC measure to be used should be linked to the choice of calibration method. However we also discuss why the SSC is not a particularly useful measure of the information in the ensemble spread and explain how a high value of the SSC does not necessarily mean that the spread improves the quality of the final forecast because of the possibility of a large offset in the calibrated uncertainty. We have discussed some alternative and preferable diagnostics that focus on the role the spread plays in the final calibrated forecast. The first of these diagnostics measures the size of variations in the uncertainty relative to the mean uncertainty and the second measures the size of variations in the uncertainty relative to the size of the variations in the expected temperature. We calculate the latter for a year of forecast data and find that we are much closer to a world in which the mean varies and the spread is fixed than we are to a world in which the the spread varies and the mean is fixed. This seems to partly explain why we see so little improvement in the skill of probabilistic forecasts when we add the ensemble spread as an extra predictor. 
## 8 Acknowledgements Thanks to Jeremy Penzer and Christine Ziehmann for some interesting discussions on this topic. ## 9 Legal statement SJ was employed by RMS at the time that this article was written. However, neither the research behind this article nor the writing of this article were in the course of his employment, (where 'in the course of their employment' is within the meaning of the Copyright, Designs and Patents Act 1988, Section 11), nor were they in the course of his normal duties, or in the course of duties falling outside his normal duties but specifically assigned to him (where 'in the course of his normal duties' and 'in the course of duties falling outside his normal duties' are within the meanings of the Patents Act 1977, Section 39). Furthermore the article does not contain any proprietary information or trade secrets of RMS. As a result, the author is the owner of all the intellectual property rights (including, but not limited to, copyright, moral rights, design rights and rights to inventions) associated with and arising from this article. The author reserves all these rights. No-one may reproduce, store or transmit, in any form or by any means, any part of this article without the author's prior written permission. The moral rights of the author have been asserted. The contents of this article reflect the author's personal opinions at the point in time at which this article was submitted for publication. However, by the very nature of ongoing research, they do not necessarily reflect the author's current opinions. In addition, they do not necessarily reflect the opinions of the author's employer. ## References * Barker (1991) T Barker. The relationship between spread and forecast error in extended range forecasts. _Journal of Climate_, 4:733-742, 1991. * Buizza (1997) R Buizza. Potential forecast skill of ensemble prediction and spread and skill distributions of the ECMWF ensemble prediction system. _Mon. Wea. Rev._, 125:99-119, 1997. * Fisher (1912) R Fisher. On an absolute criterion for fitting frequency curves. _Messenger of Mathematics_, 41:155-160, 1912. * Hou et al. (2001) D Hou, E Kalnay, and K Droegmeier. Objective verification of the SAMEX ensemble forecasts. _MWR_, 129:73-91, 2001. * Jewson (2003) S Jewson. Use of the likelihood for measuring the skill of probabilistic forecasts. _arXiv:physics/0308046_, 2003. * Jewson (2004) S Jewson. A summary of our recent research into practical methods for probabilistic temperature forecasting. _arxiv:physics/0409096_, 2004. * Jewson et al. (2003) S Jewson, A Brix, and C Ziehmann. A new framework for the assessment and calibration of ensemble temperature forecasts. _Atmospheric Science Letters_, 2003. * Kalnay and Dalcher (1987) E Kalnay and A Dalcher. Forecasting forecast skill. _Monthly Weather Review_, 115:349-356, 1987. * Whitaker and Loughe (1998) J Whitaker and A Loughe. The relation between ensemble spread and ensemble mean skill. _Monthly Weather Review_, 126:3292-3302, 1998. Figure 1: The SMVR\\({}_{1}\\) calculated from one year of ECMWF ensemble forecasts for London Heathrow calibrated using the standard deviation based spread regression model.
The spread of ensemble weather forecasts contains information about the spread of possible future weather scenarios. But how much information does it contain, and how useful is that information in predicting the probabilities of future temperatures? One traditional answer to this question is to calculate the spread-skill correlation. We discuss the spread-skill correlation and how it interacts with some simple calibration schemes. We then point out why it is not, in fact, a useful measure for the amount of information in the ensemble spread, and discuss a number of other measures that are more useful.
# Probabilistic temperature forecasting: a comparison of four spread-regression models Stephen Jewson _Correspondence address_: RMS, 10 Eastcheap, London, EC3M 1AJ, UK. Email: [email protected] ## 1 Introduction There is considerable demand within industry for probabilistic forecasts of temperature, particularly from industries that routinely use probabilistic analysis such as insurance, finance and energy. However there is considerable disagreement among meteorologists about how such forecasts should be produced and at present no adequately calibrated probabilistic forecasts are available commercially. Those who need to use probabilistic forecasts have to make them themselves. How, then, should probabilistic forecasts of temperature be made? A number of very different methods have been suggested in the literature such as those described in Mylne et al. (2002), Roulston and Smith (2003) and Raftery et al. (2003). However it seems that all three of these methods, although complex, suffer from the shortcoming that they don't calibrate the amplitude of variations in the ensemble spread but rather leave the amplitude to be determined as a by-product of the calibration of the mean. We take a very different, and simpler, approach to the development of probabilistic forecasts than the authors cited above. Our approach is based on the following philosophy: * The baseline for comparison for all probabilistic temperature forecasts should be a distribution derived very simply by using linear regression around a single forecast or an ensemble mean. * More complex methods can then be tested against this baseline. Before anything more complex than linear regression is adopted on an operational basis it should be shown to clearly beat linear regression in out of sample tests. Unfortunately none of the studies cited above compared the methods they proposed with linear regression, and, given that they seem not to calibrate the ensemble spread correctly, it would seem possible that they might not perform as well. We have followed this philosophy and, based on our analysis of one particular dataset of past forecasts and past observations we have shown that: * Moving from constant-parameter linear regression to seasonal parameter linear regression gives a huge improvement in forecast skill for forecasts of both the mean temperature and the distribution of temperatures (Jewson, 2004a) * Adding spread as a predictor gives only a very small improvement (Jewson et al. (2003), Jewson (2003b)). * Generalising to allow for non-normality gives no improvement at all (Jewson, 2003a). All these results are summarised and discussed in Jewson (2004c). In this article we focus on the second of these conclusions: that using the spread as an extra predictor brings only a very small improvement to forecast skill. This is somewhat disappointing given that it had been hoped by some that use of the ensemble spread would turn out to be an important factor in the creation of probabilistic forecasts. We are trying to get a better understanding of _why_ the ensemble spread brings so little benefit in the tests we have performed. In Jewson (2004b) we concluded that this is because of: 1. The scoring system we use. We calibrate and score probabilistic forecasts using the likelihood of classical statistics (Fisher (1912), Jewson (2003c)). Likelihood, as we have used it, is a measure that considers the ability of the forecast to predict the whole distribution of future temperatures. 
Much of the mass in the distribution of temperature is near the mean and so the likelihood naturally tends to emphasize the importance of the mean rather than the spread. If we were to use a score that puts more weight onto the tails of the distribution then the spread might prove more important (although such a score would not then reflect our main interest, which is in the prediction of the whole distribution). 2. The low values of the coefficient of variation of spread (COVS). Once we have calibrated our ensemble forecast data we find that the uncertainty does not vary very much relative to the mean level of the uncertainty (i.e. the COVS is low). Thus if we approximate the uncertainty with a constant this does not degrade the forecast to any great extent, and we have not been able to detect a significant impact of the spread in out of sample testing. That the variations in the calibrated uncertainty are small could be either because the actual uncertainty does not vary very much or because the ensemble spread is not a good predictor for the actual uncertainty. In fact it is likely to be a combination of these two effects. 3. The low values of the spread mean variability ratio (SMVR). We have also found that the amplitude of the variations in the uncertainty in the calibrated forecast is small relative to the amplitude of the variations in the mean temperature (i.e. the SMVR is low). As a result accurate prediction of the (small) variations in the uncertainty is not very important relative to accurate prediction of the (large) variations in the mean temperature. However in addition to these reasons it is also possible that we have been using the ensemble spread wrongly in our predictions. The model we have been using represents the unknown uncertainty \\(\\sigma\\) as a linear function of the ensemble spread (Jewson et al., 2003): \\[\\sigma = \\hat{\\sigma}+\\mbox{noise} \\tag{1}\\] \\[= \\delta+\\gamma s+\\mbox{noise} \\tag{2}\\] But this model is entirely ad-hoc. Why a linear function? We chose linear because it is the simplest way to calibrate both the mean uncertainty and the amplitude of the variability of the uncertainty, and not on the basis of any theory or analysis of the empirical spread-skill relationship. This suggests it is very important to test other models to see if they perform any better. In this paper we will compare the original spread-regression model with 3 other spread-regression models. The four models we compare all have four parameters and so can be compared in-sample. This is important because the signals we are looking for are weak and obtaining long stationary series of past forecasts is more or less impossible at this point in time. At some point the numerical modellers will hopefully start providing long (i.e. multiyear) back-test time series from their models. This will allow more thorough out of sample testing of calibration schemes such as the spread-regression model and will facilitate the comparison of models with different numbers of parameters: meanwhile we do what we can with the limited data available. ## 2 Four spread regression models The four spread-regression models that we will test are all based on linear regression between anomalies of the temperature and anomalies of the ensemble mean: \\[T_{i}\\sim N(\\alpha+\\beta m_{i},\\hat{\\sigma}) \\tag{3}\\] The difference between the models is in the representation of \\(\\hat{\\sigma}\\). 
The original standard-deviation-based spread regression model is: \\[\\hat{\\sigma}_{i}=\\gamma+\\delta s_{i} \\tag{4}\\] The variance-based model is: \\[\\hat{\\sigma}_{i}^{2}=\\gamma^{2}+\\delta^{2}s_{i}^{2} \\tag{5}\\] The inverse-standard-deviation-based model is: \\[\\frac{1}{\\hat{\\sigma}_{i}}=\\gamma+\\frac{\\delta}{s_{i}} \\tag{6}\\] and the inverse-variance-based-model is: \\[\\frac{1}{\\hat{\\sigma}_{i}^{2}}=\\gamma^{2}+\\frac{\\delta^{2}}{s_{i}^{2}} \\tag{7}\\] Following Jewson (2004a) the parameters \\(\\alpha,\\beta,\\gamma,\\delta\\) all vary seasonally using a single sinusoid. We fit each model by finding the parameters that maximise the likelihood (using numerical methods). We note that for very small variations in \\(s\\) all these models can be linearised and end up the same as the linear-in-standard-deviation model given in equation 4. ## 3 Results The first and most important test is to see which of the models achieves the greatest log-likelihood at the maximum. The results from this test are shown in figure 1 (actually in terms of negative log-likelihood so that smaller is better). In each case the spread-regression results (dashed lines) are shown relative to results for a constant-variance model (solid line). What we see is that the four models achieve roughly the same decrease in the negative log-likelihood and that in none of the cases is the decrease very large compared with the change in the log-likelihood from one lead time to the next. These changes are also small compared with the change in the log-likelihood that was achieved by making the bias correction vary seasonally (Jewson, 2004a). Figure 2 shows the same data as is shown in figure 1 but as differences between the spread-regression models and the constant-variance model. Again we see that there is little to choose between the models. Figure 3 shows a fifty-day sample of the calibrated mean temperature from the constant-variance model with the spread-regression calibrated temperatures overlaid. The differences are very small indeed and can only really be seen when they are plotted explicitly in figure 4. Figure 5 shows the calibrated spread from the constant-variance model and the calibrated spread from the four spread-regression models. The uncertainty prediction from the constant variance model varies slowly from one season to the next and has a kink because of the presence of missing values in the forecast data. We now see rather significant differences between the four spread regression models. The size of these differences suggests that the variations in \\(s\\) are _not_ so small that the four spread regression models are equivalent to the linear-in-standard-deviation model. ## 4 Conclusions How to produce good probabilistic temperature forecasts from ensemble forecasts remains a contentious issue. This is mainly because of disagreement about how to use the information in the ensemble spread. We have compared 4 simple parametric models that convert the spread into an estimate for the forecast uncertainty. All the models allow for an offset and a term that scales the amplitude of the variability of the uncertainty. Although the four models lead to visible differences in the calibrated spread we have found only tiny differences between the impact of these four models on the log-likelihood achieved. Also none of the models clearly dominates the others. 
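To make the comparison concrete, the following Python sketch shows one way the four parametrisations of \\(\\hat{\\sigma}\\) (equations 4 to 7) and the likelihood fit of equation 3 can be coded. It is only an illustrative sketch: the seasonal sinusoid in the parameters is omitted, the data below are synthetic placeholders rather than the ECMWF forecasts used in this paper, and the function and variable names are our own.

```python
import numpy as np
from scipy.optimize import minimize

def sigma_hat(s, gamma, delta, model):
    """Calibrated uncertainty for the four parametrisations, equations 4-7.
    's' is the ensemble spread; the returned value is a standard deviation."""
    if model == "sd":       # eq. 4:  sigma = gamma + delta*s
        return gamma + delta * s
    if model == "var":      # eq. 5:  sigma^2 = gamma^2 + delta^2*s^2
        return np.sqrt(gamma**2 + delta**2 * s**2)
    if model == "inv_sd":   # eq. 6:  1/sigma = gamma + delta/s
        return 1.0 / (gamma + delta / s)
    if model == "inv_var":  # eq. 7:  1/sigma^2 = gamma^2 + delta^2/s^2
        return 1.0 / np.sqrt(gamma**2 + delta**2 / s**2)
    raise ValueError(f"unknown model: {model}")

def negative_log_likelihood(params, temp, ens_mean, ens_spread, model):
    """Negative log-likelihood of T_i ~ N(alpha + beta*m_i, sigma_hat_i), eq. 3.
    Seasonally varying parameters are omitted here for brevity."""
    alpha, beta, gamma, delta = params
    mu = alpha + beta * ens_mean
    sig = sigma_hat(ens_spread, gamma, delta, model)
    if np.any(~np.isfinite(sig)) or np.any(sig <= 0.0):
        return np.inf
    return np.sum(0.5 * np.log(2.0 * np.pi * sig**2) + (temp - mu) ** 2 / (2.0 * sig**2))

def fit_model(temp, ens_mean, ens_spread, model):
    first_guess = np.array([0.0, 1.0, 1.0, 0.1])
    result = minimize(negative_log_likelihood, first_guess,
                      args=(temp, ens_mean, ens_spread, model),
                      method="Nelder-Mead")
    return result.x, result.fun

# Synthetic standardized anomalies as placeholders for real forecast data
rng = np.random.default_rng(1)
ens_mean = rng.normal(0.0, 1.0, 365)
ens_spread = np.abs(rng.normal(1.0, 0.2, 365)) + 0.1
temp = ens_mean + (0.8 + 0.2 * ens_spread) * rng.standard_normal(365)

for model in ("sd", "var", "inv_sd", "inv_var"):
    params, nll = fit_model(temp, ens_mean, ens_spread, model)
    print(f"{model:8s}  params={np.round(params, 3)}  NLL={nll:.2f}")
```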
These results lead us to conclude that: * the variations in \\(s\\) are not so small that the calibration of the spread can be linearised, which would make all four models equivalent * but the changes in the calibrated uncertainty _are_ small enough that they do not have a great impact on the maximum likelihood achieved in any of the models * implying that there is simply not very much information in the variations in the spread It is possible that the models are overfitted to a certain extent. This is unavoidable given that we only have one year of data for fitting these multiparameter models. That none of the models dominates is rather curious: perhaps all the models are equally bad and none of them come close to modelling the relationship between spread and skill in a reasonable way. This raises the possibility that better results could perhaps be achieved by using other parametrisations. It is difficult to see how to make further progress on these questions until longer series of stationary back-test data is made available by the numerical modellers. Meanwhile it seems that a pragmatic approach to producing probabilistic forecasts would be to stick with the constant variance model since more complex models have shown only a small benefit in in-sample testing, and do not show a significant benefit in out-of-sample testing. ## 5 Legal statement SJ was employed by RMS at the time that this article was written. However, neither the research behind this article nor the writing of this article were in the course of his employment, (where 'in the course of their employment' is within the meaning of the Copyright, Designs and Patents Act 1988, Section 11), nor were they in the course of his normal duties, or in the course of duties falling outside his normal duties but specifically assigned to him (where 'in the course of his normal duties' and 'in the course of duties falling outside his normal duties' are within the meanings of the Patents Act 1977, Section 39). Furthermore the article does not contain any proprietary information or trade secrets of RMS. As a result, the authors are the owner of all the intellectual property rights (including, but not limited to, copyright, moral rights, design rights and rights to inventions) associated with and arising from this article. The authors reserve all these rights. No-one may reproduce, store or transmit, in any form or by any means, any part of this article without the authors' prior written permission. The moral rights of the authors have been asserted. The contents of this article reflect the authors' personal opinions at the point in time at which this article was submitted for publication. However, by the very nature of ongoing research, they do not necessarily reflect the authors' current opinions. In addition, they do not necessarily reflect the opinions of the authors' employers. ## References * Fisher (1912) R Fisher. On an absolute criterion for fitting frequency curves. _Messenger of Mathematics_, 41:155-160, 1912. * Jewson (2003a) S Jewson. Do probabilistic medium-range temperature forecasts need to allow for non-normality? _arXiv:physics/0310060_, 2003a. * Jewson (2003b) S Jewson. Moment based methods for ensemble assessment and calibration. _arXiv:physics/0309042_, 2003b. * Jewson (2003c) S Jewson. Use of the likelihood for measuring the skill of probabilistic forecasts. _arXiv:physics/0308046_, 2003c. * Jewson (2004a) S Jewson. Improving probabilistic weather forecasts using seasonally varying calibration parameters. 
_arXiv:physics/0402026_, 2004a.
* Jewson (2004b) S Jewson. Probabilistic forecasting of temperature: measuring the useful information in the ensemble spread. _arXiv:physics/0410039_, 2004b.
* Jewson (2004c) S Jewson. A summary of our recent research into practical methods for probabilistic temperature forecasting. _arXiv:physics/0409096_, 2004c.
* Jewson et al. (2003) S Jewson, A Brix, and C Ziehmann. A new framework for the assessment and calibration of ensemble temperature forecasts. _Atmospheric Science Letters_, 2003.
* Mylne et al. (2002) K Mylne, C Woolcock, J Denholm-Price, and R Darvell. Operational calibrated probability forecasts from the ECMWF ensemble prediction system: implementation and verification. In _Preprints of the Symposium on Observations, Data Assimilation and Probabilistic Prediction_, pages 113-118. AMS, 2002.
* Raftery et al. (2003) A Raftery, F Balabdaoui, T Gneiting, and M Polakowski. Using Bayesian model averaging to calibrate forecast ensembles. _University of Washington Department of Statistics Technical Report_, 440, 2003.
* Roulston and Smith (2003) M Roulston and L Smith. Combining dynamical and statistical ensembles. _Tellus A_, 55:16-30, 2003.

Figure 1: The negative log-likelihood scores achieved by a linear regression (solid line) and four spread-regression models (dotted lines).

Figure 3: The calibrated mean temperature from linear regression (solid line) and four spread-regression models (dotted lines). The dotted lines cannot be distinguished because they are so close to the solid lines.
Spread regression is an extension of linear regression that allows for the inclusion of a predictor that contains information about the variance. It can be used to take the information from a weather forecast ensemble and produce a probabilistic prediction of future temperatures. There are a number of ways that spread regression can be formulated in detail. We perform an empirical comparison of four of the most obvious methods applied to the calibration of a year of ECMWF temperature forecasts for London Heathrow.
# Galilean invariant exchange correlation functionals with quantum memory

Yair Kurzweil and Roi Baer

Corresponding author: FAX: +972-2-6513742, [email protected]

Department of Physical Chemistry and the Lise Meitner Minerva-Center for Computational Quantum Chemistry, the Hebrew University of Jerusalem, Jerusalem 91904 Israel.

Time dependent density functional theory (TDDFT) [1] is routinely used in many calculations of electronic processes in molecular systems. Almost all applications use "adiabatic" potentials describing an immediate response of the Kohn-Sham potential to the temporal variations of the electron density. The shortcomings of these potentials were studied by several authors[2-5]. Some of the problems are associated with self-interaction, an ailment inherited from ground-state density functional theory[6]. Other deficiencies are known or suspected to be associated with the adiabatic assumption. The first attempt to include non-adiabatic effects[7] was based on a simple form of the exchange-correlation (XC) potential in the linear response limit. Studying an exactly solvable system, this form was shown to lead to spurious time-dependent evolution[8]. The failure was traced back to violation of a general rule: the XC force density, derived from the potential, should integrate to zero [9]. Convincing arguments were then presented[10], demonstrating that non-adiabatic effects cannot be easily described within TDDFT and instead a _current density_ based theory must be used. Vignale and Kohn [10] gave an expression for the XC potentials applicable for linear response and long wavelengths. That the total XC force is zero is a valid fact not only in TDDFT but also in time-dependent current density functional theory (TDCDFT). It stems from the basic requirement that the total force on the non-interacting particles must be equal to the total force on the interacting particles. This is so because otherwise a different total acceleration results and the two densities or current densities will be at variance. In the interacting system the total (Ehrenfest) force can only result from an external potential: because of Newton's third law the electrons cannot exert a net force upon themselves. In TDDFT the total force equals the sum of the external force, the Hartree force and the XC force. Since the Hartree force integrates to zero (Newton's third law again), the total XC force must do so as well. A similar general argument can be applied to the total torque, showing that the net XC torque must be zero. These requirements then have to be imposed on the approximate XC potentials[9]. The question we deal with in this paper is how to construct simple approximations to the XC potentials that ensure zero XC force and torque. One way to enforce the zero XC force condition is via the requirement that the potentials be derived from a TDDFT action that is Galilean invariant. The XC action \\(S\\left[\\mathbf{u}\\right]\\) is a functional of the electron fluid velocity (\\(\\mathbf{u}\\left(\\mathbf{r},t\\right)=\\mathbf{j}/n\\) where \\(n\\left(\\mathbf{r},t\\right)\\) and \\(\\mathbf{j}\\left(\\mathbf{r},t\\right)\\) are the particle and current densities) defined on a Keldysh contour[11, 12], from which the vector potential \\(\\mathbf{a}=\\delta S/\\delta\\mathbf{u}\\) is obtained as a functional derivative. Demanding that it is Galilean invariant means that observers in different frames report the same value of the XC action. Galilean frames can be translationally or rotationally accelerating.
In variance in the first case is called translational invariance (TI) and in the second case, rotational invariance (RI). We discuss this in more detail bellow. Kurzweil and Baer[12] have recently developed a general TDDFT derived from a TI XC action. Their XC action was however not RI and so did not enforce the zero torque condition. It is the purpose of this paper to further develop the theory along similar lines, to achieve zero XC torque as well. We limit our discussion to as simple a theory as possible, by considering as building blocks only low order derivatives of basic quantities. As noted above, Galilean invariance of the action means that observers in different Galilean frames report the same value for the XC action. We consider two types of relative motion: translational and rotational. One observer, using \"unprimed\" coordinates, denotes the current density as \\(\\mathbf{j}\\left(\\mathbf{R},t\\right)\\) and particle density as \\(n\\left(\\mathbf{R},t\\right).\\) A second observer is using primed coordinates. The primed origin is accelerating with respect to that of the unprimed origin where its location is \\(\\mathbf{x}\\left(t\\right).\\) A given point in space designated as \\(\\mathbf{R}\\) by the first observer and \\(\\mathbf{R}^{\\prime}=\\mathbf{R}+\\mathbf{x}\\left(t\\right)\\) by the second. Here we assume that the axes of the two coordinate systems are parallel, i.e. there is no rotation. Since both observers are studying the same electronic system, the density and velocity functions must be related by: \\[n^{\\prime}\\!\\left(\\mathbf{R}^{\\prime}\\!,t\\right) =n\\left(\\mathbf{R},t\\right)=n\\!\\left(\\mathbf{R}^{\\prime}-\\mathbf{ x}\\left(t\\right)\\!,t\\right) \\tag{1}\\] \\[\\mathbf{u}^{\\prime}\\!\\left(\\mathbf{R}^{\\prime}\\!,t\\right) =\\mathbf{u}\\!\\left(\\mathbf{R},t\\right)+\\dot{\\mathbf{x}}\\left(t \\right)=\\mathbf{u}\\!\\left(\\mathbf{R}^{\\prime}-\\mathbf{x}\\left(t\\right)\\!,t \\right)+\\dot{\\mathbf{x}}\\left(t\\right)\\] In ref. [12] we showed that in order to obtain zero XC force, we demand translational invariance i.e. \\(S\\!\\left[\\mathbf{u}\\right]=S\\!\\left[\\mathbf{u}^{\\prime}\\right]\\). Zero total XC-torque is guaranteed when the XC action isRI, \\(S\\left[\\mathbf{u}\\right]=S\\!\\left[\\mathbf{u}^{\\prime\\prime}\\right]\\) where the double-primed quantities are related to the coordinate system of a third observer whose axis is rotating around the common origin. At time \\(t\\) the point \\(\\mathbf{R}\\) in space will be labeled by this observer as: \\(\\mathbf{R}^{\\prime\\prime}=M\\left(t\\right)\\mathbf{R}\\) where \\(M\\left(t\\right)\\) is some instantaneous orthogonal matrix (with unit determinant) describing the rotated axes (for convenience, we assume that \\(M\\equiv 1\\) when \\(t=0\\)). The density and velocity fields as defined by this third observer are: \\[n^{\\prime\\prime}\\!\\left(\\mathbf{R}^{\\prime\\prime}\\!,t\\right) =n\\left(\\mathbf{R},t\\right)=n\\!\\left(M\\left(t\\right)^{\\!-1} \\mathbf{R}^{\\prime\\prime}\\!,t\\right)\\] \\[\\mathbf{u}^{\\prime\\prime}\\!\\left(\\mathbf{R}^{\\prime\\prime}\\!,t\\right) =M\\left(t\\right)\\mathbf{u}\\!\\left(\\mathbf{R},t\\right)+\\dot{M} \\left(t\\right)\\mathbf{R} \\tag{2}\\] \\[=M\\left(t\\right)\\mathbf{u}\\!\\left(M\\left(t\\right)^{\\!-1}\\mathbf{ R}^{\\prime\\prime}\\!,t\\right)+\\dot{M}\\left(t\\right)M\\left(t\\right)^{\\!-1} \\mathbf{R}^{\\prime\\prime}\\] We want to describe now a method for generating GI actions. 
The way we follow is to identify GI quantities and write the action in terms of them. What are the simply accessible GI quantities? We follow previous works [8, 12, 13] and consider the Lagrangian coordinates, \\(\\mathbf{R}\\left(\\mathbf{r},t\\right)\\) defined by: \\[\\dot{\\mathbf{R}}\\left(\\mathbf{r},t\\right)=\\mathbf{u}\\!\\left(\\mathbf{R}\\left( \\mathbf{r},t\\right),t\\right)\\qquad\\mathbf{R}\\left(\\mathbf{r},0\\right)=\\mathbf{r} \\tag{3}\\] \\(\\mathbf{R}\\left(\\mathbf{r},t\\right)\\) is the position at time \\(t\\) of a fluid element originating at a point labeled \\(\\mathbf{r}\\); in other words, \\(\\mathbf{R}\\left(\\mathbf{r},t\\right)\\) is the trajectory of the fluid element \\(\\mathbf{r}\\). The coordinate \\(\\mathbf{r}\\) can be viewed as a Eularian coordinate, so \\(\\mathbf{R}\\left(\\mathbf{r},t\\right)\\) is the Eularian-Lagrangian transformation (ELT). Inventing memory functionals in the Lagrangian frame is easier because local memory is naturally described _within_ a fluid element. It can be readily checked that that the Lagrangian density \\(N\\left(\\mathbf{r},t\\right)=n\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right),t\\right)\\) is in fact GI, i.e. it is invariant with respect to both linear and rotational accelerating observers. Consider first accelerations. We assume both observers label the different fluid elements in the same way (i.e. their axes coincide at \\(t=0\\)). Thus: \\(\\mathbf{R}^{\\prime}\\left(\\mathbf{r},t\\right)=\\mathbf{R}\\left(\\mathbf{r},t \\right)+\\mathbf{x}\\left(t\\right)\\) and from (1): \\[N^{\\prime}\\left(\\mathbf{r},t\\right) =n^{\\prime}\\!\\left(\\mathbf{R}^{\\prime}\\!\\left(\\mathbf{r},t \\right),t\\right)=n^{\\prime}\\!\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right)+ \\mathbf{x}\\left(t\\right),t\\right) \\tag{4}\\] \\[=n\\!\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right),t\\right)=N\\left( \\mathbf{r},t\\right).\\] Here and henceforth we use the notation \\(\\partial_{i}\\equiv\\partial\\!\\!/\\partial_{r_{i}}\\), \\(i=1,2,3\\). A rotating observer with the same labeling convention sees \\(\\mathbf{R}^{\\prime\\prime}\\!\\left(\\mathbf{r},t\\right)=M\\left(t\\right)\\mathbf{R} \\left(\\mathbf{r},t\\right)\\), so from (2): \\[N^{\\prime\\prime}\\!\\left(\\mathbf{r},t\\right)=n^{\\prime\\prime}\\!\\left(\\mathbf{R} ^{\\prime\\prime}\\!\\left(\\mathbf{r},t\\right),t\\right)=n\\!\\left(\\mathbf{R}\\left( \\mathbf{r},t\\right),t\\right)=N\\left(\\mathbf{r},t\\right), \\tag{5}\\] Eqs. (4) and (5) show that \\(N\\left(\\mathbf{r},t\\right)\\) is indeed GI so a simple form for the action functional can be immediately written down as \\(S^{\\left(\\mathrm{I}\\right)}\\left[\\mathbf{u}\\right]=s_{\\mathrm{I}}\\left[N\\left[ \\mathbf{u}\\right]\\right]\\). Looking for a more general yet still simple form, we now consider the Jacobian matrix of the ELT: \\[\\Im_{{}_{\\mathrm{I}}}=\\partial_{{}_{\\mathrm{I}}}R_{{}_{\\mathrm{I}}}\\left( \\mathbf{r},t\\right) \\tag{6}\\] This matrix is TI, as can be straightforwardly verified[12]. However, \\(\\Im\\) is not RI. Indeed, the following transformation, derived from the definition of the rotation, \\(\\mathbf{R}^{\\prime\\prime}=M\\left(t\\right)\\mathbf{R}\\), must hold: \\[\\Im^{\\prime\\prime}\\!\\left(\\mathbf{r},t\\right)=M\\left(t\\right)\\Im\\left(\\mathbf{ r},t\\right) \\tag{7}\\] While \\(\\Im\\) is not GI, its determinant is: since \\(\\det\\Im^{\\prime\\prime}=\\det M\\det\\Im\\) and \\(\\det M=1\\). 
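The invariance properties used here are easy to check numerically. The sketch below is illustrative only: the one-dimensional velocity field and the frame shift x(t) are made up. It integrates the trajectory equation (3) for the original and for a translationally accelerated observer (eq. 1) and confirms that the Jacobian of the ELT is unchanged; it then verifies, for a generic 3x3 Jacobian and a random proper rotation M (eq. 7), that the determinant and the combination \\(\\Im^{T}\\Im\\) (the ELT metric tensor considered next) are unaffected by the rotation.

```python
# Illustrative check only (toy fields, not from the paper).
import numpy as np
from scipy.integrate import solve_ivp

# --- translational invariance of the ELT Jacobian (eqs. 1 and 3, 1D toy flow)
def u(R, t):                          # hypothetical lab-frame velocity field
    return 0.3 * np.sin(R) * np.cos(0.5 * t)

x_shift = lambda t: 0.1 * t**2        # accelerating frame, x(0) = 0
xdot = lambda t: 0.2 * t

def final_positions(vel, r0, t_final=2.0):
    sol = solve_ivp(lambda t, R: vel(R, t), (0.0, t_final), r0,
                    rtol=1e-9, atol=1e-11)
    return sol.y[:, -1]               # R(r, t_final) for each fluid label r

r = np.linspace(0.0, 2.0, 41)         # fluid-element labels
R = final_positions(u, r)
u_prime = lambda Rp, t: u(Rp - x_shift(t), t) + xdot(t)
R_prime = final_positions(u_prime, r)

print(np.allclose(R_prime, R + x_shift(2.0), atol=1e-6))   # R' = R + x(t)
print(np.allclose(np.gradient(R_prime, r),
                  np.gradient(R, r), atol=1e-6))            # same Jacobian

# --- rotational invariance of det(J) and of J^T J (eq. 7)
rng = np.random.default_rng(0)
J = rng.normal(size=(3, 3))           # a generic ELT Jacobian matrix
M, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(M) < 0:              # make M a proper rotation, det M = +1
    M[:, 0] *= -1

print(np.isclose(np.linalg.det(M @ J), np.linalg.det(J)))   # determinant is GI
print(np.allclose((M @ J).T @ (M @ J), J.T @ J))            # J^T J is GI
```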
One can then suggest that \\(S^{\\left(\\mathrm{II}\\right)}\\left[\\mathbf{u}\\right]=s_{\\mathrm{II}}\\left[\\det\\Im\\left[\\mathbf{u}\\right]\\right]\\). Comparing with \\(S^{\\left(\\mathrm{I}\\right)}\\), though, we find that \\(S^{\\left(\\mathrm{II}\\right)}\\) contains nothing new! This is because the function \\(N\\left(\\mathbf{r},t\\right)\\) is directly related to the Jacobian determinant. Indeed, the number of particles in a fluid element must be constant so \\(n\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right),t\\right)d^{3}R=n\\left(\\mathbf{r},0\\right)d^{3}r\\), and thus: \\[J\\left(\\mathbf{r},t\\right)^{-1}\\equiv\\left|\\det\\left[\\Im\\left(\\mathbf{r},t\\right)\\right]\\right|^{-1}=N\\left(\\mathbf{r},t\\right)/n_{0}\\left(\\mathbf{r}\\right), \\tag{8}\\] where \\(n_{0}\\left(\\mathbf{r}\\right)=n\\left(\\mathbf{r},0\\right)\\). Thus, the functional \\(s_{\\mathrm{I}}\\) can also be thought of as a functional of \\(\\det\\left[\\Im\\right]\\). Our first attempt to introduce an action in terms of \\(\\Im\\) yielded nothing new. Let us return to Eq. (7) and search for another invariant quantity. This leads us to consider the \\(3\\times 3\\) symmetric positive-definite ELT metric tensor: \\[g\\left(\\mathbf{r},t\\right)=\\Im\\left(\\mathbf{r},t\\right)^{T}\\Im\\left(\\mathbf{r},t\\right) \\tag{9}\\] Under a rotation, Eq. (7) gives \\(g^{\\prime\\prime}=\\Im^{T}M^{T}M\\Im=g\\), so \\(g\\) is GI and can serve as the basic variable of the XC action. Writing the action as a functional of \\(g\\), with a memory kernel \\(Q_{ij}\\left(\\mathbf{r},t\\right)\\) conjugate to \\(g_{ij}\\), the XC vector potential is obtained by functional differentiation with respect to the velocity at time \\(t\\) and position \\(\\mathbf{R}\\equiv\\mathbf{R}\\left(\\mathbf{r},t\\right):\\) \\[a_{k}\\left(\\mathbf{R},t\\right)=\\frac{1}{2}\\iint Q_{ij}\\left(\\mathbf{r}^{\\prime},t^{\\prime}\\right)\\frac{\\delta g_{ij}\\left(\\mathbf{r}^{\\prime},t^{\\prime}\\right)}{\\delta u_{k}\\left(\\mathbf{R},t\\right)}dt^{\\prime}d^{3}r^{\\prime}, \\tag{1.13}\\] (here we use the convention that repeated indices are summed over). We note that the integration on time here is actually an integration over the Keldysh contour[11], described fully in ref [12]. The derivative is given by: \\[\\frac{\\delta g_{ij}\\left(x^{\\prime}\\right)}{\\delta u_{k}\\left(X\\right)}=\\left[\\Im_{mi}\\left(x^{\\prime}\\right)\\partial_{j}^{\\prime}+\\Im_{mj}\\left(x^{\\prime}\\right)\\partial_{i}^{\\prime}\\right]G_{mk}\\left(x^{\\prime};X\\right) \\tag{1.14}\\] where \\(x^{\\prime}\\equiv\\left(\\mathbf{r}^{\\prime},t^{\\prime}\\right)\\), \\(X\\equiv\\left(\\mathbf{R},t\\right)\\) and \\(G_{mk}\\) is derived in ref. [12], given by: \\[G_{mk}\\left(x^{\\prime};X\\right)=\\left[\\Im\\left(\\mathbf{r}^{\\prime},t^{\\prime}\\right)\\Im\\left(\\mathbf{r}^{\\prime},t\\right)^{-1}\\right]_{mk}\\theta\\left(t^{\\prime}-t\\right)\\delta\\left(\\mathbf{R}\\left(\\mathbf{r}^{\\prime},t\\right)-\\mathbf{R}\\right) \\tag{1.15}\\] Using Eqs.
(1.14) and (1.15) in (1.13), we find, integrating by parts: \\[a_{{}_{k}}\\left(\\mathbf{R},t\\right)=-\\iint\\partial_{{}_{i}} \\left[Q_{{}_{\\mathbb{I}}}\\left(\\mathbf{r}^{\\prime},t^{\\prime}\\right)\\Im_{{}_ {\\mathbb{I}}}\\left(\\mathbf{r}^{\\prime},t^{\\prime}\\right)\\right] \\tag{1.16}\\] \\[G_{{}_{\\mathbb{I}}}\\left(\\mathbf{r}^{\\prime},t^{\\prime};\\mathbf{ R},t\\right)d^{3}r^{\\prime}dt^{\\prime}\\] leading to the following general form vector potential: \\[\\mathbf{a}\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right),t\\right)=J\\left(\\mathbf{r },t\\right)^{{}^{-1}}\\Im\\left(\\mathbf{r},t\\right)^{{}^{-1}}\\mathbf{A}\\left( \\mathbf{r},t\\right) \\tag{1.17}\\] Where: \\[A_{{}_{\\mathbb{I}}}\\left(\\mathbf{r},t\\right)=\\int_{{}_{0}}^{{}^{t}}\\Im\\left( \\mathbf{r},t^{\\prime}\\right)_{{}_{nl}}^{{}^{T}}\\partial_{{}_{i}}\\left[\\Im\\left( \\mathbf{r},t^{\\prime}\\right)Q\\left(\\mathbf{r},t^{\\prime}\\right)\\right]_{{}_{ \\mathbb{I}}}dt^{\\prime} \\tag{1.18}\\] Is the \"Lagrangian\" vector potential. An explicit derivation of equation (1.18) shows that it is the time integral \\(\\int_{{}_{\\infty}}^{{}^{t}}\\) instead of \\(\\int_{{}_{0}}^{{}^{t}}\\) which appears. However, had we made the development on a Keldysh contour the correct from of the integral (1.17) would have resulted. The procedure was demonstrated in ref. [12]. Eqs. (1.11), (1.17) and (1.18) are the central result of this paper, resulting in a general form for a potential which yields zero force and torque. This general form should find useful application in cases where the electronic systems interact with strong fields. We would like to compare our results with previous work on TDCDFT potentials in the linear response regime[10; 14]. For this purpose, consider Eqs. (1.17) and (1.18) developed up to first order quantities, linear in the perturbation: \\(\\mathbf{R}\\rightarrow\\mathbf{r}+\\mathbf{R}_{{}_{1}},\\)\\(\\Im\\to 1+\\Delta\\), \\(J\\to 1+tr\\Delta\\) and \\(Q\\to q\\left(\\mathbf{r}\\right)\\)\\(+\\theta\\left(\\mathbf{r},t\\right)\\) where the \\(q\\left(\\mathbf{r}\\right)\\) is a zeroth-order term and \\(\\theta\\left(\\mathbf{r},t\\right)\\) is the first order term, given by the following expression: \\(\\theta_{{}_{nl}}\\left(\\mathbf{r},t\\right)=\\int d^{3}r^{\\prime}\\!\\int_{{}_{0}}^{{}^{t }}\\!\\Theta_{{}_{ml}}^{{}^{\\mu}}\\!\\left(\\mathbf{r},\\mathbf{r}^{\\prime}t-s\\right) \\Delta_{{}_{ll}}\\left(\\mathbf{r}^{\\prime},s\\right)ds\\). 
Calculating the vector potential to zero and first orders we obtain: \\[\\mathbf{a}_{0}+\\mathbf{a}_{1}=\\mathbf{A}_{0}+\\mathbf{A}_{1}-\\left(\\Delta+\\left[tr\\Delta\\right]I\\right)\\mathbf{A}_{0} \\tag{1.19}\\] where the zero order term \\(\\mathbf{A}_{0}\\) is given by: \\[\\left(A_{0}\\right)_{m}\\left(\\mathbf{r},t\\right)=t\\,\\partial_{i}q_{im}\\left(\\mathbf{r}\\right), \\tag{1.20}\\] as follows from Eq. (1.18) with \\(\\Im\\to 1\\) and \\(Q\\to q\\left(\\mathbf{r}\\right)\\); the first-order term \\(\\mathbf{A}_{1}\\) is obtained in the same way from the first-order kernel \\(\\theta\\left(\\mathbf{r},t\\right)\\) and the gradients of the ELT \\(\\mathbf{R}\\left(\\mathbf{r},t\\right)\\).

**Acknowledgements** We gratefully acknowledge the support of the German Israel Foundation.

## References

* [1] E. Runge and E. K. U. Gross, Phys. Rev. Lett. **52**, 997 (1984).
* [2] H. Appel, E. K. U. Gross, and K. Burke, Phys. Rev. Lett. **90**, 043005 (2003).
* [3] N. T. Maitra, K. Burke, H. Appel, and E. K. U. Gross, in _Reviews in Modern Quantum Chemistry: A celebration of the contributions of R. G. Parr_, edited by K. D. Sen (World-Scientific, Singapore, 2002), Vol. II, p. 1186.
* [4] M. van Faassen, P. L. de Boeij, R. van Leeuwen, J. A. Berger, and J. G. Snijders, Phys. Rev. Lett. **88**, 186401 (2002).
* [5] N. T. Maitra, F. Zhang, R. J. Cave, and K. Burke, J. Chem. Phys. **120**, 5932 (2004).
* [6] W. Kohn and L. J. Sham, Phys. Rev. **140**, A1133 (1965).
* [7] E. K. U. Gross and W. Kohn, Phys. Rev. Lett. **55**, 2850 (1985).
* [8] J. F. Dobson, Phys. Rev. Lett. **73**, 2244 (1994).
* [9] G. Vignale, Phys. Rev. Lett. **74**, 3233 (1995).
* [10] G. Vignale and W. Kohn, Phys. Rev. Lett. **77**, 2037 (1996).
* [11] R. van Leeuwen, Phys. Rev. Lett. **80**, 1280 (1998).
* [12] Y. Kurzweil and R. Baer, J. Chem. Phys. **in press** (2004).
* [13] J. F. Dobson, M. J. Bunner, and E. K. U. Gross, Phys. Rev. Lett. **79**, 1905 (1997).
* [14] G. Vignale, C. A. Ullrich, and S. Conti, Phys. Rev. Lett. **79**, 4878 (1997).
* [15] I. V. Tokatly, cond-mat/0408352v1 (2004).
Today, most application of time-dependent density functional theory (TDDFT) use adiabatic exchange-correlation (XC) potentials that do not take into account non-local temporal effects. Incorporating such \"memory\" terms into XC potentials is complicated by the constraint that the derived force and torque densities must integrate to zero at every instance. This requirement can be met by deriving the potentials from an XC action that is Galilean invariant (GI). We develop a class of simple but flexible forms for an action that respect these constraints. The basic idea is to formulate the action in terms of the Eularian-Lagrangian transformation (ELT) metric tensor, which is itself GI. The general form of the XC potentials in this class is then derived and the linear response limit is derived as well.
# Neutron Fraction and Neutrino Mean Free Path Predictions in Relativistic Mean Field Models P.T.P. Hutauruk, C.K. Williams, A. Sulaksono, T. Mart Departemen Fisika, FMIPA, Universitas Indonesia, Depok 16424, Indonesia ###### pacs: 13.15.+g, 25.30.Pt, 97.60.Jd + Footnote †: preprint: FIS-UI-TH-04-02 The finite-range (FR) (see Refs. [1; 2; 3; 4]) and point-coupling (PC)(see Refs. [5; 6; 7; 8; 9]) types of relativistic mean field (RMF) models have been quite successful to describe the bulk as well as single particle properties in a wide mass spectrum of nuclei. The early version of RMF-FR is based on a Lagrangian density which uses nucleon, sigma, omega and rho mesons as the degrees of freedom with additional cubic and quartic nonlinearities of sigma meson. For example NLZ, NL1, NL3 and NL-SH parameter sets belong to this version. Recently, inspired by the effective field and density functional theories for hadrons, a new version of this model (ERMF-FR) has been constructed [3; 10]. It has the same terms like the previous RMF-FR but with additional isoscalar and isovector tensor terms and nonlinear terms in the form of sigma, omega and rho mesons combination. One of parameter sets of this version is G2. Besides yielding accurate predictions in finite nuclei and normal nuclear matter [3; 4; 10], G2 has the demanding features like a positive value of quartic sigma meson coupling constant that leads to the existence of lower bound in energy spectrum of this model [11; 12] and to the missing zero sound mode in the high density symmetric nuclear matter [13]. Moreover, the agreement of the nuclear matter and the neutron matter equation of states (EOS) in high density of G2 with the Dirac Brueckner Hartree Fock (DBHF) calculation [4; 11] is better than those of NL1, NL3 and TM1 models (the standard RMF-FR plus a quartic omega meson interaction). The difference between RMF-PC and RMF-FR is due to the replacement of mesonic interactions in the FR model by density dependent interactions. It is evident that RMF-PC and RMF-FR serve similar quality in predicting finite nuclei and normal nuclear matter [5; 7; 9]. This is due to the fact that \"finite-range\" effects in RMF-PC model are effectively absorbed by the coupling constants. Therefore in connection with different treatments of \"finite-range\" in both models, studying the behavior of the PC model in high density should be interesting. In this report, we choose the VA4 parameter set of Ref. [6] (ERMF-PC) because it can be properly extrapolated to the high density and it has also density dependent self- and cross-interactions in the nonlinear terms. So far the EOS of a neutron star has not been known for sure [14]. However, recently [15] the flow of matter in heavy ion collisions has been used to determine the pressure of nuclear matter with a density from 2 until 5 times the nuclear saturation density (\\(\\rho_{0}\\)). Reference [15] has found that these data can be explained only by the variational calculation of Akmal _et al._[16]. Unfortunately, this interaction cannot be successfully applied to the case of finite nuclei [11]. Reference [11] found that the EOS predicted by G2 is in agreement with data. This result is remarkable, since Ref. [17] states that the minimal requirement for an accurate neutrino mean free path (NMFP) is a correct prediction in the low density limit, as well as the consistency with the corresponding EOS. 
On the other hand, one should remember that many-body corrections are important but they depend on the model and the approximation of strong interaction used [14; 17; 18; 19; 20; 21; 22; 23; 24; 25]. According to Refs. [26; 27] all RMF-FR models yield lower threshold densities for direct URCA process than those of variational calculations [16]. In the neutron star cooling model, Migdal _et al._[28] treated this fact as a fragile point of RMF-FR models. So, they disregarded direct URCA from their analysis but Lattimer _et al._[29] used this fact to develop their direct URCA scenario. Therefore, in this report we will compare the neutron matter prediction in high density from the G2, NLZ and VA4 models in order to check the result of Ref. [11] and the possibly different predictions from ERMF-PC and ERMF-FR due to the different treatment of the \"finite-range effects\". Furthermore, the agreement between the G2 EOS with experimental data has motivated us to calculate NMFP using this model for direct URCA process. A similar assumption as in Ref. [22] is used, i.e., the ground state of the neutron star is reached once the temperature has fallen below a few MeV. This state is gradually reached from the later stages of the cooling phase. The system is then quite dense and cool so that zero temperature is valid. In this case the direct URCA neutrino-neutron scattering is kinematically possible for low energy neutrinos at and above the threshold density when the proton fraction exceeds 1/9 [29] or slightly larger if muons are present. Furthermore, the absorption reaction is suppressed. For simplicity, we neglect the RPA correlations. The effects of self- and cross-interactions terms and the treatment of finite-range in high density can be observed by extrapolating the EOS which is presented by the neutron matter pressure \\(P\\) and the effective mass \\(M^{*}\\), as shown in Figs. 1 and 2, where we compare the results obtained from the G2 [10], NLZ [1], and VA4 [6] models as a function of \\(\\rho_{B}/\\rho_{0}\\). It is found that the nuclear matter EOS of VA4 is stiffer than those of NLZ and G2, even for \\(\\rho_{B}\\) less than \\(\\rho_{0}\\). However, the G2 EOS is softer than the NLZ one at the high density but not at the low density. This fact emphasizes the result of Ref. [11] that the crucial role of self- and cross-interactions of meson exchange model is to soften the EOS at the high density. It is shown in Fig. 2 that for \\(1\\leq\\rho_{B}/\\rho_{0}\\leq 5\\), the effective mass \\(M^{*}_{G2}>M^{*}_{VA4}\\), but for \\(\\rho_{B}/\\rho_{0}\\geq 5\\) one observes that \\(M^{*}_{G2}<M^{*}_{VA4}\\). This indicates that quantitatively \\(M^{*}\\) depends on the model. We note here that the effective masses of G2 and VA4 depend on self- and cross-interaction terms implicitly. We also note that other mechanisms could also produce a larger \\(M^{*}\\), e.g., in the Zimanyi-Moszkowski and linear Hartree-Fock Walecka models [22], where those terms are not present. Although those models give a regular NMFP, they are quite unsuccessful in finite nuclei applications, especially in predicting the single particle spectra of nuclei [30]. Therefore, it is interesting to check whether or not the relation between a large \\(M^{*}\\) and a regular NMFP also appears in the case of ERMF models. Now, we calculate the NMFP of the neutron star matter by employing G2, VA4 and NLZ models. Following Refs. 
[21; 22], we start with the neutrino differential scattering cross-section \\[\\frac{1}{V}\\frac{d^{3}\\sigma}{d^{2}\\Omega^{\\prime}dE^{\\prime}_{\\nu}}=-\\frac{G_{F}^{2}}{32\\pi^{2}}\\frac{E^{\\prime}_{\\nu}}{E_{\\nu}}\\mbox{Im}(L_{\\mu\\nu}\\Pi^{\\mu\\nu}). \\tag{1}\\] Here \\(E_{\\nu}\\) and \\(E^{\\prime}_{\\nu}\\) are the initial and final neutrino energies, respectively, \\(G_{F}=1.023\\times 10^{-5}/M^{2}\\) is the weak coupling, and \\(M\\) is the nucleon mass. The neutrino tensor \\(L_{\\mu\\nu}\\) can be written as \\[L_{\\mu\\nu}=8[2k_{\\mu}k_{\\nu}+(k\\cdot q)g_{\\mu\\nu}-(k_{\\mu}q_{\\nu}+q_{\\mu}k_{\\nu})\\mp i\\epsilon_{\\mu\\nu\\alpha\\beta}k^{\\alpha}q^{\\beta}], \\tag{2}\\] where \\(k\\) is the initial neutrino four-momentum and \\(q=(q_{0},\\vec{q})\\) is the four-momentum transfer. The polarization tensor \\(\\Pi^{\\mu\\nu}\\), which defines the target particle species, can be written as \\[\\Pi^{j}_{\\mu\\nu}(q)=-i\\int\\frac{d^{4}p}{(2\\pi)^{4}}\\mbox{Tr}[G^{j}(p)J^{j}_{\\mu}G^{j}(p+q)J^{j}_{\\nu}], \\tag{3}\\] where \\(j=n,p,e^{-},\\mu^{-}\\). \\(G(p)\\) is the target particle propagator and \\(p=(p_{0},\\vec{p})\\) is the corresponding initial four-momentum. The currents \\(J^{j}_{\\mu}\\) are \\(\\gamma^{\\mu}(C^{j}_{V}-C^{j}_{A}\\gamma_{5})\\). The explicit forms of \\(G^{j}(p)\\), \\(C^{j}_{V}\\) and \\(C^{j}_{A}\\) for every constituent, together with their explanations, can be found in Ref. [22]. The NMFP (symbolized by \\(\\lambda\\)) as a function of the initial neutrino energy at a certain density is obtained by integrating the cross section over the time and vector components of the neutrino momentum transfer. As a result we obtain [21; 22] \\[\\frac{1}{\\lambda(E_{\\nu})}=\\int_{q_{0}}^{2E_{\\nu}-q_{0}}d|\\vec{q}|\\int_{0}^{2E_{\\nu}}dq_{0}\\frac{|\\vec{q}|}{E^{\\prime}_{\\nu}E_{\\nu}}2\\pi\\frac{1}{V}\\frac{d^{3}\\sigma}{d^{2}\\Omega^{\\prime}dE^{\\prime}_{\\nu}}. \\tag{4}\\] Since in our study we assume that the neutron star matter consists only of neutrons, protons, electrons, and muons, the relative fraction of each constituent should be taken into account in the NMFP calculation.

Figure 1: Equations of state (EOS) of the neutron matter.

Figure 2: Effective masses (\\(M^{*}\\)) of the neutron matter.

The relative fraction is determined by the chemical potential equilibrium and the charge neutrality of the neutron star at zero temperature. The neutron fractions for all models are shown in Fig. 3. Qualitatively, all parameter sets show a similar trend in the fraction of each constituent, i.e., when the neutron fraction is decreasing, the other constituent \\((p,e^{-},\\mu^{-})\\) fractions are increasing. Quantitatively, isovector terms are responsible for the high proton fraction. G2 has a smaller neutron fraction than VA4 and NLZ. Therefore, even though G2 has an acceptable EOS, its proton fraction is too large. This fact leads to a low threshold density for the direct URCA process. We note that this is ruled out by the analysis of neutron star cooling data [26; 31; 32]. Thus, this result indicates that significant improvements in the treatment of the isovector sector of ERMF-FR are urgently required. The variational calculation of Akmal _et al._ [16] allows for a direct URCA process only for \\(\\rho_{B}/\\rho_{0}>5\\). Linear Walecka (linear FR) and Zimanyi-Moszkowski (derivative coupling) Hartree-Fock models of Ref. [22] yield a higher critical density for the direct URCA process.
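The double integral in eq. (4) is straightforward to evaluate numerically once the differential cross section is known. The sketch below is illustrative only: `dcross` is a made-up placeholder with a smooth shape, not the cross section obtained from the RMF polarization tensors of Eqs. (1)-(3), and the nested quadrature takes \\(q_{0}\\) as the outer variable with \\(|\\vec{q}|\\) running from \\(q_{0}\\) to \\(2E_{\\nu}-q_{0}\\).

```python
# Illustrative only: numerical evaluation of the mean-free-path integral,
# eq. (4), with a *toy* placeholder for the differential cross section.
import numpy as np
from scipy.integrate import dblquad

def dcross(q0, q):
    """Stand-in for (1/V) d^3 sigma / (d^2 Omega' dE'_nu) -- hypothetical."""
    return 1.0e-9 * q * np.exp(-q0 / 20.0)

def inverse_mfp(E_nu):
    """1/lambda(E_nu) following eq. (4); energies and momenta in MeV."""
    def integrand(q, q0):                      # q = |vec q| (inner), q0 (outer)
        E_nu_prime = max(E_nu - q0, 1.0e-12)   # outgoing neutrino energy
        return 2.0 * np.pi * q / (E_nu_prime * E_nu) * dcross(q0, q)
    val, _ = dblquad(integrand, 0.0, E_nu,     # q0 in [0, E_nu]
                     lambda q0: q0,            # |q| lower limit
                     lambda q0: 2.0 * E_nu - q0)
    return val

E_nu = 5.0                                     # MeV, as used in the text
print("toy mean free path:", 1.0 / inverse_mfp(E_nu))
```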
Isovector contributions of these models do not drastically change the proton fraction. But on the other hand, all Hartree-Fock models of Ref. [22] are unable to give a good prediction in finite nuclei, especially in the single particle properties [33; 34]. It may be interesting to see also the consistency of their EOS with experimental data [15]. The NMFP for all models can be seen in Fig. 4. Here we use a neutrino energy of \\(E_{\ u}=5\\) MeV. In general, from medium to high density, \\(\\lambda_{NLZ}\\) is larger than \\(\\lambda_{G2}\\) and \\(\\lambda_{VA4}\\). In high density region we clearly see that \\(\\lambda_{G2}\\approx\\lambda_{VA4}\\). The NMFP difference among all models appears to be significant around \\(1\\leq\\rho_{B}/\\rho_{0}\\leq 5\\) (medium density). For \\(\\rho_{B}/\\rho_{0}\\) smaller than 1, \\(\\lambda_{VA4}\\approx\\lambda_{G2}\\approx\\lambda_{NLZ}\\). In other words, in the limit of low density all parameter sets serve similar \\(\\lambda\\) prediction as we expected. In Fig. 5 we show the dependence of \\(\\lambda\\) with respect to the \\(M^{*}\\) and neutron fraction. Obviously, NLZ has a maximum NMFP at \\(M^{*}\\approx 200\\) MeV and neutron fraction \\(\\approx 0.75\\). These lead to a bump in \\(\\lambda_{NLZ}\\) as shown in Fig. 4. On the other hand, G2 demonstrates no maximum in \\(M^{*}\\) and neutron fraction dependences, leading to a smoothly decreasing function of \\(\\lambda_{G2}\\) displayed in Fig. 4. For comparison, previous NMFP calculations by using all Hartree-Fock models [22] showed also no anomaly. In these models, the predicted NMFP falls off faster than that of the Hartree type model as the density increases. In conclusion, the EOS and NMFP of ERMF models in the high density states have been studied. It is found that the ERMF-FR and ERMF-PC models have different behaviors in high density and even by using a parameter set that predicts an acceptable EOS, the calculated proton fraction in neutron star is still too large. Figure 4: Neutrino mean free path (NMFP) in the neutron star matter. Figure 5: Neutrino mean free paths (NMFP) in the neutron star matter as functions of \\(M^{*}\\) and neutron fraction for all parameter sets. Figure 3: Neutron fraction in the neutron star matter with and without isovector terms. Isovector terms are responsible for this. Therefore, improvements in the treatment of the isovector sector of ERMF-FR should be done. Different from the Hartree-Fock calculation of Ref. [22], only the parameter set with an acceptable EOS (G2) has a regular NMFP. In order to minimize the anomalous behavior of \\(\\lambda\\), a relatively large \\(M^{*}\\) in RMF models is more favorable. It seems that the relatively large \\(M^{*}\\) in the ERMF models at high density originates from the presence of the self- and cross-interactions in nonlinear terms. The RMF models with relatively large \\(M^{*}\\) retain their regularities partly or fully even for a small neutron fraction. The works of A.S. and T.M. have been supported in part by the QUE project. ## References * (1) P.-G. Reinhard, Rep. Prog. Phys. **52**, 439 (1989); and references therein. * (2) P. Ring, Prog. Part. Nucl. Phys **37**, 193 (1996); and references therein. * (3) B.D. Serot, and J.D. Walecka, Int. J. Mod. Phys. E **6**, 515 (1997); and references therein. * (4) T. Sil, S.K. Patra, B.K. Sharma, M. Centelles, and X. Vinas, nucl-th/0406024 (2004). * (5) B.A. Nikolaus, T. Hoch, D.G. Madland, Phys. Rev. C **46**, 1757 (1992). * (6) J.J. Ruszak and R.J. 
Furnstahl, Nucl. Phys. A **627**, 95 (1997). * (7) T.Buervenich, D.G. Madland, J.A. Maruhn, P.-G Reinhard, Phys. Rev. C **65**, 044308 (2002). * (8) A. Sulaksono, T. Buervenich, J.A. Maruhn, P-G. Reinhard, and W. Greiner, Ann. Phys. (N.Y.) **308**, 354 (2003). * (9) A. Sulaksono, T. Buervenich, J.A. Maruhn, P-G. Reinhard, and W. Greiner, Ann. Phys. (N.Y.) **306**, 36 (2003). * (10) R.J. Furnstahl, B.D. Serot, and H.B. Tang, Nucl. Phys. A **598**, 539 (1996); _ibid._ Nucl. Phys. A **615**, 441 (1997). * (11) P. Arumugam, B.K. Sharma, P.K. Sahu and S.K. Patra, nucl-th/0308050 (2003). * (12) G. Baym Phys. Rev. **117**, 886 (1960). * (13) J.C. Caillon, P. Gabinski, J. Labarsoque, Nucl. Phys. A **696**, 623 (2001). * (14) S. Reddy, M. Prakash, and J.M. Latimer, Phys. Rev. D **58**, 013009 (1998); and references therein. * (15) P. Danielewicz, R. Lacey, and W. G. Lynch, Science **293**, 1592 (2002). * (16) A. Akmal, V. R Pandharipande and D. G. Ravenhall, Phys. Rev. C **58**, 1804 (1998). * (17) C.J. Horowitz and M.A. Parez-Garcia, Phys. Rev. C **68**, 025803 (2003). * (18) L. Mornas, Nucl. Phys. A **721**, 1040 (2003). * (19) L. Mornas, A Perez, Eur. Phys. J. A **13**, 383 (2002). * (20) S. Reddy, M. Prakash, J.M. Latimer, and J.A. Pons, Phys. Rev. C **59**, 288 (1999). * (21) C. J. Horowitz, K. Wehberger, Nucl. Phys. A **531**, 665 (1991); _ibid._ Phys. Lett. B **266**, 236 (1991). * (22) R. Niembro, P. Bernardos, M. Lopez-Quelle and S. Marcos, Phys. Rev. C **64**, 055802 (2001). * (23) S. Yamada, and H. Toki, Phys. Rev. C **61**, 015803 (2000). * (24) C. Shen, U. Lombardo, N. Van Giai and W. Zuo, Phys. Rev. C **68**, 055802 (2003). * (25) J. Margueron, J. Navarro, and N. Van Giai, Nucl. Phys. A **719**, 169 (2003). * (26) D. Blaschke, H. Grigorian, D.N. Voskresensky, astro-ph/0403170 (2004). * (27) E.E. Kolometisev, D.N. Voskresensky, Phys. Rev. C **68**, 015803 (2003). * (28) A.B. Migdal, E.E. Saperstein, M.A. Troitsky, D.N. Voskresensky, Phys. Report. **192**, 179 (1990). * (29) J.M. Lattimer, C.J. Pethick, M. Prakash, P. Haensel, Phys. Rev. Lett. **66**, 2701 (1991). * (30) Guo Hua, T. v Chossy, W. Stocker, Phys. Rev. C **61**, 014307 (2000). * (31) D.G. Yakovlev,O.Y. Gnedin, A.D. Kaminker, K.P. Lavenfish, A.Y. Potekhin, astro-ph/0306143 (2003). * (32) S. Tsuruta, M.A. Teter, T. Takatsuka, T. Tatsumi, R. Tamagaki, astro-ph/0204508 (2002). * (33) M.L. Quelle, N.V Giai, S. Marcos, L.N. Savushkin, Phys. Rev. C **61**, 064321 (2000). * (34) P. Bernardos, R.J. Lombardo, M.L. Quelle, S. Marcos, R. Niembro, Phys. Rev. C **62**, 024314 (2000).
The equation of state (EOS) of dense matter and neutrino mean free path (NMFP) in a neutron star have been studied by using relativistic mean field models motivated by effective field theory (ERMF). It is found that the models predict too large proton fractions, although one of the models (G2) predicts an acceptable EOS. This is caused by the isovector terms. Except G2, the other two models predict anomalous NMFP. In order to minimize the anomaly, besides an acceptable EOS, a large \\(M^{*}\\) is favorable. A model with large \\(M^{*}\\) retains the regularity in the NMFP even for a small neutron fraction.
How dry is the brown dwarf desert?: Quantifying the relative number of planets, brown dwarfs and stellar companions around nearby Sun-like stars Daniel Grether\\({}^{1}\\) & Charles H. Lineweaver\\({}^{1,2}\\) \\({}^{1}\\) Department of Astrophysics, School of Physics, University of New South Wales, Sydney, NSW 2052, Australia \\({}^{2}\\) Planetary Science Institute, Research School of Astronomy and Astrophysics & Research School of Earth Sciences, Australian National University, Canberra, ACT, Australia ## 1. Introduction The formation of a binary star via molecular cloud fragmentation and collapse, and the formation of a massive planet via accretion around a core in a protoplanetary disk both involve the production of a binary system, but are usually recognized as distinct processes (e.g. Heacox 1999; Kroupa & Bouvier 2003, see however Boss 2002). The formation of companion brown dwarfs, with masses in between the stellar and planetary mass ranges, may have elements of both or some new mechanism (Bate 2000; Rice _et al._ 2003; Jiang, Laughlin & Lin 2004). For the purposes of our analysis brown dwarfs can be conveniently defined as bodies massive enough to burn deuterium (\\(M\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}13\\,M_{Jup}\\)), but not massive enough to burn hydrogen (\\(M\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}80\\,M_{Jup}\\) e.g. Burrows 1997). Since fusion does not turn on in gravitationally collapsing fragments of a molecular cloud until the final masses of the fragments are largely in place, gravitational collapse, fragmentation and accretion should produce a spectrum of masses that does not know about these deuterium and hydrogen burning boundaries. Thus, these mass boundaries should not necessarily correspond to transitions in the mode of formation. The physics of gravitational collapse, fragmentation, accretion disk stability and the transfer of angular momentum, should be responsible for the relative abundances of objects of different masses, not fusion onset limits. However, there seems to be a brown dwarf desert - a deficit in the frequency of brown dwarf companions either relative to the frequency of less massive planetary companions (Marcy & Butler 2000) or relative to the frequency of more massive stellar companions to Sun-like hosts. The goal of this work is (i) to verify that this desert is not a selection effect due to our inablility to detect brown dwarfs and (ii) to quantify the brown dwarf desert more carefully with respect to both stars and planets. By selecting a single sample of nearby stars as potential hosts for all types of companions, we can better control selection effects and more accurately determine the relative number of companions more and less massive than brown dwarfs. Various models have been suggested for the formation of companion stars, brown dwarfs and planets (e.g. Larson 2003, Kroupa & Bouvier 2003, Bate 2000, Matzner & Levin 2004, Boss 2002, Rice _et al._ 2003). All models involve gravitational collapse and a mechanism for the transfer of energy and angular momentum away from the collapsing material. Observations of giant planets in close orbits have challenged the conventional view in which giant planets form beyond the ice zone and stay there (e.g. Udry 2003). Various types of migration have been proposed to meet this challenge. 
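For orientation, the deuterium- and hydrogen-burning limits quoted above translate into a simple classification rule for companion masses; the snippet below is purely illustrative and encodes only these two boundaries.

```python
# Purely illustrative encoding of the mass boundaries used in this paper:
# ~13 M_Jup (deuterium burning) and ~80 M_Jup (hydrogen burning).
M_SUN_IN_M_JUP = 1047.6          # approximate Jupiter masses per solar mass

def classify_companion(m2_mjup):
    """Classify a companion by its mass in Jupiter masses."""
    if m2_mjup < 13.0:
        return "planet"
    if m2_mjup < 80.0:
        return "brown dwarf"
    return "star"

for m2 in (1.0, 13.5, 50.0, 0.5 * M_SUN_IN_M_JUP):
    print(f"{m2:7.1f} M_Jup -> {classify_companion(m2)}")
```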
The most important factors in determining the result of the migration is the time of formation and mass of the secondary and its relation to the mass and time evolution of the disk (e.g. Armitage & Bonnell 2002). We may be able to constrain the above models by quantitative analysis of the brown dwarf desert. For example, if two distinct processes are responsible for the formation of stellar and planetary secondaries, we would expect well-defined slopes of the mass function in these mass ranges to meet in a sharp brown dwarf valley. We examine the mass, and period distributions for companion brown dwarfs and compare them with those of companion stars and planets. The work most similar to our analysis has been carried out by Heacox (1999); Zucker & Mazeh (2001b) and Mazeh _et al._ (2003). Heacox (1999) and Zucker & Mazeh (2001b) both combined the stellar sample of Duquennoy & Mayor (1991) along with the known substellar companions and identified different mass functions for the planetary mass regime below 10 \\(M_{Jup}\\) but found similar flat distributions in logarithmic mass for brown dwarf and stellar companions. Heacox (1999) found that the logarithmic mass function in the planetary regime is best fit by a power-law with a slightly negative slope whereas Zucker & Mazeh (2001b) found an approximately flat distribution. Mazeh _et al._ (2003) looked at a sample of main sequence stars using infrared spectroscopy and combined them with the known substellar companions and found that in log mass, the stellar companions reduce in number towards the brown dwarf mass range. They identify a flat distribution for planetary mass companions. We discuss the comparison of our results to these in Section 3.1. ## 2. Defining a Less Biased Sample of Companions ### Host Sample Selection Effects High precision Doppler surveys are monitoring Sun-like stars for planetary companions and are necessarily sensitive enough to detect brown dwarfs and stellar companions within the same range of orbital period. However, to compare the relative abundances of stellar, brown dwarf and planetary companions, we cannot select our potential hosts from a non-overlapping union of the FGK spectral type target stars of the longest running, high precision Doppler surveys that are being monitored for planets (Lineweaver & Grether, 2003). This is because Doppler survey target selection criteria often exclude close binaries (separation \\(<2\"\\)) from the target lists, and are not focused on detecting stellar companions. Some stars have also been left off the target lists because of high stellar chromospheric activity (Fischer _et al._, 1999). These surveys are biased against finding stellar mass companions. We correct for this bias by identifying the excluded targets and then including in our sample any stellar companions from other Doppler searches found in the literature. Our sample selection is illustrated in Fig. 1 and detailed in Table 1 (complete list in the electronic version only) for stars closer than 25 pc and Fig. 2 for stars closer than 50 pc. Most Doppler survey target stars come from the Hipparcos catalogue because host stars need to be both bright and have accurate masses for the Doppler method to be useful in determining the companion's mass. One could imagine that the Hipparcos catalogue would be biased in favor of binarity since hosts with bright close-orbiting stellar companions would be over-represented. Figure 1.— Our Close Sample. Hertzsprung-Russell diagram for Hipparcos stars closer than 25 pc. 
Small black dots are Hipparcos stars not being monitored for possible companions by one of the 8 high precision Doppler surveys considered here (Lineweaver & Grether, 2003). Larger blue dots are the subset of Hipparcos stars that are being monitored (“Target Stars”) but have as yet no known planetary companions. The still larger red dots are the subset of target stars hosting detected planets (“Planet Host Stars”) and the green dots are those hosts with larger mass (\\(M_{2}>13M_{Jup}\\)) companions (“Other Host Stars”). Only companions in our less-biased sample (\\(P<5\\) years and \\(M_{2}>10^{-3}M_{\\odot}\\)) are shown (see Section 2.2). Our Sun is shown as the black cross. The grey parallelogram is the region of \\(M_{u}\\) - (\\(B-V\\)) space that contains the highest fraction (as shown by the triangles) of Hipparcos stars that are being monitored for exoplanets. This Sun-like region – late F to early K type main sequence stars – contains our Hipparcos Sun-like Stars. The target fraction is to be as high as possible to minimize selection effects potentially associated with companion frequency. The target fraction is calculated from the number of main sequence stars, i.e., the number of stars in each bin between the two dashed lines. This plot contains 1509 Hipparcos stars, of which 627 are Doppler target stars. The Sun-like region contains 464 Hipparcos stars, of which 384 are target stars. Thus, the target fraction in the Sun-like grey parallelogram is \\(\\sim 83\\%(=384/464)\\). Figure 2.— Our Far Sample. Same as Fig. 1 but for all Hipparcos stars closer than 50 pc. The major reason the target fraction (\\(\\sim 61\\%\\), triangles) is lower than in the 25 pc sample (\\(\\sim 83\\%\\)) is that K stars become too faint to include in many of the high precision Doppler surveys where the apparent magnitude is limited to \\(V<7.5\\)(Lineweaver & Grether, 2003). This plot contains 6924 Hipparcos stars, of which 2351 are target stars. The grey parallelogram contains 3296 Hipparcos stars, of which 2001 are high precision Doppler target stars (\\(61\\%\\sim 2001/3296\\)). The stars below the main sequence and the stars to the right of the M dwarfs are largely due to uncertainties in the Hipparcos parallax or \\(B-V\\) determinations. We have checked for this over-representation by looking at the absolute magnitude dependence of the frequency of stellar binarity for systems closer than 25 and 50 pc (Fig. 3). We found no significant decrease in the fraction of binaries in the dimmer stellar systems for the 25 pc sample and only a small decrease in the 50 pc sample. Thus, the Hipparcos catalogue provides a good sample of potential hosts for our analysis, since it (i) contains the Doppler target lists as subsets (ii) is volume-limited for Sun-like stars out to \\(\\sim 25\\) pc (Reid 2002) and (iii) it allows us to identify and correct for stars and stellar systems that were excluded. We limit our selection to Sun-like stars (\\(0.5\\leq B-V\\leq 1.0\\)) or approximately those with a spectral type between F7 and K3. Following Udry (private communication) and the construction of the Coralie target list, we limit our analysis to main sequence stars, or those between -0.5 and +2.0 dex (below and above) an average main sequence value as defined by \\(5.4(B-V)+2.0\\leq M_{v}\\leq 5.4(B-V)-0.5\\). This sampled region, which we will call our \"Sun-like\" region of the HR diagram, is shown by the grey parallelograms in Figs. 1 & 2. 
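A minimal sketch of this colour and main-sequence cut is given below; the input arrays are hypothetical stand-ins for Hipparcos \\(B-V\\) colours and absolute magnitudes, and the function simply keeps stars with \\(0.5\\leq B-V\\leq 1.0\\) whose \\(M_{V}\\) lies within \\(-0.5/+2.0\\) mag of the mean main-sequence relation \\(M_{V}=5.4(B-V)\\), as described in the text.

```python
# Minimal sketch of the Sun-like main-sequence selection described above
# (hypothetical input arrays, not the authors' pipeline).
import numpy as np

def sun_like_main_sequence(b_minus_v, m_v):
    """Boolean mask for the grey parallelogram of Figs. 1 and 2."""
    b_minus_v = np.asarray(b_minus_v, dtype=float)
    m_v = np.asarray(m_v, dtype=float)
    colour_ok = (b_minus_v >= 0.5) & (b_minus_v <= 1.0)
    ms = 5.4 * b_minus_v                 # adopted mean main-sequence M_V
    magnitude_ok = (m_v >= ms - 0.5) & (m_v <= ms + 2.0)
    return colour_ok & magnitude_ok

# toy photometry: only the 2nd and 4th entries pass both cuts
bv = np.array([0.45, 0.65, 0.65, 0.90, 1.10])
mv = np.array([4.00, 3.80, 8.00, 5.20, 6.00])
print(sun_like_main_sequence(bv, mv))    # -> [False  True False  True False]
```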
The Hipparcos sample is essentially complete to an absolute visual magnitude of \\(M_{v}=8.5\\) (Reid 2002) within 25 pc of the Sun. Thus the stars in our 25 pc Sun-like sample represent a complete, volume-limited sample. In our sample we make corrections in companion frequency for stars that are not being targeted by Doppler surveys as well as corrections for mass and period companion detection selection effects (see Section 2.2). The result of these corrections is our less-biased distribution of companions to Sun-like stars within 25 pc. We also analyse a much larger sample of stars out to 50 pc to understand the effect of distance on target selection and companion detection. Although less complete, with respect to the relative number of companions of different masses, the results from the 50 pc sample are similar to the results from the 25 pc sample (Section 3). Stars in our Sun-like region are plotted as a function of distance in Fig. 4. Each histogram bin represents an equal volume spherical shell hence a sample complete in distance would produce a flat histogram. Also shown are the target stars, which are the subset of Hipparcos stars that are being monitored for planets by one of the 8 high precision Doppler surveys (Lineweaver & Grether 2003) analysed here. The triangles in Fig. 4 represent this number as a fraction of Hipparcos stars. Since nearly all of the high precision Doppler surveys have apparent magnitude limited target lists (often \\(V<7.5\\)), we investigate the effect this has on the total target fraction as a function of distance. The fraction of stars having an apparent magnitude \\(V\\) brighter (lower) than a given value are shown by the 5 dotted lines for \\(V<7.5\\) to \\(V<9.5\\). For a survey, magnitude limited to \\(V=7.5\\), 80% of the Sun-like Hipparcos stars will be observable between 0 pc and 25 pc. This rapidly drops to only 20% for stars between 48 and 50 pc. Thus the major reason why the target fraction drops with increasing distance is that the stars become too faint for the high precision Doppler surveys to monitor. The fact that the target fraction (triangles) lie near the \\(V<8.0\\) line indicates that on average \\(V\\sim 8.0\\) is the effective limiting magnitude of the targets monitored by the 8 combined high precision Doppler surveys. In Fig. 1, \\(80(=464-384)\\) or 17% of Hipparcos Sun-like stellar systems are not present in any of the Doppler target lists. The triangles in Fig. 1 indicate that the Figure 4.— Distance Dependence of Sample and Companions. Here we show the number of nearby Sun-like stars as a function of distance. Each histogram bin represents the stars in an equal volume spherical shell. Hence, a sample that is complete in distance out to 50 pc would produce a flat histogram (indicated by the horizontal dashed line). The lightest shade of grey represents Hipparcos Sun-like Stars out to 50 pc that fall within the parallelogram of Fig. 2 (“HSS”). The next darker shade of grey represents Hipparcos stars that are being monitored for planets using the high precision Doppler techniques (8 groups described in Lineweaver & Grether 2003). The triangles represent this number as a fraction of Hipparcos stars. This fraction needs to be as large as possible to minimize distance dependent selection effects in the target sample potentially associated with companion frequency. 
Also shown (darker grey) are the number of Hipparcos stars that have one or more companions in the mass range \\(10^{-3}<M/M_{\\odot}<1\\), and those that host planets (darkest grey). Only those companions in the less-biased sample, \\(P<5\\) years and \\(M_{2}>10^{-3}M_{\\odot}\\) are shown (Section 2.2). The fraction of stars having an apparent magnitude \\(V\\) brighter (lower) than a given value are shown by the 5 dotted lines for \\(V<7.5\\) to \\(V<9.5\\). Figure 3.— Fraction of stars that are known to be close (\\(P<5\\) years) Doppler binaries as a function of absolute magnitude. For the 25 pc Sun-like sample (large dots), \\(\\sim 11\\%\\) of stars are binaries and within the error bars, brighter stars do not appear to be significantly over-represented. If we include the extra stars to make the 50 pc Sun-like sample (small dots), the stellar binary fraction is lower and decreases as the systems get fainter. ones left out are spread more or less evenly in B-V space spanned by the grey parallelogram. Similarly in Fig. 2, \\(1295(=3296-2001)\\) or \\(39\\%\\) are not included in any Doppler target list, but the triangles show that more K stars compared to FG stars have not been selected, again pointing out that the lower K dwarf stellar brightness is the dominant reason for the lower target fraction, not an effect strongly biased with respect to one set of companions over another. In the Sun-like region of Fig. 1 we use the target number (384) as the mother population for planets and brown dwarfs and the Hipparcos number (464) as the mother population for stars. To achieve the same normalizations for planetary, brown dwarf and stellar companions we assume that the fraction of these 384 targets that have exoplanet or brown dwarf companions is representative of the fraction of the 464 Hipparcos stars that have exoplanet or brown dwarf companions. Thus we renormalize the planetary and brown dwarf companions which have the target sample as their mother population to the Hipparcos sample by \\(464/384=1.21\\) (\"renormalization\"). Since close-orbitting stellar companions are anti-correlated with close-orbitting sub-stellar companions and the 384 have been selected to exclude separations of \\(<2\\)\", the results from the sample of 384 may be a slight over-estimate of the relative frequency of substellar companions. However, this over-estimate will be less than \\(\\sim 11\\%\\) because this is the frequency of close-orbitting stellar secondaries. A non-overlapping sample of the 8 high precision Doppler surveys (Lineweaver & Grether 2003) is used as the exoplanet target list where the Elodie target list was kindly provided by C. Perrier (private communication) and additional information to construct the Coralie target list from the Hipparcos catalogue was obtained from S. Udry (private communication). The Keck and Lick target lists are those of Nidever _et al._ (2002), since \\(\\sim 7\\%\\) of the targets in Wright _et al._ (2004) have not been observed over the full 5 year baseline used in this analysis. For more details about the sample sizes, observational durations, selection criteria and sensitivities of the 8 surveys see Table 4 of Lineweaver & Grether (2003). ### Companion Detection and Selection Effects The companions to the above Sun-like sample of host stars have primarily been detected using the Doppler technique (but not exclusively high precision exoplanet Doppler surveys) with some of the stellar pairs also being detected as astrometric or visual binaries. 
Thus we need to consider the selection effects of the Doppler method in order to define a less-biased sample of companions (Lineweaver & Grether 2003). As a consequence of the exoplanet surveys' limited monitoring duration we only select those companions with an orbital period \\(P<5\\) years. To reduce the selection effect due to the Doppler sensitivity we also limit our less-biased sample to companions of mass \\(M_{2}>0.001M_{\\odot}\\). Fig. 5 shows all of the Doppler companions to the Sun-like 25 pc and 50 pc samples within the mass and period range considered here. Our less-biased companions are enclosed by the thick solid rectangle. Given a fixed number of targets, the \"Detected\" region should contain all companions that will be found for this region of mass-period space. The \"Being Detected\" region should contain some but not all companions that will be found in this region and the \"Not Detected\" region contains no companions since the current Doppler surveys are either not sensitive enough or have not been observing for a long enough duration. To avoid the incomplete \"Being Detected\" region we limit our sample of companions to \\(M_{2}>0.001M_{\\odot}\\). In Lineweaver _et al._ (2003) we describe a crude method for making a completeness correction for the lower right corner of the solid rectangle falling within the \"Being Detected\" region. The result for the \\(d<25\\) pc sample is a one planet correction to the lowest mass bin and for the \\(d<50\\) pc sample, a six planet correction to the lowest mass bin (see Table 2 - footnote b). Fig. 6 shows a projection of Fig. 5 onto the period axis. Planets are more clumped towards higher periods than are stellar companions. The Doppler planet detection method is not biased against short period planets. The Doppler stellar companion detections are not significantly biased for shorter periods or against longer periods in our samples analysis range (period \\(<5\\) years) since Doppler instruments of much lower precision than those used to detect exoplanets are able to detect any Doppler companions of stellar mass. Thus this represents a real difference in period distributions between stellar and planetary companions. The companions in Fig. 5 all have radial velocity (Doppler) solutions. Some of the companions also have additional photometric, interferometric, astrometric or visual solutions. The exoplanet Doppler orbits are taken from the Extrasolar Planets Catalog (Schneider 2005). Only the planet orbiting the star HIP 108859 (HD 209458) has an additional photomet \\begin{table} \\begin{tabular}{l c c c c c} \\hline \\hline \\multicolumn{1}{c}{ Hipparcos} & \\(B-V\\) & \\(M_{V}\\) & Distance & Exoplanet & Companion \\\\ Number & & & (pc) & Target & (\\(P<5\\) years) \\\\ & & & & & (\\(M>M_{Jup}\\)) \\\\ \\hline HIP 171 & 0.69 & 5.33 & 12.40 & Yes & \\\\ HIP 518 & 0.69 & 4.44 & 20.28 & No & Star \\\\ HIP 544 & 0.75 & 5.39 & 13.70 & Yes & \\\\ HIP 1031 & 0.78 & 5.68 & 20.33 & Yes & \\\\ HIP 1292 & 0.75 & 5.36 & 17.62 & Yes & Planet \\\\ \\hline \\end{tabular} Note. – Table 1 is published in its entirety in the electronic edition of the Astrophysical Journal. A portion is shown here for guidance regarding its form and content. \\end{table} Table 1Sun-like 25 pc Sampleric solution but this companion falls outside our less-biased region (\\(M_{2}<M_{Jup}\\)). 
For the stellar companion data, the single-lined (SB1) and double-lined (SB2) spectroscopic binary orbits are primarily from the Ninth Catalogue of Spectroscopic Binary Orbits (Pourbaix _et al._, 2004) with additional interferometric, astrometric or visual solutions from the 6th Catalog of Orbits of Visual Binary Stars (Washington Double Star Catalog, Hartkopf & Mason, 2004). Many additional SB1s come from Halbwachs _et al._ (2003). Stellar binaries and orbital solutions also come from Endl _et al._ (2004); Halbwachs _et al._ (2000); Mazeh _et al._ (2003); Tinney _et al._ (2001); Jones _et al._ (2002); Vogt _et al._ (2002); Zucker & Mazeh (2001a). We examine the inclination distribution for the 30 Doppler companions (\\(d<50\\) pc) with an astrometric or visual solution. We find that 24 of these 30 companions have a minimum mass larger than \\(80M_{Jup}\\) (Doppler stellar candidates) and that 6 of these 30 companions have a minimum mass between \\(13M_{Jup}\\) and \\(80M_{Jup}\\) (Doppler brown dwarf candidates). These 6 Doppler brown dwarf candidates are a subset of the 16 Doppler brown dwarf candidates in the far sample that have an astrometric orbit derived with a confidence level greater than 95% from Hipparcos measurements (Halbwachs _et al._, 2000; Zucker & Mazeh, 2001a) and are thus assumed to have an astrometric orbit. As shown in Fig. 7, the inclination distribution is approximately random for the 24 companions with a minimum mass in the stellar regime whereas it is biased towards low inclinations for the 6 companions in the brown dwarf regime. All 6 of the Doppler brown dwarf candidates with an astrometric determination of their inclination have a true mass in the stellar regime. This includes all 3 of the Doppler brown dwarf candidates that are companions to stars in our close sample (\\(d<25\\) pc) thus leaving an empty brown dwarf regime. Also shown in Fig. 7, is the distribution of the maximum values of \\(sin(i)\\) that would put the true masses of the remaining 10 Doppler brown dwarf candidates with unknown inclinations in the stellar regime. This distribution is substantially less-biased than the observed \\(sin(i)\\) distribution, strongly suggesting that the remaining 10 Doppler brown dwarf candidates will also have masses in the stellar regime. Thus astrometric corrections leave us with no solid candidates with masses in the brown dwarf regime from the 16 Doppler brown dwarf candidates in the far sample (\\(d<50\\) pc), consistent with the result obtained for the close sample. The size of the 25 pc and 50 pc samples, the extent to which they are being targeted for planets, and the number and types of companions found along with any associated corrections are summarised in Table 2. For Figure 5.— Brown Dwarf Desert in Mass and Period. Estimated companion mass \\(M_{2}\\) versus orbital period for the companions to Sun-like stars of our two samples: companions with hosts closer than 25 pc (large symbols) and those with hosts closer than 50 pc, excluding those closer than 25 pc (small symbols). The companions in the thick solid rectangle are defined by periods \\(P<5\\) years, and masses \\(10^{-3}<M_{2}\\lesssim M_{\\odot}\\), and form our less-biased sample of companions. The stellar (open circles), brown dwarf (grey circles) and planetary (filled circles) companions are separated by dashed lines at the hydrogen and deuterium burning onset masses of 80 \\(M_{Jup}\\) and 13 \\(M_{Jup}\\) respectively. 
This plot clearly shows the brown dwarf desert for the \\(P<5\\) year companions. Planets are more frequent at larger periods than at shorter periods (see Fig. 6). The “Detected”, “Being Detected” and “Not Detected” regions of the mass-period space show the extent to which the high precision Doppler method is currently able to find companions (Lineweaver & Grether, 2003). see Appendix for discussion of \\(M_{2}\\) mass estimates. Figure 6.— Projection of Fig. 5 onto the period axis for the 25 pc (dark grey) and 50 pc (light grey) samples. Planets are more clumped towards higher periods than are stellar companions. This would be a selection effect with no significance if the efficiency of finding short period stellar companions with the low precision Doppler technique used to find spectroscopic binaries, was much higher than the efficiency of finding exoplanets with high precision spectroscopy. Konacki _et al._ (2004) and Pont _et al._ (2004) conclude that the fact that the transit photometry method has found planets in sub 2.5 day periods (while the Doppler method has found none) is due to higher efficiency for small periods and many more target stars and thus that these two observations do not conflict. Thus there seems to be a real difference in the period distributions of stellar and planetary companions. the stars closer than 25 pc, 59 have companions in the less-biased region (rectangle circumscribed by thick line) of Fig. 5. Of these, 19 are exoplanets, 0 are brown dwarfs and 40 are of stellar mass. Of the stellar companions, 25 are SB1s and 15 are SB2s. For the stars closer than 50 pc, 198 have companions in the less-biased region. Of these, 54 are exoplanets, 1 is a brown dwarf and 143 are stars. Of the stellar companions, 90 are SB1s and 53 are SB2s. We find an asymmetry in the north/south declination distribution of the Sun-like stars with companions, probably due to undetected or unpublished stellar companions in the south. The number of hosts closer than 25 pc with planetary or brown dwarf companions are symmetric in north/south declination to within one sigma Poisson error bars, but because more follow up work has been done in the north, more of the hosts with stellar companions with orbital solutions are in the northern hemisphere (30) compared with the southern (10). A comparison of our northern sample of hosts with stellar companions to the similarly selected approximately complete sample of Halbwachs _et al._ (2003) indicates that our 25 pc northern sample of hosts with stellar companions is also approximately complete. Under this assumption, the number of stellar companions missing from the south can be estimated by making a minimal correction up to the one sigma error level below the expected number, based on the northern follow up results. Of the 464 Sun-like stars closer than 25 pc, 211 have a southern declination (Dec \\(<0^{\\circ}\\)) and 253 have a northern declination (Dec \\(\\geq 0^{\\circ}\\)) and thus \\(\\sim 25(25/211\\approx 30/253)\\) stars in the south should have a stellar companion when fully corrected or 20 if we make a minimal correction. Thus we estimate that we are missing at least \\(\\sim 10(=20-10)\\) stellar companions in the south, 7 of which have been detected by Jones _et al._ (2002) under the plausible assumption that the orbital periods of the companions detected by Jones _et al._ (2002) are less than 5 years. 
Although these 7 SB1 stellar companions detected by Jones _et al._ (2002) have as yet no published orbital solutions, we assume that the SB1 stellar companions detected by Jones _et al._ (2002) have \\(P<5\\) years since they have been observed as part of the high Doppler precision program at the Anglo-Australian Observatory (started in 1998) for a duration of less than 5 years before being announced. The additional estimated stellar companions are assumed to have the same mass distribution as the other stellar companions. We can similarly correct the declination asymmetry in the sample of Sun-like stars closer than 50 pc. We find that there should be, after a minimal correction, an additional 55 stars that are stellar companion hosts in the southern hemisphere. 14 of these 55 stellar companions are assumed to have been detected by Jones _et al._ (2002). An asymmetry found in the planetary companion fraction in the 50 pc sample due to the much larger number of stars being monitored less intensively for exoplanets in the south (\\(\\sim 2\\%=33/1525\\)) compared to the north (\\(\\sim 4\\%=21/476\\)) results in a correction of 19 planetary companions in the south. The results given in Table 3 are done both with and without the asymmetry corrections. Unlike the 25 pc sample for which we are confident that the small corrections made to the number of companions will result in a reliable estimate of a census, correcting the 50 pc sample for the large number of missing companions is less reliable. This is so because if it were complete, the 50 pc sample would have approximately 8 times the number of companions as the 25 pc sample, since the 50 pc sample has 8 times the volume of the 25 pc sample. However, the incomplete 50 pc sample has only \\(\\sim 7(=3296/464)\\) times the number of Hipparcos stars, \\(\\sim 5(=2001/384)\\) times as many exoplanet targets and \\(\\sim 3\\) times as many companions as the 25 pc sample. Thus rather than correcting both planetary and stellar companions by large amounts we show in Section 3 that the relative number and distribution of the observed planetary and stellar companions (plus a small completeness correction for the \"Being Detected\" region of 6 planets and an additional 14 probable stellar companions from Jones _et al._ (2002) - see Table 2) remains approximately unchanged when compared to the corrected companion distribution of the 25 pc sample. 
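For reference, the north/south correction arithmetic quoted above for the 25 pc sample can be written out compactly. The sketch below reads the "minimal correction" as one Poisson sigma below the expectation scaled from the northern hosts, which reproduces the numbers in the text (about 25 expected, 20 after the minimal correction, and at least about 10 missing).

```python
import math

# 25 pc Sun-like sample, numbers quoted in the text
n_south, n_north = 211, 253          # stars with Dec < 0 and Dec >= 0
hosts_north = 30                     # northern hosts with close stellar companions
hosts_south_observed = 10            # southern hosts with close stellar companions

expected_south = n_south * hosts_north / n_north             # ~25 if the north is complete
minimal_south = expected_south - math.sqrt(expected_south)   # one Poisson sigma below expectation
missing_south = minimal_south - hosts_south_observed         # ~10 companions unaccounted for

print(f"expected southern hosts : {expected_south:.0f}")
print(f"minimal corrected number: {minimal_south:.0f}")
print(f"missing (at least)      : {missing_south:.0f}")
```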
Analyses both with and without a correction for the north/south asymmetry produce similar results for the brown dwarf desert (Table \\begin{table} \\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \\hline Sample & Hipparcos & \\multicolumn{2}{c|}{Doppler} & \\multicolumn{4}{c|}{Companions} \\\\ \\cline{4-10} & \\multicolumn{2}{c|}{Number} & \\multicolumn{2}{c|}{Target} & \\multicolumn{2}{c|}{Total} & \\multicolumn{2}{c|}{Planets} & \\multicolumn{2}{c|}{BDs} & \\multicolumn{2}{c|}{Stars} \\\\ \\cline{4-10} & & \\multicolumn{2}{c|}{Number} & \\(\\%\\)1 & & & \\multicolumn{2}{c|}{Total} & \\multicolumn{2}{c|}{SB1} & \\multicolumn{2}{c|}{SB2} \\\\ \\hline \\(d<25\\) pc & 1509 & 627 & 42\\% & - & 22 & - & - & - & - & - \\\\ Sun-like & 464 & 384 & 83\\% & 59 (+15)2 & 19 (+18,+4)3 & 0 & 40 (+74,+3) & 15 (8)4 \\\\ Dec \\(<0^{\\circ}\\) & 211 & 211 & 100\\% & 20 (+10)2 & 10 & 0 & 10 (+74,+3) & 8 (3) & 2 (1)4 \\\\ Dec \\(>0^{\\circ}\\) & 253 & 173 & 68\\% & 39 & 9 & 0 & 30 & 17 (6)4 & 13 (17)5 \\\\ \\hline \\(d<50\\) pc & 6924 & 2351 & 34\\% & - & 58 & - & - & - & - \\\\ Sun-like & 3296 & 2001 & 61\\% & 198 (+80)2 & 54 (+6)3 & 19 & 143 (+144,+41) & 90 (18)5 & 53 (12)6 \\\\ Dec \\(<0^{\\circ}\\) & 1647 & 1525 & 93\\% & 72 (+74)2 & 33 (+19)3 & 0 3 (39 +144,+41) & 27 (7)4 & 12 (2)5 \\\\ Dec \\(>0^{\\circ}\\) & 1649 & 476 & 29\\% & 126 & 21 & 104 & 63 (11)4 & 44 (10)5 \\\\ \\hline \\end{tabular} \\end{table} Table 2Hipparcos Sample, Doppler Targets and Detected Companions for Near and Far Samples3). ## 3. Companion Mass Function The close companion mass function to Sun-like stars clearly shows a brown dwarf desert for both the 25 pc (Fig. 8) and the 50 pc (Fig. 9) samples. The numbers of both the planetary and stellar mass companions decrease toward the brown dwarf mass range. Both plots contain the detected Doppler companions, shown as the grey histogram, within our less-biased sample of companions (\\(P<5\\) years and \\(M_{2}>10^{-3}M_{\\odot}\\), see Section 2.2). The hatched histograms at large mass show the subset of the stellar companions that are not included in any of the exoplanet Doppler surveys. A large bias against stellar companions would have been present if we had only included companions found by the exoplanet surveys. For multiple companion systems, we select the most massive companion in our less biased sample to represent the system. We put the few companions (3 in the 25 pc sample, 6 in the 50 pc sample) that have a mass slightly larger than \\(1\\,M_{\\odot}\\) in the largest mass bin in the companion mass distributions. Fitting straight lines using a weighted least squares method to the 3 bins on the left-hand side (LHS) and right-hand side (RHS) of the brown dwarf region of the mass histograms (Figs. 8 & 9), gives us gradients of \\(-15.2\\pm 5.6\\) (LHS) and \\(22.0\\pm 8.8\\) (RHS) for the 25 pc sample and \\(-9.1\\pm 2.9\\) (LHS) and \\(24.1\\pm 4.7\\) (RHS) for the 50 pc sample. Since the slopes have opposite signs, they form a valley which is the brown dwarf desert. The presence of a valley between the negative and positive sloped lines is significant at more than the 3 sigma level. The ratio of the corrected number of companions in the less-biased sample on the LHS to the RHS along with their poisson error bars is \\((24\\pm 9)/(50\\pm 13)=0.48\\pm 0.22\\) with no companions in the middle 2 bins for the 25 pc sample. For the larger 50 pc sample the corrected less-biased LHS/RHS ratio is \\((60\\pm 14)/(157\\pm 22)=0.38\\pm 0.10\\), with 1 brown dwarf companion in the middle 2 bins. 
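The fitting procedure behind these slopes and ratios can be sketched as follows. The bin centres and counts below are hypothetical stand-ins rather than the actual histogram values of Figs. 8 and 9, Poisson errors of \\(\\sqrt{N}\\) are assumed as weights, and the crossing point of the two weighted least-squares lines is taken as the location of the desert minimum.

```python
import numpy as np

def wls_line(x, y, sigma):
    """Weighted least-squares fit of y = a + b*x; returns (a, b)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(sigma, float)**2
    W, Sx, Sy = w.sum(), (w*x).sum(), (w*y).sum()
    Sxx, Sxy = (w*x*x).sum(), (w*x*y).sum()
    d = W*Sxx - Sx**2
    return (Sxx*Sy - Sx*Sxy)/d, (W*Sxy - Sx*Sy)/d

# Hypothetical bin centres [log10(M/Msun)] and companion counts, for illustration only
x_lhs, n_lhs = [-2.75, -2.45, -2.15], [12.0, 8.0, 4.0]     # planetary side (3 LHS bins)
x_rhs, n_rhs = [-0.85, -0.55, -0.25], [6.0, 14.0, 20.0]    # stellar side (3 RHS bins)

a1, b1 = wls_line(x_lhs, n_lhs, np.sqrt(n_lhs))            # Poisson errors sqrt(N)
a2, b2 = wls_line(x_rhs, n_rhs, np.sqrt(n_rhs))

x_cross = (a2 - a1) / (b1 - b2)              # where the two fitted lines intersect
m_cross = 10**x_cross * 1047.6               # convert to Jupiter masses (1 Msun ~ 1047.6 MJup)
print(f"LHS slope {b1:+.1f}, RHS slope {b2:+.1f}, lines cross near {m_cross:.0f} MJup")
```

With these stand-in counts the planetary side slopes down, the stellar side slopes up, and the crossing lands at a few tens of Jupiter masses, qualitatively reproducing the behaviour described above.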
Thus the LHS and RHS slopes agree to within about 1 sigma and so do the LHS/RHS ratios, indicating that the companion mass distribution for the larger 50 pc sample is not significantly different from the more complete 25 pc sample and that the relative fraction of planetary, brown dwarf and stellar companions is approximately the same. A comparison of the relative number of companions in each bin in Fig. 8 with its corresponding bin in Fig. 9 produces a best-fit of \\(\\chi^{2}=1.9\\). To find the driest part of the desert, we fit separate straight lines to the 3 bins on either side of the brown dwarf desert (solid lines) in Figs. 8 & 9. The deepest part of the valley where the straight lines cross beneath the abscissa is at \\(M=31\\,^{+25}_{-18}M_{Jup}\\) and \\(M=43\\,^{+14}_{-23}M_{Jup}\\) for the 25 and 50 pc samples respectively. These results are summarized in Table 3. The driest part of the desert is virtually the same for both samples even though we see a bias in the stellar binarity fraction of the 50 pc sample (Fig. 3). We have done the analysis with and without the minimal declination asymmetry correction. The position of the brown dwarf minimum and the slopes are robust to this correction (see Table 3). The smaller 25 pc Sun-like sample contains 464 stars with \\(16.0\\%\\pm 5.2\\%\\) of these having companions in our corrected less-biased sample. Of these \\(\\sim 16\\%\\) with companions, \\(5.2\\%\\pm 1.9\\%\\) are of planetary mass and \\(10.8\\%\\pm 2.9\\%\\) are of stellar mass. None is of brown dwarf mass. This agrees with previous estimates of stellar binarity such as that found by Halbwachs _et al._ (2003) of 14% for a sample of G-dwarf companions with a slightly larger period range (\\(P<10\\) years). The planet fraction agrees with the fraction \\(4\\%\\pm 1\\%\\) found in Lineweaver & Grether (2003) when most of the known exoplanets are considered. The 50 pc sample has a large incompleteness due to the lower fraction of monitored stars (Fig. 4) but as shown above, the relative number of companion planets, brown dwarfs and stars is approximately the same as for the 25 pc sample. The 50 pc sample has a total companion fraction of \\(15.6\\%\\pm 2.8\\%\\), where \\(4.3\\%\\pm 1.0\\%\\) of the companions are of planetary mass, \\(0.1^{+0.2}_{-0.1}\\%\\) are of brown dwarf mass and \\(11.2\\%\\pm 1.6\\%\\) are of stellar mass. Table 4 summarizes these companion fractions. Surveys of the multiplicity of nearby Sun-like stars yield the relative numbers of single, double and multiple Figure 7.— Astrometric inclination distribution for close companions (\\(d<50\\) pc) with a minimum mass larger than \\(80M_{Jup}\\) (Doppler stellar candidates - TOP) and between \\(13M_{Jup}\\) and \\(80M_{Jup}\\) (Doppler brown dwarf candidates - BOTOM). There are 24 companions with astrometric solutions and a minimum mass in the stellar regime. The inclination distribution is approximately random for companions with a minimum mass in the stellar regime whereas it is biased towards low inclinations for companions in the brown dwarf regime. All 6 astrometric determinations of \\(sin(i)\\) for brown dwarf candidates put their true mass in the stellar regime. Also shown is the distribution of the maximum values of \\(sin(i)\\) that would place the true masses of the remaining 10 brown dwarf candidates without astrometric or visual solutions in the stellar regime. A distribution less-biased than the observed \\(sin(i)\\) distribution would be required. 
This strongly suggests that the 10 candidates without astrometric or visual solutions will also have masses in the stellar regime. Therefore, astrometric corrections leave us with no solid candidates with masses in the brown dwarf region. Two weak brown dwarf candidates are worth mentioning. HD 114762 has a minimum mass below \\(13M_{Jup}\\). However, to convert minimum mass to mass, we have assumed random inclinations and have used \\(<sin(i)>\\approx 0.785\\). This conversion puts the estimated mass of HD 114762 in the brown dwarf regime (\\(M\\;\\lower 2.15pt\\hbox{$\\buildrel>\\over{\\sim}$}\\;13M_{Jup}\\)). In Fig. 5, this is the only companion lying in the brown dwarf regime. Another weak brown dwarf candidate is the only candidate that requires a \\(sin(i)<0.2\\) to place its mass in the stellar regime. star systems. According to Duquennoy & Mayor (1991), 51% of star systems are single stars, 40% are double star systems, 7% are triple and 2% are quadruple or more. Of the 49%(\\(=40+7+2\\)) which are stellar binaries or multiple star systems, 11% have stellar companions with periods less than 5 years and thus we can infer that the remaining 38% have stellar companions with \\(P>5\\) years. Among the 51% without stellar companions, we find that \\(\\sim 5\\%\\) have close (\\(P<5\\) years) planetary companions with \\(1<M/M_{Jup}<13\\), while \\(<1\\%\\) have close brown dwarfs companions. The Doppler method should preferentially find planets around lower mass stars where a greater radial velocity is induced. This is the opposite of what is observed as shown in Figs. 10 and 11 where we split the 25 and 50 pc samples respectively into companions to hosts with masses above and below \\(1\\:M_{\\odot}\\). We scale these smaller samples to the size of the full 25 and 50 pc samples (Figs. 8 and 9 respectively). The Doppler technique is also a function of \\(B-V\\) color (Saar _et al._ 1998) with the level of systematic errors in the radial velocity measurements, decreasing as we move from high mass to low mass (\\(B-V=0.5\\) to \\(B-V=1.0\\)) through our two samples, peaking for late K spectral type stars before increasing for the lowest mass M type stars again. Hence again finding planets around the lower mass stars (early K spectral type) in our sample should be easier. ### Comparison with Other Results Although there are some similarities, the companion mass function found by Heacox (1999); Zucker & Mazeh (2001b); Mazeh _et al._ (2003) is different from that shown in Figs. 8 & 9. Our approach was to normalize the companion numbers to a well-defined sub-sample of Hipparcos stars whereas these authors use two different samples of stars, one to find the planetary companion mass function and another to find the stellar companion mass function, which are then normalized to each other. The different host star properties and levels of completeness of the two samples may make this method more prone than our method, to biases in the frequencies of companions. Both Heacox (1999) and Zucker & Mazeh (2001b) combined the companions of the stellar mass sample of Duquennoy & Mayor (1991) with the known substellar companions, but identified different mass functions for the planetary mass regime below 10 \\(M_{Jup}\\) and similar flat distributions in logarithmic mass for brown dwarf and stellar mass companions. Heacox (1999) found that the logarithmic mass function in the planetary regime is Figure 8.— Brown Dwarf Desert in Close Sample. 
Histogram of the companions to Sun-like stars closer than 25 pc plotted against mass. The grey histogram is made up of Doppler detected companions in our less-biased (\\(P<5\\) years and \\(M_{2}>10^{-3}M_{\\odot}\\)) sample. The corrected version of this less-biased sample includes an extra 7 probable SB1 stars from (Jones _et al._ 2002) (Table 2 - footnote d) and an extra 3 stars from an asymmetry in the host declination distribution (Table 2 - footnote e). The planetary mass companions are also renormalized to account for the small number of Hipparcos Sun-like stars that are not being Doppler monitored (21% renormalization, Table 2 - footnote e) and a 1 planet correction for the undersampling of the lowest mass bin due to the overlap with the “Being Detected” region (Table 2 – footnote b). The hatched histogram is the subset of detected companions to hosts that are not included on any of the exoplanet search target lists and hence shows the extent to which the exoplanet target lists are biased against the detection of stellar companions. Since instruments with a radial velocity sensitivity \\(K_{S}\\leq 40\\) m/s (see Eq. 2 of Appendix) were used for all the companions, we expect no other substantial biases to affect the relative amplitudes of the stellar companions on the right-hand side (RHS) and the planetary companions on the left-hand side (LHS). The brown dwarf mass range is empty. Figure 9.— Same as Fig. 8 but for the larger 50 pc sample renormalized to the size of the 25 pc sample. Fitting straight lines using a weighted least squares fit to the 3 bins on the LHS and RHS, gives us gradients of \\(-9.1\\pm 2.9\\) and \\(24.1\\pm 4.7\\) respectively (solid lines). Hence the brown dwarf desert is significant at more than the 3 sigma level. These LHS and RHS slopes agree to within about 1 sigma of those in Fig. 8. The ratio of the number of companions on the LHS to the RHS is also about the same for both samples. Hence the relative number and distribution of companions is approximately the same as in Fig. 8. The separate straight line fits to the 3 bins on the LHS and RHS intersect at \\(M=43^{+14}_{-23}M_{Jup}\\) beneath the abscissa. Approximately 16% of the stars have companions in our less-biased region. Of these, \\(4.3\\%\\pm 1.0\\%\\) have companions of planetary mass, \\(0.1^{+0.20}_{-0.1}\\%\\) have brown dwarf companions and \\(11.2\\%\\pm 1.6\\%\\) have companions of stellar mass. We renormalize the mass distribution in this figure by comparing each bin in this figure with its corresponding bin in Fig. 8 and scaling the vertical axis of Fig. 9 so that the difference in height between the bins is on average a minimum. We find that the optimum renormalization factor is 0.33. This plot does not include the asymmetry correction for the planetary and stellar companions discussed in Section 2.2 and shown in Table 2. best fit by a power-law (\\(dN/dlogM\\propto M^{\\Gamma}\\)) with index \\(\\Gamma\\) between 0 and -1 whereas Zucker & Mazeh (2001b) find an approximately flat distribution (power-law with index 0). Our work here and in Lineweaver & Grether (2003) suggests that neither the stellar nor the planetary companion distributions are flat (\\(\\Gamma=-0.7\\)). Rather, they both slope down towards the brown dwarf desert, more in agreement with the results of Heacox (1999). 
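One natural reading of the renormalization described in the caption of Fig. 9, scaling the 50 pc histogram so that its bin-height differences with respect to Fig. 8 are on average smallest, is a least-squares scale factor; a minimal sketch with hypothetical per-bin counts:

```python
import numpy as np

def best_scale(counts_25pc, counts_50pc):
    """Scale factor s minimising sum_i (counts_25pc[i] - s * counts_50pc[i])**2."""
    a = np.asarray(counts_25pc, float)
    b = np.asarray(counts_50pc, float)
    return float((a * b).sum() / (b * b).sum())

# Hypothetical per-bin companion counts (planet bins, empty desert bins, stellar bins)
counts_25 = [12, 8, 4, 0, 0, 6, 14, 20]
counts_50 = [40, 25, 12, 0, 1, 18, 45, 60]
print(f"optimum renormalization factor ~ {best_scale(counts_25, counts_50):.2f}")
```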
The work most similar to ours is probably (Mazeh _et al._, 2003) who looked at a sample of main sequence stars with primaries in the range \\(0.6-0.85\\,M_{\\odot}\\) and \\(P<3000\\) days using infrared spectroscopy and \\begin{table} \\begin{tabular}{|l|c|c|c|c|c|c|} \\hline Sample & Asymmetry & Figure & Total \\% & Planetary \\% & Brown Dwarf \\% & Stellar \\% \\\\ & Correction & & & & & & \\\\ & & & & & & \\\\ \\hline \\(d<25\\) pc & Yes & 8 & \\(16.0\\pm 5.2\\) & \\(5.2\\pm 1.9\\) & \\(0.0^{+0.4}_{-0.0}\\) & \\(10.8\\pm 2.9\\) \\\\ \\(d<25\\) pc & No & \\(15.3\\pm 5.0\\) & \\(5.2\\pm 1.9\\) & \\(0.0^{+0.0}_{-0.0}\\) & \\(10.1\\pm 2.7\\) \\\\ \\hline \\(d<50\\) pc & Yes & \\(15.6\\pm 2.8\\) & \\(4.4\\pm 1.0\\) & \\(0.1^{+0.2}_{-0.2}\\) & \\(11.1\\pm 1.6\\) \\\\ \\(d<50\\) pc & No & 9 & \\(15.6\\pm 2.8\\) & \\(4.3\\pm 1.0\\) & \\(0.1^{+0.2}_{-0.1}\\) & \\(11.2\\pm 1.6\\) \\\\ \\hline \\(d<25\\) pc \\& \\(M_{1}<1M_{\\odot}\\) & Yes & 10 & \\(16.0\\pm 5.8\\) & \\(4.2\\pm 1.9\\) & \\(0.0^{+0.1}_{-0.0}\\) & \\(11.8\\pm 3.5\\) \\\\ \\(d<50\\) pc \\& \\(M_{1}<1M_{\\odot}\\) & No & 11 & \\(15.6\\pm 6.0\\) & \\(2.6\\pm 1.7\\) & \\(0.2^{+0.4}_{-0.2}\\) & \\(12.8\\pm 3.9\\) \\\\ \\hline \\(d<25\\) pc \\& \\(M_{1}\\geq 1M_{\\odot}\\) & Yes & 10 & \\(16.0\\pm 7.0\\) & \\(6.6\\pm 3.1\\) & \\(0.0^{+0.0}_{-0.0}\\) & \\(9.4\\pm 3.5\\) \\\\ \\(d<50\\) pc \\& \\(M_{1}\\geq 1M_{\\odot}\\) & No & 11 & \\(15.6\\pm 6.7\\) & \\(6.2\\pm 2.9\\) & \\(0.0^{+0.4}_{-0.0}\\) & \\(9.4\\pm 3.4\\) \\\\ \\hline \\end{tabular} \\end{table} Table 4Companion Fraction Comparison Figure 11.— Same as Fig. 9 but for the 50 pc sample split into companions to lower mass hosts (\\(M_{1}<1M_{\\odot}\\)) and companions to higher mass hosts (\\(M_{1}\\geq 1M_{\\odot}\\)). Both samples are scaled such that they contain the same number of companions as the corrected less-biased 50 pc sample of Fig. 9. Also shown are the linear best-fits to the planetary and stellar companions of the two populations. \\begin{table} \\begin{tabular}{|l|c|c|c|c|c|} \\hline Sample & Asymmetry & Figure & LHS slope & RHS slope & Slope Minima\\({}^{a}\\) \\\\ & Correction & & & & [\\(M_{Jup}\\)] \\\\ \\hline \\(d<25\\) pc & Yes & 8 & \\(-15.2\\pm 5.6\\) & \\(22.0\\pm 8.8\\) & \\(31^{+18}_{-17}\\) \\\\ \\(d<25\\) pc & No & & \\(-15.2\\pm 5.6\\) & \\(20.7\\pm 8.5\\) & \\(30^{+25}_{-17}\\) \\\\ \\hline \\(d<50\\) pc & Yes & & \\(-9.4\\pm 3.0\\) & \\(24.3\\pm 4.6\\) & \\(44^{+25}_{-17}\\) \\\\ \\(d<50\\) pc & No & 9 & \\(-9.1\\pm 2.9\\) & \\(24.1\\pm 4.7\\) & \\(43^{+24}_{-29}\\) \\\\ \\hline \\(d<25\\) pc \\& \\(M_{1}<1M_{\\odot}\\) & Yes & 10 & \\(-17.5\\pm 5.4\\) & \\(19.4\\pm 10.7\\) & \\(18^{+17}_{-23}\\) \\\\ \\(d<50\\) pc \\& \\(M_{1}<1M_{\\odot}\\) & No & 11 & \\(-5.9\\pm 5.1\\) & \\(25.2\\pm 11.4\\) & \\(39^{+9}_{-23}\\) \\\\ \\hline \\(d<25\\) pc \\& \\(M_{1}\\geq 1M_{\\odot}\\) & Yes & 10 & \\(-12.4\\pm 9.2\\) & \\(20.0\\pm 10.9\\) & \\(50^{+28}_{-26}\\) \\\\ \\(d<50\\) pc \\& \\(M_{1}\\geq 1M_{\\odot}\\) & No & 11 & \\(-12.2\\pm 8.2\\) & \\(21.1\\pm 10.4\\) & \\(45^{+21}_{-21}\\) \\\\ \\hline \\end{tabular} \\({}^{a}\\) values of mass where the best-fitting lines, to the LHS and RHS, intersect. The errors given are from the range between the two intersections with the abscissa. \\end{table} Table 3Companion Slopes and Companion Desert Mass Minima Figure 10.— Same as Fig. 8 but for the 25 pc sample split into companions to lower mass hosts (\\(M_{1}<1M_{\\odot}\\)) and companions to higher mass hosts (\\(M_{1}\\geq 1M_{\\odot}\\)). 
The lower mass hosts have 4.2% planetary, 0.0% brown dwarf and 11.8% stellar companions. The higher mass hosts have 6.6% planetary, 0.0% brown dwarf and 9.4% stellar companions. The Doppler method should preferentially find planets around lower mass stars where a greater radial velocity is induced. This is the opposite of what we observe. To aid comparison, both samples are scaled such that they contain the same number of companions as the full corrected less-biased 25 pc sample of Fig. 8. combined them with the known substellar companions of these main sequence stars and found that in logarithmic mass the stellar companions reduce in number towards the brown dwarf mass range. This agrees with our results for the shape of the stellar mass companion function. However, they identify a flat distribution for the planetary mass companions in contrast to our non-zero slope (see Table 3). Mazeh _et al._ (2003) found the frequency of stellar and planetary companions (\\(M_{2}>1\\,M_{Jup}\\)) to be 15% (for stars below \\(0.7\\,M_{\\odot}\\)) and 3% respectively. This compares with our estimates of 8% (for stars below \\(0.7\\,M_{\\odot}\\)) and 5%. The larger period range used by Mazeh _et al._ (2003) can account for the difference in stellar companion fractions. ## 4. Comparison with the Initial Mass Function Brown dwarfs found as free-floating objects in the solar neighbourhood and as members of young star clusters have been used to extend the initial mass function (IMF) well into the brown dwarf regime. Comparing the mass function of our sample of close-orbiting companions of Sun-like stars to the IMF of single stars indicates how the environment of a host affects stellar and brown dwarf formation and/or migration. Here we quantify how different the companion mass function is from the IMF (Halbwachs _et al._, 2000). The galactic IMF appears to be remarkably universal and independent of environment and metallicity with the possible exception of the substellar mass regime. A weak empirical trend with metallicity is suggested for very low mass stars and brown dwarfs where more metal rich environments may be producing relatively more low mass objects (Kroupa, 2002). This is consistent with an extrapolation up in mass from the trend found in exoplanet hosts. The IMF is often represented as a power-law, although this only appears to be accurate for stars with masses above \\(\\sim 1M_{\\odot}\\)(Hillenbrand, 2003). The stellar IMF slope gets flatter towards lower masses and extends smoothly and continously into the substellar mass regime where it appears to turn over. Free floating brown dwarfs may be formed either as ejected stellar embryos or from low mass protostellar cores that have lost their accretion envelopes due to photo-evaporation from the chance proximity of a nearby massive star (Kroupa & Bouvier, 2003). This hypothesis may explain their occurence in relatively rich star clusters such as the Orion Nebula cluster and their virtual absence in pre-main sequence stellar groups such as Figure 12.— The mass function of companions to Sun-like stars (lower left) compared to the initial mass function (IMF) of cluster stars (upper right). Our mass function of the companions to Sun-like stars is shown by the green dots (biger dots are the \\(d<25\\) pc sample, smaller dots are the \\(d<50\\) pc sample). The linear slopes we fit to the data in Fig. 8 are also shown along with their error. 
Data for the number of stars and brown dwarfs in the Orion Nebula Cluster (ONC) (circles), Pleiades cluster (triangles) and M35 cluster (squares) come from Hillenbrand & Carpenter (2000); Slesnick _et al._ (2004), Moraux _et al._ (2003) and Barrado y Navascues _et al._ (2001) respectively and are normalized such that they overlap for masses larger than \\(1M_{\\odot}\\) where a single power-law slope applies. The absolute normalization of cluster stars is arbitrary, while the companion mass function is normalized to the IMF of the cluster stars by scaling the three companion points of stellar mass to be on average \\(\\sim 7\\%\\) for \\(P<5\\) years (derived from the stellar multiplicity of Duquennoy & Mayor (1991) discussed in Section 3, combined with our estimate that 11% of Sun-like stars have stellar secondaries). The average power-law IMF derived from various values of the slope of the IMF quoted in the literature (Hillenbrand, 2003) is shown as larger red dots along with two thin red lines showing the root-mean-square error. If the turn down in the number of brown dwarfs of the IMF is due to a selection effect because it is hard to detect brown dwarfs, then the two distributions are even more different from each other. For clarity the smaller green dots are shifted slightly to the right. Figure 13.— The initial mass function (IMF) for clusters represented by a series of power-law slopes (Hillenbrand, 2003). Each point represents the power-law slope claimed to apply within the mass range indicated by the horizontal lines. Although the IMF is represented by a series of power-laws, the IMF is not a power-law for masses less than \\(1M_{\\odot}\\) where the slope continually changes. The green dots show the slope of the companion mass function to Sun-like stars between the bins of Figs. 8 & 9 with the larger and smaller dots respectively. The linear fits to the data in Fig. 8 and their associated error are shown by the curves inside the grey regions. The power-law fit of Lineweaver & Grether (2003) (shown as the green dot with a horizontal line indicating the range over which the slope applies) is consistent with these fits. The larger red dots with error bars represent the average power-law IMF with a root-mean-square error. \\(\\Gamma\\) and \\(-\\alpha\\) are the respective logarithmic and linear slopes of the mass function. The logarithmic mass power-law distribution is \\(dN/dlogM\\propto M^{\\Gamma}\\) and the linear mass power-law distribution is \\(dN/dM\\propto M^{-\\alpha}\\) where \\(\\Gamma=1-\\alpha\\). The errors on the fits of Fig. 8 get smaller at \\(M\\sim 10^{-3}\\,M_{\\odot}\\) and \\(M\\sim 1\\,M_{\\odot}\\) since as \\(log(M/M_{\\odot})\\) tends to \\(\\pm\\infty\\), \\(\\Gamma\\) tends to 0. This can also be seen in Fig. 12 where the slopes of the upper and lower contours become increasingly similar. Taurus-Auriga. In Figs. 12 & 13 we compare the mass function of companions to Sun-like stars with the IMF of cluster stars. The mass function for companions to Sun-like stars is shown by the green dots from Figs. 8 and 9 (bigger dots are the \\(d<25\\) pc sample and smaller dots are the \\(d<50\\) pc sample). The linear slopes from Fig. 8 and their one sigma confidence region are also shown. Between \\(log(M/M_{\\odot})\\approx-1.0\\) and \\(-0.5\\) (\\(0.1M_{\\odot}<M<0.3M_{\\odot}\\)) the slopes are similar. However, above \\(0.3M_{\\odot}\\) and below \\(0.1M_{\\odot}\\) the slopes become inconsistent. 
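For completeness, the relation \\(\\Gamma=1-\\alpha\\) quoted in the caption of Fig. 13 follows from a one-line change of variables between the logarithmic and linear mass functions:

```latex
% dN/dlogM and dN/dM differ by the Jacobian dM/d(log M) = M ln(10):
\frac{dN}{d\log M} = \ln(10)\, M\, \frac{dN}{dM}
                   \propto M \cdot M^{-\alpha} = M^{1-\alpha}
\qquad\Longrightarrow\qquad \Gamma = 1 - \alpha .
```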
Above \\(0.3M_{\\odot}\\) the slopes, while of similar magnitude, are of opposite sign, and below \\(0.1M_{\\odot}\\) the companion slope is much steeper than the IMF slope. The IMF for young clusters (yellow dots) is statistically indistinguishable from that of older stars (blue dots) and follows the average IMF.

## 5. Summary and Discussion

We analyse the close-orbiting (\\(P<5\\) years) planetary, brown dwarf and stellar companions to Sun-like stars to help constrain their formation and migration scenarios. We use the same sample to extract the relative numbers of planetary, brown dwarf and stellar companions and verify the existence of a brown dwarf desert. Both planetary and stellar companions reduce in number towards the brown dwarf mass range. We fit the companion mass function over the range that we analyse (\\(0.001<M/M_{\\odot}\\lesssim 1.0\\)) with two straight lines fit separately to the planetary and stellar data points. The straight lines intersect in the brown dwarf regime, at \\(M=31\\,^{+25}_{-18}\\,M_{Jup}\\). This result is robust to the declination asymmetry correction (Table 3). The period distribution of close-orbiting (\\(P<5\\) years) companion stars is different from that of the planetary companions. The close-in stellar companions are fairly evenly distributed over \\(logP\\), with planets tending to be clumped towards higher periods. We compare the companion mass function to the IMF for bodies in the brown dwarf and stellar regime. We find that starting at \\(1\\,M_{\\odot}\\) and decreasing in mass, stellar companions continue to reduce in number into the brown dwarf regime, while cluster stars increase in number before reaching a maximum just before the brown dwarf regime (Fig. 13). This leads to a difference of at least 1.5 orders of magnitude between the number of brown dwarfs found in clusters and the number found as close-orbiting companions to Sun-like stars. The period distribution of close-orbiting companions may be more a result of post-formation migration and gravitational jostling than representative of the relative number of companions that are formed at a specific distance from their hosts. The companion mass distribution is more fundamental than the period distribution and should provide better constraints on formation models, but we are only able to sample the mass distribution for \\(P<5\\) years. We show in Figs. 10 and 11 that lower mass hosts have more stellar companions and fewer giant planet companions, while higher mass hosts have fewer stellar companions but more giant planet companions. The brown dwarf desert is generally thought to exist at close separations \\(\\lesssim 3\\) AU (or equivalently \\(P\\leq 5\\) years) (Marcy & Butler 2000) but may disappear at wider separations. Gizis _et al._ (2001) suggest that at very large separations (\\(>1000\\) AU) brown dwarf companions may be more common. However, McCarthy & Zuckerman (2004), in their observation of 280 GKM stars, find only 1 brown dwarf between 75 and 1200 AU. Gizis _et al._ (2003) report that \\(15\\%\\pm 5\\%\\) of M/L dwarfs are brown dwarf binaries with separations in the range \\(1.6-16\\) AU. This falls to \\(5\\%\\pm 3\\%\\) of M/L dwarfs with separations less than 1.6 AU and none with separations greater than 16 AU.
This differs greatly from the brown dwarfs orbiting Sun-like stars but is consistent with our host/minimum-companion-mass relationship, i.e., we expect no short-period brown dwarf desert around M or L type stars. Three systems containing both a companion with a minimum mass in the planetary regime and a companion with a minimum mass in the brown dwarf regime are known: HD 168443 (Marcy _et al._ 2001), HD 202206 (Udry _et al._ 2002; Correia _et al._ 2004) and GJ 86 (Queloz _et al._ 2000; Els _et al._ 2001). Our analysis suggests that both the \\(Msin(i)\\)-brown dwarfs orbiting HD 168443 and HD 202206 are probably stars (see Section 2.2 for our false positive brown dwarf correction). If the \\(Msin(i)\\)-planetary companions in these 2 systems are coplanar with the larger companions, then these "planets" may be brown dwarfs or even stars. GJ 86 contains a possible brown dwarf detected orbiting at \\(\\sim 20\\) AU (\\(P>5\\) years) and so was not part of our analysis. However, this does suggest that systems containing stars, brown dwarfs and planets may be possible. We find that approximately 16% of Sun-like stars have a close companion more massive than Jupiter. Of these \\(16\\%\\), \\(11\\%\\pm 3\\%\\) are stellar, \\(<1\\%\\) are brown dwarf and \\(5\\%\\pm 2\\%\\) are planetary companions (Table 4). Although Lineweaver & Grether (2003) show that the fraction of Sun-like stars with planets is greater than 25%, this is for target stars that have been monitored the longest (\\(\\sim 15\\) years) and under optimum conditions (stars with low-level chromospheric activity or slow rotation) using the high precision Doppler method. When we limit the analysis of Lineweaver & Grether (2003) to planetary companions with periods of less than 5 years and masses larger than Jupiter, we find the same value that we calculate here. When we split our sample of companions into those with hosts above and below \\(1M_{\\odot}\\), we find that for the lower mass hosts: 11.8% have stellar, \\(<1\\%\\) have brown dwarf and 4.2% have planetary companions, and that for the higher mass hosts: 9.4% have stellar, \\(<1\\%\\) have brown dwarf and 6.6% have planetary companions (Table 4). More massive hosts have more planets and fewer stellar companions than less massive hosts. These are marginal results but are seen in both the 25 and 50 pc samples. The constraints that we have identified for the companions to Sun-like stars indicate that close-orbiting brown dwarfs are very rare. The fact that there is a close-orbiting brown dwarf desert but no free-floating brown dwarf desert suggests that post-collapse migration mechanisms may be responsible for this relative dearth of observable brown dwarfs rather than some intrinsic minimum in fragmentation and gravitational collapse in the brown dwarf mass regime (Ida & Lin 2004). Whatever migration mechanism is responsible for putting hot Jupiters in close orbits, its effectiveness may depend on the mass ratio of the object to the disk mass. Since there is evidence that disk mass is correlated with host mass, the migratory mechanism may be correlated with host mass, as proposed by Armitage & Bonnell (2002).

## 6. Acknowledgements

We would like to thank Christian Perrier for providing us with the Elodie exoplanet target list, Stephane Udry for additional information on the construction of the Coralie exoplanet target list, and Lynne Hillenbrand for sharing her data collected from the literature on the power-law IMF fits to various stellar clusters.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of the Washington Double Star Catalog maintained at the U.S. Naval Observatory.

## 7. Appendix: Companion Mass Estimates

The Doppler method for companion detection cannot by itself give the mass of a companion: an additional astrometric or visual solution for the system, or an assumption about the unknown inclination, is required, except in the case where a host star and its stellar companion have approximately equal masses and a double-lined solution is available. Thus to find the companion mass \\(M_{2}\\) that induces a radial velocity \\(K_{1}\\) in a host star of mass \\(M_{1}\\) we use (see Heacox 1999) \\[K_{1}=(\\frac{2\\pi G}{P})^{1/3}\\frac{M_{2}sin(i)}{(M_{1}+M_{2})^{2/3}}\\frac{1}{ \\left(1-e^{2}\\right)^{1/2}} \\tag{1}\\] This equation can be expressed in terms of the mass function \\(f(m)\\) \\[f(m)=\\frac{M_{2}^{3}sin^{3}(i)}{(M_{1}+M_{2})^{2}}=\\frac{PK_{1}^{3}(1-e^{2})^{ 3/2}}{2\\pi G} \\tag{2}\\] Eq. 2 can then be expressed as a cubic equation in the mass ratio \\(q=M_{2}/M_{1}\\), where \\(Y=f(m)/M_{1}\\). \\[q^{3}sin^{3}(i)-Yq^{2}-2Yq-Y=0 \\tag{3}\\] For planets (\\(M_{1}>>M_{2}\\)) we can simplify Eq. 2 and directly solve for \\(M_{2}sin(i)\\) but this is not true for larger mass companions such as brown dwarfs and stars. We use Cox (2000) to relate host mass to spectral type. When a double-lined solution is available, the companion mass can be found from \\(q=M_{2}/M_{1}=K_{1}/K_{2}\\). For all single-lined Doppler solutions, where the inclination \\(i\\) of a companion's orbit is unknown (no astrometric or visual solution), we assume a random distribution \\(P(i)\\) for the orientation of the inclination with respect to our line of sight, \\[P(i)di=sin(i)di \\tag{4}\\] From this we can find probability distributions for \\(sin(i)\\) and \\(sin^{3}(i)\\). Heacox (1995) and others suggest using either the Richardson-Lucy or Mazeh-Goldberg algorithms to approximate the inclination distribution. However, Hogeveen (1991) and Trimble (1990) argue that for low number statistics, the simple mean method produces similar results to the more complicated methods. We have large bin sizes and small number statistics, hence we use this method. The average values of the \\(sin(i)\\) and \\(sin^{3}(i)\\) distributions assuming a random inclination are \\(<sin(i)>=0.785\\) and \\(<sin^{3}(i)>=0.589\\), which are used to estimate the mass for planets and other larger single-lined spectroscopic binaries respectively. For example, in Fig. 5, of the 198 mass estimates in the 50 pc sample, 53 (27%) come from visual double-lined Doppler solutions, 6 (3%) come from infrared double-lined Doppler solutions (Mazeh _et al._ 2003), 18 (9%) come from knowing the inclination (an astrometric or visual solution is also available for the system), 10 (5%) come from assuming that Doppler brown dwarf candidates have low inclinations, 55 (28%) come from assuming \\(<sin(i)>=0.785\\) and 56 (28%) from assuming \\(<sin^{3}(i)>=0.589\\).

## References

* (1) Armitage, P.J. & Bonnell, I.A., 2002, "The Brown Dwarf Desert as a Consequence of Orbital Migration", MNRAS, 330:L11
* (2) Barrado y Navascues, D., Stauffer, J.R., Bouvier, J.
& Martin, E.L., 2001, \"From the Top to the Bottom of the Main Sequence: A Complete Mass Function of the Young Open Cluster M35\", ApJ, 546:1006-1018 * (3) Bate, M.R., 2000, \"Predicting the Properties of Binary Stellar Systems: The Evolution of Accreting Protobinary Systems\", MNRAS, 314:33-53 * (4) Boss, A.P., 2002, \"Evolution of the Solar Nebula V: Disk Instabilities with Varied Thermodynamics\", ApJ, 576:462-472 * (5) Burrows, A., Marley, M., Hubbard, W.B., Lunine, J.I., Guillot, T., Saumon, D., Freedman, R., Sudarsky, D., & Sharp, C., 1997, 'A Nongray Theory of Extrasolar Giant Planets and Brown Dwarfs', ApJ, 491:856 * (6) Correia, A.C.M., Udry, S., Mayor, M., Laskar, J., Naef, D., Pepe, F., Queloz, D. & Santos, N.C., 2004, \"The CORALIE survey for southern extra-solar planets XIII. A pair of planets around HD 202206 or a circumbinary planet?\", A&A, 440, 751-758 * (7) Cox, A.N., 2000, 'Allen's Astrophysical Quantities', AIP Press, 4th Edition * (8) Duquennoy, A. & Mayor, M., 1991, 'Multiplicity among Solar-type Stars in the Solar Neighbourhood II', A&A, 248:485-524 * (9) Els, S.G., Sterzik, M.F., Marchis, F., Pantin, E., Endl, M. & Krster, M., 2001, 'A Second Substellar Companion in the Gliese 86 System. A Brown Dwarf in an Extrasolar Planetary System', A&A, 370:L1-L4 * (10) Endl, M., Hatzes, A.P., Cochran, W.D., McArthur, B., Allende Prieto, C., Paulson, D.B., Guenther, E. & Bedalov, A., 2004, 'HD 137510: An Oasis in the Brown Dwarf Desert', ApJ, 611:1121-1124 * (11) ESA, The Hipparcos and Tycho Catalogues, 1997, ESA SP-1200 [http://astro.esec.esa.nl/hipparcos/](http://astro.esec.esa.nl/hipparcos/) * (12) Fischer, D.A., Marcy, G.W., Butler, P.R., Vogt, S.S. & Apps, K., 1999, 'Planetary Companions around Two Solar-Type Stars: HD 195019 and HD 217107', PASP, 111:50-56 * (13) Gizis, J.E., Kirkpatrick, J.D., Burgasser, A., Reid, I.N., Monet, D.G., Liebert, J. & Wilson, J.C., 2001, 'Substellar Companions to Main-Sequence Stars: No Brown Dwarf Desert at Wide Separations', ApJ, 551:L163-L166 * (14) Gizis, J.E., Reid, I.N., Knapp, G.G., Liebert, J., Kirkpatrick, J.D., Koerner, D.W. & Burgasser, A.J., 2003, 'Hubble Space Telescope Observations of Binary Very Low Mass Stars and Brown Dwarfs', AJ, 125:330-3310 * (15) Hartkopf, W.J. & Mason, B.D., 2004, 'Sixth Catalog of Orbits of Visual Binary Stars', [http://ad.usno.navy.mil/wds/orb6.html](http://ad.usno.navy.mil/wds/orb6.html) * (16) Halbwachs, J.L., Arenou, F., Mayor, M., Udry, S. & Queloz, D., 2000, 'Exploring the Brown Dwarf Desert with Hipparcos', A&A, 355:581-594 * (17) Halbwachs, J.L., Mayor, M., Udry, S. & Arenou, F., 2003, 'Multiplicity among Solar-type Stars III', A&A, 397:159-175 * (18) Heacox, W.D., 1995, 'On the Mass Ratio Distribution of Single-Lined Spectroscopic Binaries', AJ, 109, 6:2670-2679 * (19) Heacox, W.D., 1999, 'On the Nature of Low-Mass Companions to Solar-like Stars', ApJ, 526:928-936* () Hillenbrand, L.A., 2003, 'The Mass Function of Newly Formed Stars', astro-ph/0312187 * () Hillenbrand, L.A. & Carpenter, J.M., 2000, 'Constraints on the Stellar/Substellar Mass Function in the Inner Orion Nebula Cluster', ApJ, 540:236-254 * () Hogeven, S.J., 1991, Ph.D. Thesis, University of Illinois, Urbana * () Ida, S. & Lin, D.N.C., 2004, 'Toward a Deterministic model of planetary formation. I. A desert in the mass and semimajor axis distribution of extrasolar planets', ApJ, 604:388-413 * () Jiang, I.-G., Laughlin, G. 
& Lin, D.N.C., 2004, On the Formation of Brown Dwarfs', ApJ, 127:455-459 * () Jones, H.R.A., Butler, P.R., Marcy, G.W., Tinney, C.G., Penny, A.J., McCarthy, C. & Carter, B.D., 2002, 'Extra-solar planets around HD 196050, HD 216437 and HD 160691', MNRAS, 337:1170-1178 * () Konacki, M., Torres, G., Sasselov, D.D., Pietrzynski, G., Udalski, A., Jha, S., Ruiz, M.T., Gieren, W. & Minniti, D., 2004, 'The Transiting Extrasolar Giant Planet Around the Star OGLE-TR-113', ApJ, 609:L37-L40 * () Kroupa, P., 2002, 'The Initial Mass Function of Stars: Evidence for Uniformity in Variable Systems', Science, 295:82-91 * () Kroupa, P. & Bouvier, J., 2003, 'On the Origin of Brown Dwarfs and Free-Floating Planetary-Mass Objects', MNRAS, 346:369-380 * () Larson, R.B., 2003, 'The Physics of Star Formation', astro-ph/0306596 * () Lineweaver, C.H., Grether, D. & Hidas, M. 2003, 'What can exoplanets tell us about our Solar System?' in the proceedings 'Scientific Frontiers in Research on Extrasolar Planets', ASP Conf. Ser. Vol. 294, edt Deming, D. & Seager, S., p 161, astro-ph/0209382 * () Lineweaver, C.H. & Grether, D., 2003, 'What Fraction of Sun-Like Stars have Planets?', ApJ, 598:1350-1360 * () Marcy, G.W. & Butler, P.R., 2000, 'Planets Orbiting Other Sun', PASP, 112:137-140 * () Marcy, G.W., Butler, P.R., Vogt, S.S., Liu, M.C., Laughlin, G., Apps, K., Graham, J.R., Lloyd, J., Luhman, K.L. & Jayawardhana, R., 2001, 'Two Substellar Companions Orbiting HD 168443', ApJ, 555:418-425 * () Matzner, C.D. & Levin, Y., 2004, 'Low-Mass Star Formation: Initial Conditions, Disk Instabilities and the Brown Dwarf Desert', astro-ph/0408525 * () Mazeh, T., Simon, M., Prato, L., Markus, B. & Zucker, S., 2003, 'The Mass Ratio Distribution in Main-Sequence Spectroscopic Binaries Measured by IR Spectroscopy', ApJ, 599:1344-1356 * () McCarthy, C. & Zuckerman, B., 2004, 'The Brown Dwarf Desert at 75-1200 AU', AJ, 127:2871-2884 * () Moraux, E., Bouvier, J., Stauffer, J.R. & Cuillandre, J.C., 2003, 'Brown Dwarfs in the Pleiades Cluster: Clues to the Substellar Mass Function', A&A, 400:891-902 * () Nidever, D.L., Marcy, G.W., Butler, P.R., Fischer, D.A. & Vogt, S.S., 2002, 'Radial Velocities for 889 Late-type Stars', ApJSS, 141:503-522 * () Pont, F., Bouchy, F., Queloz, D., Santos, N.C., Mayor, M. & Udry, S., 2004, 'The Missing Link: A 4-day Period Transiting Exoplanet around OGLE-TR-111', A&A, 426:L15-L18 * () Pourbaix D., Tokovinin A.A., Batten A.H., Fekel F.C., Hartkopf W.I., Levato H., Morrell N.I., Torres G., Udry S., 2004, 'SB9: The Ninth Catalogue of Spectroscopic Binary Orbits', A&A, 424:727-732 * () Queloz, D., Mayor, M., Weber, L., Blcha, A., Burnet, M., Confino, B., Naef, D., Pepe, F., Santos, N. & Udry, S., 2000, 'The CORALIE Survey for Southern Extra-Solar Planets. I. A planet Orbiting the Star Gliese 86', A&A, 354:99-102 * () Reid, L.N., 2002, 'On the Nature of Stars with Planets', PASP, 114:306-329 * () Rice, W.K.M., Armitage, P.J., Bonnell, I.A., Bate, M.R., Jeffers, S.V. & Vine, S.G., 2003, 'Substellar Companions and Isolated Planetary Mass Objects from Protostellar Disk Fragmentation', MNRAS, 346:L36-L40 * () Saar, S.H., Butler, P.R. & Marcy, G.W., 1998, 'Magnetic Activity Related Radial Velocity Variations in Cool Stars: First Results from Lick Extrasolar Planet Survey', ApJ, 498:L153-L157 * () Schneider, J., 2005, 'Extrasolar Planets Catalog', [http://www.obspm.fr/encycl/catalog.html](http://www.obspm.fr/encycl/catalog.html) * () Slesnick, C.L., Hillenbrand, L.A. 
& Carpenter, J.M., 2004, 'The Spectroscopically Determined Substellar Mass Function of the Orion Nebula Cluster', ApJ, 610:1045-1063 * () Tinney, C.G., Butler, P.R., Marcy, G.W., Jones, H.R.A., Vogt, S.S., Apps, K. & Henry, G.W., 2001, 'First Results from the Anglo-Australian Planet Search', ApJ, 551:507-511 * () Trimble, V., 1990, 'The Distributions of Binary System Mass Ratios: A Less Biased Sample', MNRAS, 242:79-87 * () Udry, S., Mayor, M., Naef, D., Pepe, F., Queloz, D., Santos, N.C. & Burnet, M., 2002, The CORALIE survey for southern extra-solar planets VIII. The very low-mass companions of HD 141937, HD 162020, HD 168443, HD 202206: Brown dwarfs or superplanets?', A&A, 390, 267:279 * Constraints for the Migration Scenario', A&A, 407:369-376 * () Vogt, S.S., Butler, P.R., Marcy, G.W., Fischer, D.A., Pourbaix, D., Apps, K. & Laughlin, G., 2002, 'Ten Low-Mass Companions from the Keck Precision Velocity Survey', ApJ, 568:352-362 * () Wright, J.T., Marcy, G.W., Butler, P.R. & Vogt, S.S., 2004, 'Chromospheric Ca II Emission in Nearby F, G, K and M stars', ApJSS, 152:261-295 * () Zucker, S. & Mazeh, T., 2001a, 'Analysis of the Hipparcos Observations of the Extrasolar Planets and the Brown Dwarf Candidates', ApJ, 562:549-557 * () Zucker, S. & Mazeh, T., 2001b, 'Derivation of the Mass Distribution of Extrasolar Planets with Maxlima, A Maximum Likelihood Algorithm', ApJ, 562:1038-1044
Sun-like stars have stellar, brown dwarf and planetary companions. To help constrain their formation and migration scenarios, we analyse the close companions (orbital period \\(<5\\) years) of nearby Sun-like stars. By using the same sample to extract the relative numbers of stellar, brown dwarf and planetary companions, we verify the existence of a very dry brown dwarf desert and describe it quantitatively. With decreasing mass, the companion mass function drops by almost two orders of magnitude from \\(1\\,M_{\\odot}\\) stellar companions to the brown dwarf desert and then rises by more than an order of magnitude from brown dwarfs to Jupiter-mass planets. The slopes of the planetary and stellar companion mass functions are of opposite sign and are incompatible at the 3 sigma level, thus yielding a brown dwarf desert. The minimum number of companions per unit interval in log mass (the driest part of the desert) is at \\(M=31\\,^{+25}_{-18}\\,M_{Jup}\\). Approximately 16% of Sun-like stars have close (\\(P<5\\) years) companions more massive than Jupiter: \\(11\\%\\pm 3\\%\\) are stellar, \\(<1\\%\\) are brown dwarf and \\(5\\%\\pm 2\\%\\) are giant planets. The steep decline in the number of companions in the brown dwarf regime, compared to the initial mass function of individual stars and free-floating brown dwarfs, suggests either a different spectrum of gravitational fragmentation in the formation environment or post-formation migratory processes disinclined to leave brown dwarfs in close orbits.
# Densification of the International Celestial Reference Frame: Results of EVN+ Observations

P. Charlot\\({}^{1}\\), A. L. Fey\\({}^{2}\\), C. S. Jacobs\\({}^{3}\\), C. Ma\\({}^{4}\\), O. J. Sovers\\({}^{5}\\) and A. Baudry\\({}^{1}\\)

\\({}^{1}\\)Observatoire de Bordeaux (OASU) - CNRS/UMR 5804, BP 89, 33270 Floirac, France \\({}^{2}\\)U. S. Naval Observatory, 3450 Massachusetts Avenue NW, DC 20392-5420, USA \\({}^{3}\\)Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA \\({}^{4}\\)National Aeronautics and Space Administration, Goddard Space Flight Center, Greenbelt, MD 20771, USA \\({}^{5}\\)Remote Sensing Analysis Systems, 2235 N. Lake Avenue, Altadena, CA 91101, USA

## 1 Introduction

The International Celestial Reference Frame (ICRF), the most recent realization of the VLBI celestial frame, is currently defined by the radio positions of 212 extragalactic sources observed by VLBI between August 1979 and July 1995 (Ma et al. 1998). These _defining_ sources, distributed over the entire sky, set the initial direction of the ICRF axes and were chosen based on their observing histories with the geodetic networks and the accuracy and stability of their position estimates. The accuracy of the individual source positions is as small as 0.25 milliarcsecond (mas) while the orientation of the frame is good to the 0.02 mas level. Positions for 294 less-observed _candidate_ sources and 102 _other_ sources with less-stable coordinates were also reported, primarily to densify the frame. Continued observations through May 2002 have provided positions for an additional 109 new sources and refined coordinates for candidate and "other" sources (Fey et al. 2004). The current ICRF with a total of 717 sources has an average of one source per \\(8^{\\circ}\\times 8^{\\circ}\\) on the sky. While this density is sufficient for geodetic applications, it is clearly too sparse for differential-VLBI applications (spacecraft navigation, phase-referencing of weak targets), which require reference calibrators within a few degrees of the target, or for linking other reference frames (e.g. at optical wavelengths) to the ICRF. Additionally, the frame suffers from an inhomogeneous distribution of the sources. For example, the angular distance to the nearest ICRF source for any randomly-chosen sky location can be as large as \\(13^{\\circ}\\) in the northern sky and \\(15^{\\circ}\\) in the southern sky (Charlot et al. 2000). This non-uniform source distribution makes it difficult to assess and control any local deformations in the frame. Such deformations might be caused by tropospheric propagation effects and apparent source motions due to variable intrinsic structure (see Ma et al. 1998). This paper reports results of astrometric VLBI observations of 150 new sources to densify the ICRF in the northern sky. These observations were carried out using the European VLBI Network (EVN) and additional geodetic antennas that joined the EVN for this project. The approach used in selecting the new potential ICRF sources was designed to improve the overall source distribution of the ICRF. Sources with no or limited extended emission were preferentially selected to guarantee high astrometric suitability. Sections 2 and 3 below describe the source selection strategy in further detail, the network and observing scheme used in these EVN+ experiments, and the data analysis. The astrometric results that have been obtained are discussed in Sect.
## 2 Strategy for Selecting New ICRF Sources The approach used for selecting new sources to densify the ICRF was to fill first the "empty" regions of the frame. The largest such region for the northern sky is located near \\(\\alpha\\) = 22 h 05 min, \\(\\delta=57^{\\circ}\\), where no ICRF source is to be found within \\(13^{\\circ}\\). A new source should thus preferably be added in that part of the sky. By using this approach again and repeating it many times, it is then possible to progressively fill the "empty" regions of the frame and improve the overall ICRF source distribution. The input catalog for selecting the new sources to observe was the Jodrell Bank-VLA Astrometric Survey (JVAS), which comprises a total of 2118 compact radio sources distributed over all the northern sky (Patnaik et al. 1992, Browne et al. 1998, Wilkinson et al. 1998). Each JVAS source has a peak flux density at 8.4 GHz larger than 50 mJy at a resolution of 200 mas, contains 80% or more of the total source flux, and has a position known to an rms accuracy of 12-55 mas. For every "empty" ICRF region, all JVAS sources within a radius of 6\\({}^{\\circ}\\) (about 10 sources on average) were initially considered. These sources were then filtered using the VLBA Calibrator Survey, which includes VLBI images at 8.4 and 2.3 GHz for most JVAS sources (Beasley et al. 2002), to eventually select the source with the most compact structure in each region. The results of this iterative source selection scheme show that 30 new sources are required to reduce the angular distance to the nearest ICRF source from a maximum of 13\\({}^{\\circ}\\) to a maximum of 8\\({}^{\\circ}\\). Another 40 new sources would further reduce this distance to a maximum of 7\\({}^{\\circ}\\), while for a maximum distance of 6\\({}^{\\circ}\\), approximately 150 new sources should be added. Carrying this procedure further, it is found that the number of required new sources doubles for each further decrease of this distance by 1\\({}^{\\circ}\\) (approximately 300 new sources for a maximum distance of 5\\({}^{\\circ}\\) and 600 new sources for a maximum distance of 4\\({}^{\\circ}\\)), with the limitation that the JVAS catalog is not uniform enough to fill all the regions below a distance of 6\\({}^{\\circ}\\). Based on this analysis, we have selected the first 150 sources identified through this procedure for observation with the EVN+ network described below. As shown in Fig. 1, the overall source distribution is potentially much improved with these additional 150 sources in the northern sky. ## 3 Observations and Data Analysis The observations were carried out in a standard geodetic mode during three 24-hour dual-frequency (2.3 and 8.4 GHz) VLBI experiments conducted on May 31, 2000, June 5, 2002, and October 27, 2003, using the EVN (including the Chinese and South African telescopes) and up to four additional geodetic radio telescopes (Algonquin Park in Canada, Goldstone/DSS 13 and Greenbank/NRAO20 in USA, and Ny-Alesund in Spitsbergen). There were between 10 and 12 telescopes scheduled for each experiment. Such a large network permits a geometrically strong schedule based on sub-netting, which allows tropospheric gradient effects to be estimated from the data.
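As a purely illustrative aid (not the authors' software), the iterative gap-filling selection described in Sect. 2 can be sketched as follows; the catalogue contents, the sky grid and the compactness screening are placeholders:

```python
# Illustrative (not the authors' code): greedy "fill the largest hole" selection,
# as described in Sect. 2.  Catalogs are placeholder lists of (ra_deg, dec_deg).
import numpy as np

def ang_sep(ra1, dec1, ra2, dec2):
    """Angular separation in degrees (spherical law of cosines)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_d = (np.sin(dec1) * np.sin(dec2)
             + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

def largest_hole(icrf, grid_ra, grid_dec):
    """Grid point farthest from any ICRF source, and that distance (degrees)."""
    best = (None, -1.0)
    for ra in grid_ra:
        for dec in grid_dec:
            d = min(ang_sep(ra, dec, s[0], s[1]) for s in icrf)
            if d > best[1]:
                best = ((ra, dec), d)
    return best

def densify(icrf, jvas, n_new, grid_ra, grid_dec):
    """Repeatedly add the JVAS source closest to the current largest hole."""
    icrf, jvas = list(icrf), list(jvas)
    for _ in range(n_new):
        (ra0, dec0), _ = largest_hole(icrf, grid_ra, grid_dec)
        jvas.sort(key=lambda s: ang_sep(ra0, dec0, s[0], s[1]))
        icrf.append(jvas.pop(0))   # in practice, also screen for compactness
    return icrf
```

In practice each candidate would also be screened against the VLBA Calibrator Survey images to retain only the most compact source in each region, as described above.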
The inclusion of large radio telescopes (Effelsberg, Algonquin Park) in this network was essential because the new sources are much weaker than the ICRF ones (median total flux of 0.26 Jy compared to 0.83 Jy for the ICRF sources, see Charlot et al. 2000). Each experiment observed a total of 50 new sources along with 10 highly-accurate ICRF sources so that the positions of the new sources can be linked directly to the ICRF. The data were correlated with the Bonn Mark 4 correlator, fringe-fitted using the Haystack software fourfit, and exported in the standard way to geodetic data base files. All subsequent analysis employed the models implemented in the VLBI modeling and analysis software MODEST (Sovers & Jacobs 1996). Standard geodetic VLBI parameters (station clock offsets and rates with breaks when needed, zenith wet tropospheric delays every 3 hours, and Earth orientation) were estimated in each experiment along with the astrometric positions (right ascension and declination) of the new sources. The positions of the 10 ICRF link sources were held fixed as were station coordinates. Observable weighting included added baseline-dependent noise adjusted for each baseline in each experiment in order to make \\(\\chi^{2}\\) per degree of freedom approximately equal to 1. Figure 1: Northern-sky source distribution in polar coordinates. _Left:_ for the current ICRF, including defining, candidate, and “other” sources plus the additional sources published in ICRF-Ext.1 (see Fey et al. 2004). _Right:_ same plot after adding the 150 new sources identified to fill the “empty” regions of the frame. The outer circle corresponds to a declination of 0\\({}^{\\circ}\\) while the inner central point is for a declination of 90\\({}^{\\circ}\\). The intermediate circles correspond to declinations of 30\\({}^{\\circ}\\) and 60\\({}^{\\circ}\\). ## 4 Results The three EVN+ experiments described above have been very successful in observing the selected targets. All 150 new potential ICRF sources have been detected, hence indicating that the source selection strategy and observing scheme set up for these experiments were appropriate. In the first two experiments (2000 May 31 and 2002 June 5), there were generally between 20 and 60 pairs of delay and delay rates usable for each source to estimate its astrometric position. Conversely, more than half of the sources observed in the third experiment (2003 October 27) had less than 20 pairs of usable delay and delay rates because of the failure of three telescopes in that experiment. Figure 2 shows the error distribution in right ascension and declination for the 150 newly-observed sources. The distribution indicates that about 70% of the sources have position errors smaller than 1 mas, consistent with the high quality level of the ICRF. The median coordinate uncertainty is 0.37 mas in right ascension and 0.63 mas in declination. The larger declination errors are most probably caused by the predominantly East-West network used for these observations. Figure 2 also shows that a dozen sources have very large errors (\\(>\\) 3 mas). Most of these sources were observed during the 2003 October 27 experiment and have only a few available observations or data only on short intra-Europe baselines. Such sources should be re-observed to obtain improved coordinates if these are to be considered for inclusion in the next ICRF realization. Among our 150 selected targets, 129 sources were found to have astrometric positions available in the VLBA Calibrator Survey (Beasley et al. 
2002). A comparison of these positions with those estimated from our analysis shows agreement within 1 mas for half of the sources and within 2 mas for 80% of the sources. While the magnitude of the differences is consistent with the reported astrometric accuracy of the VLBA Calibrator Survey, further investigation is necessary to determine whether these differences are of a random nature or show systematic trends. Such trends may be caused by the limited geometry used in observing the VLBA Calibrator Survey (see Beasley et al. 2002). ## 5 Conclusion A total of 150 new potential ICRF sources have been successfully detected using the EVN and additional geodetic radio telescopes located in USA, Canada and Spitsbergen. About two-thirds of the sources observed with this EVN+ network have coordinate uncertainties better than 1 mas, and thus constitute valuable candidates for extending the ICRF. The inclusion of these sources would greatly improve the ICRF sky distribution by naturally filling the "empty" regions of the current celestial frame. Extending the ICRF further will require observing weaker and weaker sources as the celestial frame fills up and hence will depend closely on how fast the sensitivity of VLBI arrays improves in the future. Charlot (2004) estimates that an extragalactic VLBI celestial frame comprising 10 000 sources may be possible by 2010 considering foreseen improvements in recording data rates (disk-based recording, modern digital video converters) and new radio telescopes of the 40-60 meter class that are being built, especially in Spain, Italy and China. In the even longer term, increasing the source density beyond that order of magnitude is likely to require new instruments such as the Square Kilometer Array envisioned by 2015-2020. Figure 2: Astrometric precision of the estimated coordinates in _a)_ right ascension and _b)_ declination for the 150 newly-observed sources. All errors larger than 3 mas are placed in a single bin marked with the label "\\(>\\) 3 mas" on each plot. ###### Acknowledgements. The European VLBI Network (EVN) is a joint facility of European, Chinese, South African and other radio astronomy institutes funded by their national research councils. The non-EVN radio telescopes in Algonquin Park (Canada), Ny-Alesund (Spitsbergen), Goldstone (USA), and Greenbank (USA) are sponsored by Natural Resources Canada, the Norwegian Mapping Authority, the National Aeronautics and Space Administration, and the U. S. Naval Observatory, respectively. We thank all participating observatories, with special acknowledgements to the staff of the non-EVN geodetic stations for their enthusiasm in participating in this project. We are also grateful to Nancy Vandenberg for help in scheduling, Walter Alef and Arno Mueskens for data correlation and advice in fringe-fitting, and Axel Nothnagel for export of the data to geodetic data base files. This research was supported by the European Commission's I3 Programme "RADIONET", under contract No. 505818. ## References * Beasley et al. (2002) Beasley, A. J., Gordon, D., Peck, A. B., Petrov, L., MacMillan, D. S., Fomalont, E. B., Ma, C. 2002, ApJS, 141, 13. * Browne et al. (1998) Browne, I. W. A., Patnaik, A. R., Wilkinson, P. N., Wrobel, J. M. 1998, MNRAS, 293, 257 * Charlot (2004) Charlot, P. 2004, in International VLBI Service for Geodesy and Astrometry 2004 General Meeting Proceedings, eds. N. R. Vandenberg & K. D. Beaver, NASA/CP-2004-212255, 12 * Charlot et al. (2000) Charlot, P., Viateau, B., Baudry, A., Ma, C., Fey, A.
L., Eubanks, T. M., Jacobs, C. S., Sovers, O. J. 2000, in International VLBI Service for Geodesy and Astrometry 2000 General Meeting Proceedings, eds. N. R. Vandenberg & K. D. Beaver, NASA/CP-2000-209893, 168 * Fey et al. (2004) Fey, A. L., Ma, C., Arias, E. F., Charlot, P., Feissel-Vernier, M., Gontier, A.-M., Jacobs, C. S., Li, J., MacMillan, D. S. 2004, AJ, 127, 3587 * Ma et al. (1998) Ma, C., Arias, E. F., Eubanks, T. M., Fey, A. L., Gontier, A.-M., Jacobs, C. S., Sovers, O. J., Archinal, B. A., Charlot, P. 1998, AJ, 116, 516 * Patnaik et al. (1992) Patnaik, A. R., Browne, I. W. A., Wilkinson, P. N., Wrobel, J. M. 1992, MNRAS, 254, 655 * Sovers & Jacobs (1996) Sovers, O. J., Jacobs, C. S. 1996, Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software \"MODEST\"-1996, JPL Publication 83-89, Rev. 6, August 1996 * Wilkinson et al. (1998) Wilkinson, P. N., Browne, I. W. A., Patnaik, A. R., Wrobel, J. M., Sorathia, B. 1998, MNRAS, 300, 790
The current realization of the International Celestial Reference Frame (ICRF) comprises a total of 717 extragalactic radio sources distributed over the entire sky. An observing program has been developed to densify the ICRF in the northern sky using the European VLBI Network (EVN) and other radio telescopes in Spitsbergen, Canada and USA. Altogether, 150 new sources selected from the Jodrell Bank-VLA Astrometric Survey were observed during three such EVN+ experiments conducted in 2000, 2002 and 2003. The sources were selected on the basis of their sky location in order to fill the "empty" regions of the frame. A secondary criterion was based on source compactness to limit structural effects in the astrometric measurements. All 150 new sources have been successfully detected and the precision of the estimated coordinates in right ascension and declination is better than 1 milliarcsecond (mas) for most of them. A comparison with the astrometric positions from the Very Long Baseline Array Calibrator Survey for 129 common sources indicates agreement within 2 mas for 80% of the sources.
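Sect. 3 above states that baseline-dependent noise was added to the observable weights so that \\(\\chi^{2}\\) per degree of freedom is approximately 1. A minimal sketch of one common way to implement such a reweighting is given below; the iteration scheme and the names are illustrative and are not taken from the MODEST software:

```python
# Illustrative sketch (not the MODEST implementation) of baseline-dependent
# reweighting: inflate the noise floor added to a baseline's delay errors
# until the reduced chi-square of that baseline's post-fit residuals is ~1.
import numpy as np

def added_noise(residuals, formal_errors, tol=0.01, max_iter=50):
    """Return the extra noise (same units as the errors) to add in quadrature."""
    extra = 0.0
    for _ in range(max_iter):
        sigma2 = formal_errors**2 + extra**2
        chi2_dof = np.mean(residuals**2 / sigma2)
        if abs(chi2_dof - 1.0) < tol:
            break
        # scale the total variance toward chi2/dof = 1, then recover the floor
        target_sigma2 = sigma2 * chi2_dof
        extra = np.sqrt(max(np.mean(target_sigma2 - formal_errors**2), 0.0))
    return extra
```

The returned floor would be added in quadrature to the formal delay errors of that baseline before the final parameter estimation.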
# Quantum Zeno and anti-Zeno effects in an Unstable System with Two Bound States Kavan Modi [email protected] Department of Physics, Center for Complex Quantum Systems The University of Texas at Austin, Austin, Texas 78712-1081 Anil Shaji Department of Physics, Center for Complex Quantum Systems The University of Texas at Austin, Austin, Texas 78712-1081 ## I Introduction The quantum Zeno effect, first predicted by Misra and Sudarshan [1; 2], is the hindrance of the time evolution of a quantum state when frequent measurements are performed on it. In the limit of continuous measurement the time evolution of the state, in principle, completely stops. The seminal paper by Misra and Sudarshan proves the existence of an operator corresponding to continuous measurement belonging to the Hilbert space of a generic quantum system. More recently, several authors have suggested that the opposite of the quantum Zeno effect may also be true [3; 4; 5]. That is, frequent measurements can be used to accelerate the decay of an unstable state. This effect is known as the anti-Zeno effect or the inverse Zeno effect. The original formulation of the quantum Zeno effect treated the measurement process as an idealized von Neumann type; that is, an instantaneous event that induces discontinuous changes in the measured system. The anti-Zeno effect was first identified as a possibility when measurement processes that take a finite amount of time were considered. This led to the suggestion by several authors that the anti-Zeno effect should be observed more often in physical systems than the quantum Zeno effect. Experimental evidence supporting the quantum Zeno effect in particle physics experiments was first pointed out by Valanju _et al_[6; 7]. Direct experimental observation of the quantum Zeno effect was obtained by Itano _et al_[8] in a three-level oscillating system. Recently, in a set of experiments, Fischer, Gutierrez-Medina, and Raizen observed, for the first time, _both_ the quantum Zeno effect and the anti-Zeno effect in an _unstable_ quantum mechanical system [9]. In this Letter we present a simple model that reproduces all of the important results of this experiment. This Letter is organized as follows: We briefly describe the experiment by Fischer _et al_ in section II. In section III, we present a simplified model of the system studied experimentally in [9]. The model is exactly solvable and the time dependence of the survival probability of the initial unstable state can be analytically calculated. In section IV, we show that the solutions reproduce all the important features of the experimental system. On the basis of our model we argue that the anti-Zeno effect is observed because of the presence of more than one unstable bound state in the system. Our conclusions are in section V. ## II Description of the experiment In the experiment by Fischer _et al_[9], sodium atoms were placed in a classical magneto-optical trap that could be moved in space. The motional states of the atoms in the trap were studied. Initially, the atoms were placed in the "ground" state of the moving trap so that they remained inside the trap. The stable bound states occupied by the atoms in the trap are made unstable by accelerating the atoms along with the trap at different rates. By tuning the acceleration, appropriate conditions are created for the atoms to quantum mechanically tunnel through the barrier into the continuum of available free-particle states.
The number of atoms that tunneled out of the bound state inside the trap as a function of time was estimated at the end of the experiment by recording the spatial distribution of the atoms. Since the trap was accelerated throughout the experiment, atoms that spent more time in the trap had higher velocities and they moved farther in unit time. So by taking a snap-shot of the spatial distribution of all the atoms at the end of the experiment, the time at which each one of them tunneled out of the trap can be estimated. To obtain the Zeno and the anti-Zeno effects, the tunneling of the atoms out of the trap has to be interrupted by a measurement that estimates the number of atoms still inside the trap. Such a measurement was implemented in the experiment by abruptly changing the acceleration of the trap so that tunneling from the ground state is temporarily halted. These interruption periods were long enough (\\(40\\mu s\\)) to separate out the atoms that tunnel out before and after each interruption into resolvable groups. By measuring the number of atoms in each group and knowing the total number of atoms that were initially in the trap, the number of atoms in the trap at the beginning of each interruption period was estimated. Using this data, the time dependence of the survival probability of the bound motional states of an atom was reconstructed. Their observations are summarized in Fig. 1. The frequency with which interruption periods were applied determined whether the Zeno or anti-Zeno effects were obtained. Repeatedly interrupting the system once every microsecond led to the quantum Zeno effect. With an interruption interval of \\(5\\mu s\\) the anti-Zeno effect was obtained. In Fig. 1, the zero slope for the survival probability at \\(t=0\\) is expected for the time evolution of a generic unstable quantum state on the basis of the analyticity of the survival probability and its time reversal symmetry [10]. This non-exponential behavior at short times leads to the quantum Zeno effect. The survival probability of the "unmeasured" system, as a function of time, has an inflection point at \\(t\\approx 7\\mu s\\) (see Fig. 1). We show that it is the presence of a second unstable bound state that is responsible for this inflection point. The anti-Zeno effect is obtained by taking advantage of that inflection point. In an earlier experiment that led up to this one, Bharucha et al. observed tunneling of sodium atoms from an accelerated trap [11]. Our analysis of the experiment in [9] is motivated by the following passage from [11]: When the standing wave is accelerated, the wave number changes in time and the atoms undergo Bloch oscillations across the first Brillouin zone. As the atoms approach the band gap, they can make Landau-Zener transitions to the next band. Once the atoms are in the second band, they rapidly undergo transitions to the higher bands and are effectively free particles. The last sentence above suggests that higher energy bound states are present in their system. An atom in the ground state might have to go through these intermediate bound states before it can tunnel out to the continuum of available free-particle states. We investigate the effect of these intermediate states on the survival probability of the ground state.
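To make the connection between the interruption interval and the two effects concrete, here is a small, purely illustrative sketch (not the calculation of this paper): it assumes a toy survival curve with zero initial slope, an inflection point and a late-time flattening, and it models each interruption as an ideal reset, so that the survival probability after \\(n\\) interruptions separated by \\(\\tau\\) is \\(P(\\tau)^{n}\\).

```python
# Toy illustration (not the paper's model): effective decay under repeated resets.
# P_unmeasured is an assumed survival curve; each ideal measurement resets the
# evolution, so after n intervals of length tau the survival is P_unmeasured(tau)**n.
import numpy as np

def P_unmeasured(t, p_inf=0.55, t0=4.0):
    """Assumed toy survival probability (t in microseconds)."""
    return p_inf + (1.0 - p_inf) * np.exp(-(t / t0) ** 2)

def P_measured(t, tau):
    """Effective survival at time t under ideal resets every tau."""
    n = np.floor(t / tau)
    return P_unmeasured(tau) ** n

t = 10.0  # microseconds
for tau in (1.0, 5.0):
    print(f"tau = {tau:.0f} us: with measurements {P_measured(t, tau):.2f}, "
          f"unmeasured {P_unmeasured(t):.2f}")
```

With this assumed curve, interrupting every 1 \\(\\mu\\)s keeps the population above the unmeasured curve (Zeno), while interrupting every 5 \\(\\mu\\)s, a spacing comparable to the inflection time scale, drives it below (anti-Zeno), in the spirit of Fig. 1.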
## III Model We consider an interacting field theory of four fields labeled \\(A\\), \\(B\\), \\(C\\), and \\(\\Theta\\), with the following commutation relations: \\[[a,a^{\\dagger}]=[b,b^{\\dagger}]=[c,c^{\\dagger}]=1,\\] \\[\\left[\\theta(\\omega),\\theta^{\\dagger}(\\omega^{\\prime})\\right]=\\delta(\\omega-\\omega^{\\prime}).\\] All other commutators are zero. \\(a^{\\dagger}\\) (\\(a\\)) etc. represent the creation (annihilation) operators corresponding to the four fields. Only \\(\\Theta\\) is labeled by a continuous index \\(\\omega\\), while the other fields are assumed to have only discrete modes. The allowed processes in the model are \\[A\\leftrightarrows B\\;{\\rm and}\\;B\\leftrightarrows C\\Theta.\\] The Hamiltonian for the model with these allowed processes can be written down as \\[H=H_{0}+V \\tag{1}\\] where \\[H_{0}=E_{A}a^{\\dagger}a+E_{B}b^{\\dagger}b+\\int_{0}^{\\infty}d\\omega\\;\\omega\\;\\theta^{\\dagger}(\\omega)\\theta(\\omega) \\tag{2}\\] and \\[V=\\Omega\\;a^{\\dagger}\\;b+\\Omega^{*}\\;b^{\\dagger}\\;a+\\int_{0}^{\\infty}d\\omega\\left[f(\\omega)\\;b^{\\dagger}\\;c\\;\\theta(\\omega)+f(\\omega)^{*}\\;c^{\\dagger}\\theta^{\\dagger}(\\omega)\\;b\\right]. \\tag{3}\\] The two discrete energy levels are denoted by \\(E_{A}\\) and \\(E_{B}\\). The Hamiltonian in Eq. (1) is obtained by modifying the Hamiltonian for the Friedrichs-Lee model [12; 13]. It is instructive to look at the spectrum of \\(H\\) and \\(H_{0}\\) in the complex plane at this point. We call the eigenstates of \\(H\\) physical states, while the eigenstates of \\(H_{0}\\) are referred to as bare states. We choose the zero point of energy so that the continuum eigenstates of \\(H_{0}\\) have positive energy (\\(0\\leq\\omega<\\infty\\)). If \\(E_{A}\\) and \\(E_{B}\\) are negative and if the shift in these energies due to the perturbation \\(V\\) is small, then the spectrum of the physical Hamiltonian, \\(H\\), will contain two stable bound states with negative energies \\(\\Lambda_{A}\\) and \\(\\Lambda_{B}\\) in addition to a continuum of states with positive energies. This is illustrated in Fig. 2. Figure 1: On the left, the lower line is the “unmeasured” decay curve corresponding to the case where the trapped atoms are accelerated with no interruptions so that tunneling out of the trap is always present. The upper line corresponds to the case where the tunneling is interrupted every 1 \\(\\mu\\)s leading to the quantum Zeno effect. On the right, the upper line is the “unmeasured” decay and the lower line corresponds to interruptions every 5 \\(\\mu\\)s leading to the anti-Zeno effect in the experiment by Fischer et al. We are interested in studying the temporal evolution of an unstable system. So we choose \\(E_{A}\\) and \\(E_{B}\\) to be positive so that they lie embedded in the physical continuum spectrum. The spectrum of the physical Hamiltonian will then no longer include bound states. The eigenstates of \\(H\\) belonging to the continuum, corresponding to eigenvalues \\(0\\leq\\lambda<\\infty\\), will form a complete set of states. To see what happens to the two bound states of the bare Hamiltonian \\(H_{0}\\) when the perturbation \\(V\\) is introduced, it is instructive to look at the resolvent of the physical Hamiltonian, \\((E-H)^{-1}\\). The resolvent, in this case, will have two complex poles indicating two transient or unstable states.
The location of these poles is fixed by choosing appropriate boundary conditions to make sure that the unstable states decay (rather than grow) in time. This is illustrated in Fig. 3. ### The Continuum States We are interested in the time evolution of the eigenstates of \\(H_{0}\\), namely the two bound bare states \\(|A\\rangle\\) and \\(|B\\rangle\\), and the continuum states \\(|C\\Theta(\\omega)\\rangle\\). The state \\(|A\\rangle\\) in our model corresponds to the unstable bound state occupied by the atoms inside the trap in the experiment by Fischer _et al_ The states \\(|C\\Theta(\\omega)\\rangle\\) represent the continuum outside the trap into which the bound state can decay. We have introduced an additional unstable bound state \\(|B\\rangle\\) which represents a second bound motional state of the trap. The state \\(|A\\rangle\\) is directly coupled to only \\(|B\\rangle\\) and the decay of \\(|A\\rangle\\) into \\(|C\\Theta\\rangle\\) is mediated by the new state \\(|B\\rangle\\). We will show that the presence of the additional bound state can explain several of the key features of the experiment by Fischer _et al_ We begin by writing the full Hamiltonian, \\(H\\), in matrix form using eigenstates of \\(H_{0}\\) as basis [14], \\[H=\\left(\\begin{array}{ccc}E_{A}&\\Omega^{*}&0\\\\ \\Omega&E_{B}&f^{*}(\\omega^{\\prime})\\\\ 0&f(\\omega)&\\omega\\delta(\\omega-\\omega^{\\prime})\\end{array}\\right). \\tag{4}\\] Let \\(|\\psi_{\\lambda}\\rangle\\) represent an eigenstate of \\(H\\) with eigenvalue \\(\\lambda\\), satisfying the eigenvalue equation \\[H\\psi_{\\lambda}=\\lambda\\psi_{\\lambda}. \\tag{5}\\] We express \\(\\psi_{\\lambda}\\) also in terms the eigenstates of \\(H_{0}\\), \\[\\psi_{\\lambda}=\\left(\\begin{array}{c}\\langle A|\\psi_{\\lambda} \\rangle\\\\ \\langle B|\\psi_{\\lambda}\\rangle\\\\ \\langle C\\Theta(\\omega)|\\psi_{\\lambda}\\rangle\\end{array}\\right)\\equiv\\left( \\begin{array}{c}\\mu_{\\lambda}^{A}\\\\ \\mu_{\\lambda}^{B}\\\\ \\phi_{\\lambda}(\\omega)\\end{array}\\right). \\tag{6}\\] \\(H\\) can have three possible classes of eigenstates; a maximum of two bound states and a set of continuum eigenstates. We first look at the continuum eigenstates of \\(H\\) followed by the remaining physical bound states in next section. Using the Eq. (5), we get a system of three coupled integral equations: \\[\\mu_{\\lambda}^{A} = \\frac{\\Omega^{*}}{\\lambda-E_{A}}\\mu_{\\lambda}^{B}\\ \\ \\ \\ (\\lambda \ eq E_{A}), \\tag{7}\\] \\[\\mu_{\\lambda}^{B} = \\frac{\\Omega\\mu_{\\lambda}^{A}+\\int_{0}^{\\infty}d\\omega^{\\prime}f ^{*}(\\omega^{\\prime})\\phi_{\\lambda}(\\omega^{\\prime})}{\\lambda-E_{B}}\\ \\ \\ \\ \\ ( \\lambda\ eq E_{B}),\\] (8) \\[\\phi_{\\lambda} = \\frac{f(\\omega)}{\\lambda-\\omega+i\\epsilon}\\mu_{\\lambda}^{B}+ \\delta(\\lambda-\\omega). \\tag{9}\\] The continuum wave function in Eq. (9) is singular in the energy (momentum) space, therefore it extends to infinity in the configuration space. We choose the \"in\" solution by choosing the sign of the imaginary part. Eqs. 
(7-9) can be solved simultaneously to get \\[\\psi_{\\lambda} = \\left(\\begin{array}{c}\\mu_{\\lambda}^{A}\\\\ \\mu_{\\lambda}^{B}\\\\ \\phi_{\\lambda}(\\omega)\\end{array}\\right)=\\left(\\begin{array}{c}\\frac{f( \\lambda)}{\\beta^{+}(\\lambda)}\\frac{\\Omega^{*}}{\\lambda-E_{A}}\\\\ \\frac{f(\\lambda)}{\\beta^{+}(\\lambda)}\\\\ \\frac{f(\\omega)}{\\beta^{+}(\\lambda)}\\frac{f(\\omega)}{\\lambda-\\omega+i\\epsilon} +\\delta(\\lambda-\\omega)\\end{array}\\right) \\tag{10}\\] Figure 3: The discrete eigenvalues of \\(H_{0}\\), \\(E_{A,B}\\) are chosen to be real and positive. The full Hamiltonian has no real negative eigenvalues corresponding to bound states. The resolvent of \\(H\\) has two complex poles. The two poles represent unstable states and they could have either positive or negative imaginary parts. These four possibilities are indicated by \\(\\Lambda_{1,2,3,4}\\). The sign of the imaginary part is fixed by the boundary conditions. \\(\\Lambda_{2}\\) and \\(\\Lambda_{4}\\) correspond to unstable states that decay in time. The continuum of physical states lies along positive real-axis. Figure 2: \\(E_{A,B}\\) are the energies of bare bound states. We choose \\(E_{A}\\) and \\(E_{B}\\) to be real and negative. The full Hamiltonian, \\(H\\), then has two real negative eigenvalues indicated by \\(\\Lambda_{A}\\) and \\(\\Lambda_{B}\\). \\(H\\) also has a continuum of eigenstates along the positive real-axis. where \\[\\beta(z)=z-E_{B}-\\frac{\\Omega^{2}}{z-E_{A}}-\\int_{0}^{\\infty}\\frac{|f(\\omega)|^{2} }{z-\\omega}d\\omega, \\tag{11}\\] is a real analytic function and \\[\\beta^{\\pm}(\\lambda)=\\beta(\\lambda\\pm i\\epsilon)\\quad\\mbox{and}\\quad\\beta^{\\pm }(\\lambda)^{*}=\\beta^{\\mp}(\\lambda). \\tag{12}\\] ### The Physical Bound States Now we look at the case when \\(\\lambda\ eq\\omega\\) for all \\(\\lambda\\). Since \\(0\\leq\\omega<\\infty\\), the physical states in this case are restricted to eigenvalues on the negative real-axis. The delta function is no longer present in Eq. (9), meaning that the physical states do not extend to infinity. Such solutions of the eigenvalue problem correspond to the stationary eigenstates of \\(H\\). Eq. (9) now becomes \\[\\phi_{\\lambda}=\\frac{f(\\omega)}{\\lambda-\\omega}\\mu_{\\lambda}^{B}. \\tag{13}\\] Solving for \\(\\mu_{\\lambda}^{B}\\), we get \\[\\beta(\\lambda)\\mu_{\\lambda}^{B}=0. \\tag{14}\\] We choose \\(\\mu_{\\lambda}^{B}\ eq 0\\), to obtain a non-trivial solution, then \\(\\beta(\\lambda)=0\\), which leads to the following expression, with zeroes at \\(\\lambda=\\Lambda_{j}\\). \\[\\lambda^{2}-\\lambda\\left(E_{A}+E_{B}+\\int_{0}^{\\infty}\\frac{|f( \\omega)|^{2}}{\\lambda-\\omega}d\\omega\\right)\\] \\[-\\Omega^{2}+\\left.E_{A}\\left(E_{B}+\\int_{0}^{\\infty}\\frac{|f( \\omega)|^{2}}{\\lambda-\\omega}d\\omega\\right)\\right|_{\\Lambda_{j}}=0 \\tag{15}\\] The wave functions belonging to this class of solutions are just complex numbers that are fixed by the normalization condition. 
The solutions are the following: \\[\\psi_{\\Lambda_{j}}=\\left(\\begin{array}{c}\\mu_{\\Lambda_{j}}^{A}\\\\ \\mu_{\\Lambda_{j}}^{B}\\\\ \\phi_{\\Lambda_{j}}\\end{array}\\right)=\\left(\\begin{array}{c}\\frac{1}{\\sqrt{\\beta^{\\prime}(\\Lambda_{j})}}\\frac{\\Omega^{*}}{\\Lambda_{j}-E_{A}}\\\\ \\frac{1}{\\sqrt{\\beta^{\\prime}(\\Lambda_{j})}}\\\\ \\frac{1}{\\sqrt{\\beta^{\\prime}(\\Lambda_{j})}}\\frac{f(\\omega)}{\\Lambda_{j}-\\omega}\\end{array}\\right), \\tag{16}\\] where \\[\\beta^{\\prime}(\\Lambda_{j})=\\left.\\frac{d\\beta}{d\\lambda}\\right|_{\\lambda=\\Lambda_{j}}=1+\\frac{|\\Omega|^{2}}{(\\lambda-E_{A})^{2}}+\\int_{0}^{\\infty}\\frac{|f(\\omega)|^{2}}{(\\lambda-\\omega)^{2}}d\\omega\\Bigg{|}_{\\lambda=\\Lambda_{j}}\\,. \\tag{17}\\] We can find \\(\\Lambda_{j}\\) by numerically solving Eq. (15). For a weak potential \\(V\\), we can roughly approximate \\[\\Lambda_{j}\\approx E_{A,B}\\pm i\\,\\mbox{Im}\\left[\\int_{0}^{\\infty}\\frac{\\left|f(\\omega)\\right|^{2}}{\\lambda-\\omega+i\\epsilon}\\Bigg{|}_{\\lambda=E_{A,B}}\\right]. \\tag{18}\\] In general, Eq. (15) has four possible roots, given by Eq. (18). Only the bound states with real eigenvalues \\(\\Lambda_{j}\\) are physically relevant. Once again, this is due to the fact that the physical spectrum is confined to the real axis. The imaginary part of the integrals in Eq. (18) vanishes everywhere except near \\(\\omega_{0}\\). Hence for \\(E_{A}\\) and \\(E_{B}\\) with sufficiently negative energies there are two physical bound states with real eigenvalues. Once again, this will not be the case of interest since no decay takes place here. For the sake of completeness it should be noted that the physical bound states will be present wherever the imaginary part of the integral in Eq. (18) vanishes. ### Survival Amplitude We are now in a position to calculate the survival amplitude of the bare state as a function of time. If \\(|A\\rangle\\) is occupied at t=0, then its survival amplitude is obtained by computing the matrix element \\(\\mathscr{A}_{A}(t)=\\langle A|e^{-iHt}|A\\rangle\\). We can compute this matrix element by inserting a complete set of physical states, i.e. \\[\\int_{0}^{\\infty}\\left|\\psi_{\\lambda}\\right\\rangle\\left\\langle\\psi_{\\lambda}\\right|d\\lambda+\\left|\\psi_{\\Lambda_{A}}\\right\\rangle\\left\\langle\\psi_{\\Lambda_{A}}\\right|+\\left|\\psi_{\\Lambda_{B}}\\right\\rangle\\left\\langle\\psi_{\\Lambda_{B}}\\right|=1.\\] Here we are considering the possibility that stable bound states \\(\\left|\\psi_{\\Lambda_{A,B}}\\right\\rangle\\) with real energies \\(\\Lambda_{A,B}\\) may exist. Let us continue with the calculation of the survival amplitude.
\\[\\mathscr{A}_{A}(t)=\\int_{0}^{\\infty}\\left\\langle A|e^{-iHt}|\\psi_{\\lambda}\\right\\rangle\\left\\langle\\psi_{\\lambda}|A\\right\\rangle d\\lambda+\\left\\langle A|e^{-iHt}|\\psi_{\\Lambda_{A}}\\right\\rangle\\left\\langle\\psi_{\\Lambda_{A}}|A\\right\\rangle+\\left\\langle A|e^{-iHt}|\\psi_{\\Lambda_{B}}\\right\\rangle\\left\\langle\\psi_{\\Lambda_{B}}|A\\right\\rangle\\] \\[=\\int_{0}^{\\infty}e^{-i\\lambda t}\\left\\langle A|\\psi_{\\lambda}\\right\\rangle\\left\\langle\\psi_{\\lambda}|A\\right\\rangle d\\lambda+e^{-i\\Lambda_{A}t}\\left\\langle A|\\psi_{\\Lambda_{A}}\\right\\rangle\\left\\langle\\psi_{\\Lambda_{A}}|A\\right\\rangle+e^{-i\\Lambda_{B}t}\\left\\langle A|\\psi_{\\Lambda_{B}}\\right\\rangle\\left\\langle\\psi_{\\Lambda_{B}}|A\\right\\rangle\\] \\[=\\int_{0}^{\\infty}e^{-i\\lambda t}\\left|\\mu_{\\lambda}^{A}\\right|^{2}d\\lambda+e^{-i\\Lambda_{A}t}\\left|\\mu_{\\Lambda_{A}}^{A}\\right|^{2}+e^{-i\\Lambda_{B}t}\\left|\\mu_{\\Lambda_{B}}^{A}\\right|^{2}. \\tag{19}\\] We have used the completeness and orthonormality of the physical states in the above equation. Orthonormality of these states is shown in Appendix A, while completeness can be demonstrated by using the techniques described in Appendix A and in reference [14]. The survival probability of \\(|A\\rangle\\) is simply the square of the amplitude, \\[P_{A}(t)=\\left|\\mathscr{A}_{A}(t)\\right|^{2}=\\left|\\int_{0}^{\\infty}e^{-i\\lambda t}\\left|\\mu_{\\lambda}^{A}\\right|^{2}d\\lambda+e^{-i\\Lambda_{A}t}\\left|\\mu_{\\Lambda_{A}}^{A}\\right|^{2}+e^{-i\\Lambda_{B}t}\\left|\\mu_{\\Lambda_{B}}^{A}\\right|^{2}\\right|^{2}. \\tag{20}\\] Similarly, if state \\(|B\\rangle\\) is occupied at t=0, then the survival probability of \\(|B\\rangle\\) will take the form \\[P_{B}(t)=\\left|\\mathscr{A}_{B}(t)\\right|^{2}=\\left|\\int_{0}^{\\infty}e^{-i\\lambda t}\\left|\\mu_{\\lambda}^{B}\\right|^{2}d\\lambda+e^{-i\\Lambda_{A}t}\\left|\\mu_{\\Lambda_{A}}^{B}\\right|^{2}+e^{-i\\Lambda_{B}t}\\left|\\mu_{\\Lambda_{B}}^{B}\\right|^{2}\\right|^{2}. \\tag{21}\\] ## IV Numerical calculations and results In this section, we discuss the numerical calculations of the survival probabilities \\(P_{A}(t)\\) and \\(P_{B}(t)\\). We choose the form factor to be \\[f(\\omega)=\\frac{\\sigma\\mu^{2}\\sqrt{\\omega}}{(\\omega-\\omega_{0})^{2}+\\mu^{2}}. \\tag{22}\\] The factor of \\(\\sqrt{\\omega}\\) in the numerator is a phase-space contribution. The rest of \\(f(\\omega)\\) is just the Lorentzian line shape. The function \\(f(\\omega)\\) peaks near \\(\\omega_{0}\\); its width is controlled by \\(\\mu\\) and its strength by \\(\\sigma\\). We require the strength of the perturbation to be weak relative to the eigenvalues of \\(H_{0}\\). In other words, we do not want the original system to change drastically. So we choose small values for the parameters \\(\\mu\\), \\(\\sigma\\) and \\(\\Omega\\) and set \\(\\mu=0.30\\), \\(\\sigma=0.11\\), and \\(\\Omega=0.04\\). We also want the bare bound states to become unstable in the presence of the perturbation \\(V\\). This can be achieved by setting their energies to lie in the physical continuum sufficiently above the threshold.
The physical continuum ranges from zero to infinity, and so we choose numerical values \\(E_{A}=2.00\\) and \\(E_{B}=2.10\\). For the choice of \\(E_{A}\\) and \\(E_{B}\\) here, Eq. (15) yields complex eigenvalues for the physical bound states. The condition on the physical bound states' stability requires that they have real, negative eigenvalues, and so here they are unstable and show up only as a spectral density. Since the physical spectrum is confined to the real axis and since there are no stable physical bound states, the last two terms in Eqs. (20) and (21) are zero. The physical continuum states \\(|\\psi_{\\lambda}\\rangle\\) form a complete set of states by themselves. The equations for survival probability then reduce to \\[P_{A}(t)=\\left|\\int_{0}^{\\infty}e^{-i\\lambda t}\\left|\\mu_{\\lambda}^{A}\\right|^{2}d\\lambda\\right|^{2} \\tag{23}\\] and \\[P_{B}(t)=\\left|\\int_{0}^{\\infty}e^{-i\\lambda t}\\left|\\mu_{\\lambda}^{B}\\right|^{2}d\\lambda\\right|^{2}. \\tag{24}\\] ### "Unmeasured" Evolution First, we study the survival probability of \\(|A\\rangle\\) (\\(P_{A}\\)) as a function of time. We then compare it to the survival probability of \\(|B\\rangle\\) (\\(P_{B}\\)) with \\(\\Omega\\to 0\\). In the second case the first bound state \\(|A\\rangle\\) is completely cut off from the rest of the system, as seen from Eq. (4). Then we are dealing with a system with only one bound state coupled to the continuum. The differences between these two cases show precisely the effects that the second bound state has on the survival probability of \\(|A\\rangle\\). Note that the second case is nothing more than the well-known Friedrichs-Lee model. The form factor \\(f(\\omega)\\) is peaked around \\(E_{B}\\) and so we have \\(\\omega_{0}=2.10\\). The "unmeasured" survival probabilities of the states \\(|A\\rangle\\) and \\(|B\\rangle\\) are shown as the dashed curves in Figs. 4 and 5, respectively. Notice that in both cases the decay is exponential at long times. Figure 5: Survival probability of \\(|B\\rangle\\) given by Eq. (24) with \\(\\Omega=0\\). The dashed line shows the “unmeasured” evolution of \\(|B\\rangle\\). The solid line shows the quantum Zeno effect appearing due to repeated measurements. The parameter values that were used are \\(E_{B}=2.10\\), \\(\\omega_{0}=2.10\\), \\(\\mu=0.30\\), \\(\\sigma=0.11\\), and \\(\\Omega=0\\). Figure 4: Survival probability of \\(|A\\rangle\\) given by Eq. (23). The dashed line shows the “unmeasured” evolution of \\(|A\\rangle\\). The solid line shows the effect of repeated measurements made at high frequencies leading to the Zeno effect. The dotted line shows the effect of repeated measurements made at lower frequencies leading to the anti-Zeno effect. The parameter values used are \\(E_{A}=2.00\\), \\(E_{B}=2.10\\), \\(\\omega_{0}=2.10\\), \\(\\mu=0.30\\), \\(\\sigma=0.11\\), and \\(\\Omega=0.04\\). ### Effective Evolution: Zeno and anti-Zeno Interrupting the system by changing the acceleration is a necessary step in the experiment by Fischer _et al_ to create the quantum Zeno effect or the anti-Zeno effect. The interruption resets the system and the tunneling process has to restart after the interruption period. In our model the shifting of \\(\\Omega\\) is analogous to changing the acceleration in the experiment. The time evolution of the system (when \\(\\Omega=0.04\\)) can be hindered by shifting \\(\\Omega\\) to a value much smaller than the difference between \\(E_{A}\\) and \\(E_{B}\\).
This effectively cuts off the oscillations between \\(|A\\rangle\\) and \\(|B\\rangle\\). During the interruptions, \\(|B\\rangle\\) is still connected to the continuum, just as in the experiment. The population of \\(|A\\rangle\\) remains constant during the interruption periods. The length of the interruption period is important for resolving the different momentum states in the experiment by Fischer _et al_, as shown in the last figure in [9]. We do not have this constraint in our model and our measurements can be treated as instantaneous (von Neumann type). The experimental results presented in [9] show the effective evolution of the atoms in the trap with the interruption periods removed. We also look at the effective evolution, with the interruption periods filtered out. Numerically, the procedure for obtaining the time evolution with interruptions is simple. We start by fixing the interval of time, \\(\\tau\\), between the measurement-induced interruptions. We start with a bare state with unit amplitude and compute its survival probability up to time \\(\\tau\\). At this point the measurement is assumed to reset the system. The initial bare state wave function had only one non-zero component when expressed in the basis of bare states. Time evolution of this state under the full Hamiltonian makes all three components non-zero in general. Resetting the system corresponds to setting the two new components that appeared as a result of the evolution back to zero. This new (un-normalized) state is the starting point for further evolution until the next interruption. This process is repeated several times to obtain the graph of the survival probability of the initial unstable state when it is subject to frequent interruptions. The solid line and the dotted line in Fig. 4 show how the effective time evolution of \\(|A\\rangle\\) can be hindered or accelerated by repeatedly interrupting the system. The second inflection point in the unmeasured evolution of the state \\(|A\\rangle\\) that allows us to obtain the anti-Zeno effect is present due to the second unstable state \\(|B\\rangle\\). In the second scenario, where we have set \\(\\Omega=0\\), there is only one unstable state in the system. From the graph of the survival probability of the unstable state without interruptions, shown by the dashed line in Fig. 5, we see that there is no way we can choose an interruption frequency that will lead to the anti-Zeno effect. On the other hand, interruptions made very frequently can lead to the quantum Zeno effect as shown by the solid line in Fig. 5. ## V Conclusion We present a simple theoretical model of the experiment done by Fischer _et al_. In our model, state \\(|A\\rangle\\) represents the motional ground state in their experiment. \\(|B\\rangle\\) represents a higher-energy motional bound state, while \\(|C\\Theta(\\omega)\\rangle\\) represents the continuum of available free-particle states in their experiment. The presence of the intermediate states is clearly stated in reference [11]. We study the survival probability of the ground state as it decays into the set of continuum states via an intermediate bound state. Our results are in agreement with the results of the experiment. We have shown how repeated interruptions of the time evolution of an unstable state can lead to the Zeno effect in our model. The anti-Zeno effect is also obtained in a similar fashion but we find that the system must have additional peculiarities for obtaining this effect.
In our model there is an additional unstable state that mediates the decay of the original unstable state. The presence of this new state is shown to produce an inflection point in the graph of the survival probability of the unstable state that is of interest. This inflection point defines a time scale at which repeated measurements may be done on the system to effectively speed up its decay and thus obtain the anti-Zeno effect. We also point out that the second bound state is not needed to obtain the Zeno effect. A generic decaying system can exhibit the Zeno effect if its time evolution is very frequently interrupted by measurements. We note here that a similar analysis has been done for oscillating systems by Panov [15] and on the Friedrichs Lee model by Antoniou _et al_ in [16]. ###### Acknowledgements. We thank Prof. E.C.G. Sudarshan for his support in this project. One of us (K.M.) thanks Shawn Rice and Laura Speck for proof reading the manuscript. A.S. acknowledges the support of US Navy - Office of Naval research through grant Nos. N00014-04-1-0336 and N00014-03-1-0639. ## Appendix A Orthonormality Here we want to show that the continuum states are orthonormal. That is \\[\\psi_{\\eta}^{*}\\psi_{\\lambda} = \\left(\\begin{array}{c}\\mu_{\\eta}^{A*}\\\\ \\mu_{\\eta}^{B*}\\\\ \\phi_{\\eta}^{*}(\\omega)\\end{array}\\right)\\left(\\mu_{\\lambda}^{A},\\mu_{\\lambda }^{B},\\phi_{\\lambda}(\\omega)\\right) \\tag{10}\\] \\[= \\mu_{\\eta}^{A*}\\mu_{\\lambda}^{A}+\\mu_{\\eta}^{B*}\\mu_{\\lambda}^{ B}+\\phi_{\\eta}^{*}(\\omega)\\phi_{\\lambda}(\\omega)\\] \\[= \\delta(\\lambda-\\eta).\\]We start by looking at the last term in Eq. (16). \\[\\phi_{\\eta}^{*}(\\omega)\\phi_{\\lambda}(\\omega) = \\frac{f^{*}(\\eta)f(\\lambda)}{\\beta^{-}(\\eta)(\\eta-i\\epsilon-\\lambda )}+\\frac{f^{*}(\\eta)f(\\lambda)}{\\beta^{+}(\\lambda)(\\lambda+i\\epsilon-\\eta)} \\tag{17}\\] \\[\\quad+\\frac{f^{*}(\\eta)f(\\lambda)}{\\beta^{-}(\\eta)\\beta^{+}( \\lambda)}\\times\\] \\[\\qquad\\int_{0}^{\\infty}d\\omega\\frac{|f(\\omega)|^{2}}{(\\lambda+i \\epsilon-\\omega)(\\eta-i\\epsilon-\\omega)}\\] \\[\\qquad\\qquad+\\delta(\\lambda-\\eta)\\] We can break up the integral in two integrals using partial fractions: \\[\\frac{1}{(\\lambda+i\\epsilon-\\omega)(\\eta-i\\epsilon-\\omega)}=\\frac {1}{\\eta-\\lambda-2i\\epsilon}\\times\\] \\[\\left(\\frac{1}{\\lambda+i\\epsilon-\\omega}-\\frac{1}{\\eta-i \\epsilon-\\omega}\\right). \\tag{18}\\] Using Eq. (18), we re-write the integral term in Eq. (17) as, \\[\\int_{0}^{\\infty}d\\omega\\frac{|f(\\omega)|^{2}}{(\\lambda+i\\epsilon -\\omega)(\\eta-i\\epsilon-\\omega)}=\\frac{1}{\\eta-\\lambda-2i\\epsilon}\\times\\] \\[\\left(\\int_{0}^{\\infty}d\\omega\\frac{|f(\\omega)|^{2}}{\\lambda+i \\epsilon-\\omega}-\\int_{0}^{\\infty}d\\omega\\frac{|f(\\omega)|^{2}}{\\eta-i \\epsilon-\\omega}\\right). \\tag{19}\\] Using Eqs. (11) and (12), we can re-write Eq. (19) in the following manner. \\[\\frac{1}{\\eta-\\lambda-2i\\epsilon}\\left(\\lambda+i\\epsilon-\\beta^ {+}(\\lambda)-E_{B}-\\frac{|\\Omega|^{2}}{\\lambda-E_{A}}\\right.\\] \\[\\qquad\\qquad\\left.-\\eta+i\\epsilon+\\beta^{-}(\\eta)+E_{B}+\\frac{| \\Omega|^{2}}{\\eta-E_{A}}\\right) \\tag{20}\\] which then reduces to \\[-1\\!-\\!\\frac{|\\Omega|^{2}}{(\\lambda-E_{A})(\\eta-E_{A})}\\!-\\!\\frac{\\beta^{-}( \\eta)}{\\lambda+i\\epsilon-\\eta}\\!-\\!\\frac{\\beta^{+}(\\lambda)}{\\eta-i\\epsilon- \\lambda}. \\tag{21}\\] The last two terms in Eq. (21) cancel the first two terms in Eq. 
(17), and it reduces to \\[\\phi_{\\eta}^{*}\\phi_{\\lambda}= - \\frac{f^{*}(\\eta)f(\\lambda)}{\\beta^{-}(\\eta)\\beta^{+}(\\lambda)} \\frac{|\\Omega|^{2}}{(\\eta-E_{A})(\\lambda-E_{A})} \\tag{22}\\] \\[\\qquad-\\frac{f^{*}(\\eta)f(\\lambda)}{\\beta^{-}(\\eta)\\beta^{+}( \\lambda)}+\\delta(\\lambda-\\eta).\\] Notice, the first two terms in the last equation are the negatives of \\({\\mu_{\\eta}^{A}}^{*}\\mu_{\\lambda}^{A}+{\\mu_{\\eta}^{B}}^{*}\\mu_{\\lambda}^{B}\\), and so the final result is \\[\\psi_{\\eta}^{*}\\psi_{\\lambda}=\\delta(\\lambda-\\eta). \\tag{23}\\] We have now shown that the set of continuum states is orthonormal. Using similar techniques, we can also show that the physical eigenstates form a complete set. ## References * (1) B. Misra and E. C. G. Sudarshan, J. Math Phys. **18**, 756 (1977). * (2) C. B. Chiu, E. C. G. Sudarshan, and B. Misra, Phys. Rev. D **16**, 520 (1977). * (3) B. Kaulakys and V. Gontis, Phys. Rev. A **56**, 1131 (1997). * (4) A. G. Kofman and G. Kurizki, Nature **405**, 546 (2000). * (5) P. Facchi, H. Nakazato, and S. Pascazio, Phys. Rev. Lett. **86**, 2699 (2001). * (6) P. Valanju, E. C. G. Sudarshan, and C. B. Chiu, Phys. Rev. D **21**, 1304 (1980). * (7) P. Valanju, Ph.D. thesis, The University of Texas at Austin (1980). * (8) W. M. Itano, D. J. Heinzen, J. J. Bollinger, and D. J. Wineland, Phys. Rev. A **41**, 2295 (1990). * (9) M. C. Fischer, B. Gutierrez-Medina, and M. G. Raizen, Phys. Rev. Lett. **87**, 040402 (2001). * (10) R. G. Winter, Phys. Rev. **123**, 1503 (1961). * (11) C. F. Bharucha, K. W. Madison, P. R. Morrow, S. R. Wilkinson, B. Sundaram, and M. G. Raizen, Phys. Rev. A **55**, R857 (1997). * (12) K. Friedrichs, Commun. Pure Appl. Math. **1**, 361 (1948). * (13) T. D. Lee, Phys. Rev. **95**, 1329 (1954). * (14) E. C. G. Sudarshan, in _Proc. Brandeis Summer Institute on Theoretical Physics_ (W. A. Benjamin, Inc., New York, 1962). * (15) A. D. Panov, Physics Lett. A **298**, 295 (2002). * (16) I. Antoniou, E. Karpov, G. Pronko, and E. Yarevsky, Phys. Rev. A **63**, 062110 (2001).
We analyze the experimental observations reported by Fischer _et al_ [in Phys. Rev. Lett. **87**, 040402 (2001)] by considering a system of coupled unstable bound quantum states \\(|A\\rangle\\) and \\(|B\\rangle\\). The state \\(|B\\rangle\\) is coupled to a set of continuum states \\(|C\\Theta(\\omega)\\rangle\\). We investigate the time evolution of \\(|A\\rangle\\) when it decays into \\(|C\\Theta(\\omega)\\rangle\\) via \\(|B\\rangle\\), and find that frequent measurements on \\(|A\\rangle\\) lead to either the quantum Zeno effect or the anti-Zeno effect, depending on the frequency of the measurements. We show that it is the presence of \\(|B\\rangle\\) that allows for the anti-Zeno effect. pacs: 03.65.Xp, 03.67.Lx
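For readers who want to reproduce the qualitative behaviour of Sects. III and IV numerically, here is an illustrative sketch under stated assumptions: the continuum is discretized on a finite frequency grid (a standard approximation, not the paper's analytic method), the Hamiltonian of Eq. (4) becomes a finite matrix, and the measurement-induced interruptions of Sect. IV are modeled by zeroing the \\(|B\\rangle\\) and continuum amplitudes every \\(\\tau\\) without renormalization. The grid extent, grid size, time step and \\(\\tau\\) are assumptions of this sketch.

```python
# Illustrative numerical sketch (not the authors' code).
import numpy as np

E_A, E_B, Omega = 2.00, 2.10, 0.04            # values quoted in Sect. IV
omega0, mu, sigma = 2.10, 0.30, 0.11

N, omega_max = 2000, 8.0                      # continuum discretization (assumed)
omega = np.linspace(1e-4, omega_max, N)
d_omega = omega[1] - omega[0]
f = sigma * mu**2 * np.sqrt(omega) / ((omega - omega0)**2 + mu**2)   # Eq. (22)

H = np.zeros((N + 2, N + 2))                  # basis: |A>, |B>, |C Theta(omega_k)>
H[0, 0], H[1, 1] = E_A, E_B
H[0, 1] = H[1, 0] = Omega
H[2:, 2:] = np.diag(omega)
H[1, 2:] = H[2:, 1] = f * np.sqrt(d_omega)    # discretized continuum coupling

evals, evecs = np.linalg.eigh(H)

def survival_A(t_final, tau=None, dt=0.25):
    """|<A|psi(t)>|^2 on a time grid; if tau is given, reset the |B> and
    continuum amplitudes to zero every tau, as described in Sect. IV."""
    U = (evecs * np.exp(-1j * evals * dt)) @ evecs.T    # exp(-i H dt)
    steps = int(round(t_final / dt))
    steps_per_tau = None if tau is None else int(round(tau / dt))
    psi = np.zeros(N + 2, dtype=complex)
    psi[0] = 1.0
    probs = [1.0]
    for k in range(1, steps + 1):
        psi = U @ psi
        if steps_per_tau and k % steps_per_tau == 0:
            psi[1:] = 0.0                     # the "measurement" reset
        probs.append(abs(psi[0]) ** 2)
    return np.arange(steps + 1) * dt, np.array(probs)

t, p_free = survival_A(200.0)                 # "unmeasured" evolution
_, p_meas = survival_A(200.0, tau=5.0)        # evolution interrupted every tau
```

Comparing `p_meas` for different values of `tau` against `p_free` is expected to reproduce the qualitative behaviour of Fig. 4: very small `tau` suppresses the decay (Zeno), while a `tau` comparable to the time scale of the inflection in the unmeasured curve can enhance it (anti-Zeno).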
# Dynamic Meteorology at the Photosphere of HD 209458b Curtis S. Cooper1, Adam P. Showman1 Footnote 1: affiliation: Department of Planetary Sciences and Lunar and Planetary Laboratory, The University of Arizona, 1629 University Blvd., Tucson, AZ 85721 USA; [email protected], [email protected] ## 1. Introduction The transiting planet HD 209458b orbits very closely (0.046 AU) to its parent star with a period of 3.5257 days (Charbonneau et al., 2000; Henry et al., 2000). From transit depth measurements, the mass and radius of HD 209458b are known fairly accurately: \\(0.69\\pm 0.05\\,M_{\\rm Jupiter}\\) and \\(1.32\\pm 0.05\\,R_{\\rm Jupiter}\\)(Laughlin et al., 2005). The age of the system is estimated to be 5.2 Gyr, with uncertainties of \\(\\sim\\) 10%. Furthermore, owing to careful measurements of the stellar spectrum during the planet's transit, much is now known about HD 209458b's atmospheric properties (Brown et al., 2001; Charbonneau et al., 2002; Vidal-Madjar et al., 2003, 2004). Considerable work has been done to model the spectra, physical structure, and time evolution of extrasolar giant planets (EGPs) (Burrows et al., 2004; Chabrier et al., 2004; Iro et al., 2005). Relatively less effort, however, has been spent on EGP meteorologies; i.e., global temperature and pressure fluctuations, wind velocities, and cloud properties. Preliminary simulations of the circulation by Showman & Guillot (2002) and the shallow-water calculations of Cho et al. (2003) suggest that close-in systems like HD 209458b--with strong day-night heating contrasts and modest rotation rates--occupy a dynamically interesting regime. In this article, we report on the results from a multi-layer global atmospheric dynamics model of HD 209458b. Our results--the existence of a fast superrotating equatorial jet at the photosphere that blows the hottest regions downwind--qualitatively agree with previous three-dimensional numerical simulations by Showman & Guillot (2002). The simulation presented here, however, adopts more realistic radiative-equilibrium temperature profiles and timescales, superior resolution, and a domain that extends deeper into the interior. In particular, the simulations of Showman & Guillot (2002) could not accurately predict the day-night temperature difference at the photosphere. We here predict this temperature difference to be \\(\\sim\\) 500 K, in agreement with order of magnitude estimates by Showman & Guillot (2002). Our calculations have implications for the planet's infrared (IR) light curve. With the first infrared detections of the transiting EGPs TrES-1 and HD 209458b (Charbonneau, D., et al., 2005; Deming et al., 2005), observational constraints on the meteorologies of close-in giant planets will likely be possible over the next two years. ## 2. Model Our model of the general circulation of HD 209458b integrates the primitive equations of dynamical meteorology using Version 2 of the ARIES/GEOS Dynamical Core (Suarez & Takacs, 1995), which we hereafter abbreviate as AGDC2. The primitive equations filter vertically propagating sound waves but retain horizontally propagating external sound waves (called Lamb waves) (Kalnay, 2003). The grid spacing is \\(5^{\\circ}\\times 4^{\\circ}\\) in longitude and latitude, respectively (\\(\\sim\\) 7000 km near the equator). We use 40 vertical levels spaced evenly in log-pressure between 1 mbar and 3 kbar. This spacing implies that we resolve each pressure scale height with \\(\\sim\\) 2 model layers. 
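As a quick consistency check of the vertical grid just described, the sketch below builds the 40 log-spaced pressure levels and evaluates the pressure scale height \\(H=RT/(\\mu g)\\) for the quoted \\(\\mu\\) and \\(g\\); the temperatures are illustrative round numbers, not model output.

```python
# Consistency check of the vertical grid: 40 levels evenly spaced in log-pressure
# between 1 mbar and 3 kbar, and the scale height H = R T / (mu g) for the stated
# mu and g.  The temperatures below are illustrative.
import numpy as np

R, mu, g = 8.314, 1.81e-3, 9.42        # J mol^-1 K^-1, kg mol^-1, m s^-2

p_levels = np.logspace(np.log10(1e-3), np.log10(3e3), 40)   # bar
levels_per_decade = (len(p_levels) - 1) / np.log10(p_levels[-1] / p_levels[0])
print(f"{levels_per_decade:.1f} levels per decade of pressure")

for T in (1000.0, 2000.0, 3000.0):
    print(f"T = {T:4.0f} K -> scale height ~ {R * T / (mu * g) / 1e3:4.0f} km")
# ~490-1460 km over this temperature range, consistent with the 500-1500 km
# quoted next, and roughly two to three model layers per scale height.
```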
The scale height ranges from 500-1500 km over the domain of integration. Following Guillot et al. (1996) and Showman & Guillot (2002), we assume synchronous rotation. The acceleration of gravity, which does not vary significantly over the pressure range considered, is set to \\(g\\) = 9.42 m s\\({}^{-2}\\). We take the mean molecular weight and heat capacity to be constant: \\(c_{p}\\) = \\(1.43\\times 10^{4}\\) J kg\\({}^{-1}\\) K\\({}^{-1}\\) and \\(\\mu\\) = \\(1.81\\times 10^{-3}\\) kg mol\\({}^{-1}\\), which neglects the \\(\\sim\\) 30% variations in these parameters caused by dissociation of molecular hydrogen at the deepest pressures in the model. The intense stellar irradiation extends the radiative zone of HD 209458b all the way down to \\(\\sim\\) 1 kbar pressure, with a depth \\(\\sim\\) 5-10% of the planetary radius (Burrows et al., 2003; Chabrier et al., 2004). Our integrations do not solve the equation of radiative transfer directly. Rather, we treat the effects of the strong stellar insolation using a Newtonian radiative scheme in which the thermodynamic heating rate \\(q\\) [W kg\\({}^{-1}\\)] is given by \\[\\frac{q}{c_{p}}=-\\frac{T(\\lambda,\\phi,p,t)-T_{\\rm eq}(\\lambda,\\phi,p)}{\\tau_{ \\rm rad}(p)}, \\tag{1}\\] which relaxes the model temperature \\(T\\) toward a prescribed radiative-equilibrium temperature \\(T_{\\rm eq}\\). In Equation 1, \\(\\lambda\\), \\(\\phi\\), \\(p\\)and \\(t\\) are longitude, latitude, pressure, and time. The radiative-equilibrium temperature \\(T_{\\rm eq}\\) and the timescale for relaxation to radiative equilibrium \\(\\tau_{\\rm rad}\\) are inputs of the AGDC2. We rely on the radiative-equilibrium calculations of Iro et al. (2005) to specify \\(T_{\\rm eq}(\\lambda,\\phi,p)\\) and \\(\\tau_{\\rm rad}(p)\\). Iro et al. (2005) use the multi-wavelength atmosphere code of Goukenleuque et al. (2000) to calculate the radiative-equilibrium temperature structure of HD 209458b for a single vertical column, hereafter denoted as \\(T_{\\rm hor}(p)\\). Iro et al. (2005) assume globally averaged insolation conditions (i.e., they re-distribute the incident solar flux over the entire globe). Their calculation includes the opacities appropriate for a solar-abundance distribution of gas (Anders & Grevesse, 1989), including the neutral alkali metals Na and K. Iro et al. (2005) do not consider condensation or the scattering and absorption of radiation by silicate clouds, which can conceivably form near the photosphere (see e.g., Fortney et al., 2003). Condensates can potentially have a significant effect on the planet's radiation balance, depending on the depth at which they form, the particle sizes, and the vertical extent of cloud layers (Cooper et al., 2003). The net direction of this effect (i.e., to warm the atmosphere or to cool it) is as yet unclear and remains a subject for future work. Iro et al. (2005) compute radiative-relaxation timescales as a function of \\(p\\) from 0.01 mbar down to 10 bar by applying a Gaussian perturbation to the radiative-equilibrium temperature profile at each vertical level. We use their radiative-relaxation timescales for \\(\\tau_{\\rm rad}(p)\\) in Equation 1. At pressures exceeding 10 bars, radiative relaxation is negligible compared to the dynamical timescales considered here. We simply assume \\(q=0\\) on all layers from 10 bar to 3 kbar. To account for the longitude-latitude dependence of \\(T_{\\rm eq}\\) in Equation 1, we use a simple prescription. 
We choose the substellar point to be at \\((\\lambda,\\phi)=(0,0)\\). On the dayside, we set \\[T_{\\rm eq}^{4}(\\lambda,\\phi,p)=T_{\\rm night,eq}^{4}(p)+[T_{\\rm ss,eq}^{4}(p)-T _{\\rm night,eq}^{4}(p)]\\cdot\\cos(\\lambda)\\cos(\\phi), \\tag{2}\\] where \\(T_{\\rm ss,eq}(p)\\) and \\(T_{\\rm night,eq}(p)\\) are the radiative-equilibrium temperature profiles of the substellar point and night side (assumed to be uniform over the dark hemisphere), respectively. Equation 2 implies that the hottest \\(T_{\\rm eq}\\) profile is at the substellar point. On the nightside, we set \\(T_{\\rm eq}(\\lambda,\\phi,p)\\) equal to \\(T_{\\rm night,eq}(p)\\). We treat the radiative-equilibrium temperature difference between the substellar point and the nightside, \\(\\Delta T_{\\rm eq}(p)=T_{\\rm ss,eq}(p)-T_{\\rm night,eq}(p)\\), as a free parameter that is a specified function of pressure. To determine \\(T_{\\rm ss,eq}(p)\\) and \\(T_{\\rm night,eq}(p)\\) from Iro et al. (2005)'s profile and our specified \\(\\Delta T_{\\rm eq}\\), we horizontally average \\(T_{\\rm eq}^{4}\\) on the top layer of our model over the sphere and set it equal to \\(T_{\\rm hor}^{4}\\) at that pressure. Based on the \\(\\sim\\) 1000 K day-night temperature differences from Iro et al. (2005), we use \\(\\Delta T_{\\rm eq}=1000\\) K for pressures less than 100 mb and decrease it logarithmically with pressure down to 530 K at the base of the heated region (10 bar). Newtonian cooling is a crude approximation to the true radiative transfer, but the scheme is computationally fast--hence allowing extensive explorations of parameter space--and gives us direct control over the model's diabatic heating. The model's initial temperature was set to \\(T_{\\rm night,eq}(\\rho)\\) everywhere over the globe; there were no initial winds. We set the time step equal to 50 s, which is much smaller than the time step required for numerical stability according to the Courant-Friedrichs-Lewy (CFL) criterion (Kalnay, 2003). ## 3. Results & Discussion By 5000 days of simulation time, the simulation has reached a statistical steady-state, at least down to the 3 bar level, which is the level above which 99% of the stellar photons are absorbed (Iro et al., 2005). Deeper than 3 bars, the kinetic energy continues to increase with time as these layers respond to the intense irradiation on relatively long timescales \\(\\tau_{\\rm rad}\\sim\\) 1 yr. At pressures less than 10 bars, the model rapidly develops strong winds and temperature variability in response to the imposed day-night heating contrast. The upper atmosphere is nearly in radiative equilibrium (Figure 1a), with temperature contrasts of \\(\\sim\\) 1000 K. This results from the fact that the radiative-equilibrium time constant \\(\\tau_{\\rm rad}\\) at 2 mbar is only \\(\\sim\\) 1 hour, which is much shorter than the timescale for winds to advect heat across a hemisphere. At 2 mbar pressures, supersonic winds exceeding 9 km s\\({}^{-1}\\) appear at high latitudes, with strong north-south as well as east-west flow. Supersonic winds are plausible in the dynami Figure 1.— Snapshot at 5000 Earth days of stimulated temperature (grayscale) and winds (arrows) on three isobars: 2.5 mbar, 220 mbar, and 19.6 bars in (a), (b), and (c), respectively. The substellar point is at \\((0,0)\\) in longitude and latitude. Peak winds are 9.2, 4.1, and 2.8 km sec\\({}^{-1}\\) in (a), (b), and (c). 
For comparison, the planet Neptune has high-velocity zonal jets with wind speeds approaching the local speed of sound (Limaye & Sromovsky, 1991), but the radiative forcing is a million times weaker than it is for HD 209458b. In contrast, the flow near the photosphere, near 220 mbar, is dominated by an eastward jet extending from the equator to mid-latitudes (Figure 1b). The temperature contrasts reach \(\sim\) 500 K at this level, with the hottest part of the atmosphere advected \(\sim\) 60\({}^{\circ}\) downstream from the substellar point by the 4 km s\({}^{-1}\) eastward jet. Here, the time constant for relaxation to the radiative equilibrium \(\tau_{\rm rad}\) is several Earth days. The downstream advection of the hottest regions results from the fact that the radiative and advection timescales are comparable at this level. The small-scale bar-like features visible in the middle frame of Figure 1 propagate nearly horizontally westward relative to the flow at \(\sim\) 3 km s\({}^{-1}\), which is close to the speed of sound. These features are most consistent with Lamb waves (Kalnay, 2003, p. 42). At 19.6 bar, the equatorial winds remain extremely fast--2.8 km s\({}^{-1}\)--but the temperature structure exhibits little longitudinal variability (Figure 1c).

Our model's circulation results entirely from radiative heating occurring at pressures less than 10 bars. Therefore, any winds or temperature variability that develops at pressures exceeding 10 bars results solely from downward transport of energy from the overlying heated layers by vertical advection or wave transport. The layers between 3-30 bars contain \(\sim\) 70% of the atmosphere's kinetic energy. Lower pressures have fast winds but little mass, while the winds drop rapidly to zero at pressures exceeding \(\sim\) 30 bars. The essential assumption of the primitive equations is hydrostatic balance (Holton, 1992; Kalnay, 2003), an approximation generally valid for shallow flow. This applies to the case of HD 209458b's radiative zone, which has a horizontal to vertical aspect ratio of \(\sim\) 100. Analysis of the terms in the full Navier-Stokes vertical momentum equation reveals that, even in the supersonic flow regime obtained here, the vertical acceleration and curvature terms would have magnitudes only \(\sim\) 1% and \(\sim\) 10% that of the hydrostatic terms at the photosphere and top of the model, respectively. The caveat to this result is that vertical accelerations can still be important for sub-grid scale structures. These simulations also do not include the possible effects of vertically propagating shocks, which can conceivably dissipate atmospheric kinetic energy. To confirm the effects shown in Figure 1, we have run additional simulations using 1D radiative-equilibrium temperature profiles from Burrows et al. (2003) and Chabrier et al. (2004), which are significantly hotter than that of Iro et al. (2005) due to differing assumptions about the heat redistribution.
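For reference, the day-night forcing of Equation 2 can be written down compactly. The sketch below is illustrative only, not the model code: \(T_{\rm night,eq}\) and \(\Delta T_{\rm eq}\) are passed in as given numbers on a single pressure level, rather than being derived from the horizontal average against \(T_{\rm hor}(p)\) described above.

```python
import numpy as np

def t_eq_field(lam, phi, T_night_eq, dT_eq):
    """Radiative-equilibrium temperature from Equation 2 on one pressure level.

    lam, phi   : longitude and latitude [radians]; substellar point at (0, 0)
    T_night_eq : nightside equilibrium temperature on this level [K]
    dT_eq      : substellar-minus-nightside contrast on this level [K]
    Dayside: T_eq^4 interpolates between the nightside and substellar values
    with a cos(lam)*cos(phi) weight; nightside: T_eq = T_night_eq.
    """
    lam, phi = np.broadcast_arrays(np.asarray(lam, float), np.asarray(phi, float))
    T_ss_eq = T_night_eq + dT_eq
    mu = np.cos(lam) * np.cos(phi)        # cosine of the angle from the substellar point
    T4 = np.where(mu > 0.0,
                  T_night_eq**4 + (T_ss_eq**4 - T_night_eq**4) * mu,
                  T_night_eq**4)
    return T4**0.25

# At the substellar point the field returns T_night_eq + dT_eq exactly, and it
# falls smoothly to T_night_eq at the terminator (mu = 0).
print(float(t_eq_field(0.0, 0.0, 800.0, 1000.0)))   # ~1800 K (up to rounding)
```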
We have also experimented with changing the value of the free parameter \(\Delta T_{\rm eq}\), which controls the strength of the radiative forcing in our Newtonian cooling scheme (Equation 1). We have run simulations at \(\Delta T_{\rm eq}\) of 100 K, 250 K, 500 K, 750 K, and 1000 K. The essential features of the simulation presented here are representative of the results of the other simulations performed: they all develop a stable superrotating jet at the equator extending to the mid-latitudes and a hot region of atmosphere downwind of the substellar point. But for a given \(\Delta T_{\rm eq}\), the simulated temperature _differences_ are not affected by which group's radiative-equilibrium temperature profile is used. For values of \(\Delta T_{\rm eq}\) between 500-1000 K, the simulations all produce temperature contrasts at the photosphere ranging from 300-600 K, with peak equatorial wind speeds in the range 2-5 km s\({}^{-1}\). The simulations employing the atmospheric profiles of Burrows et al. (2003) and Chabrier et al. (2004) do, however, produce hotter (by \(\sim\) 500 K) mean photospheric temperatures than our nominal case.

We have also run the model with a different initial condition. We started this alternate simulation with identical input parameters to the simulation shown in Figure 1 but with a retrograde zonal wind profile: \(u=-3\) km s\({}^{-1}\) cos\({}^{4}(\phi)\) tan\({}^{-1}\)(2 bar\(/p\)). We set the meridional wind \(v\) to zero and set the pressures to be in gradient-wind balance with the initial winds (Holton, 1992). After 5000 days of integration time, a strong superrotating jet at the equator at 220 mbar develops in the simulation, with maximum wind speeds of 3.4 km s\({}^{-1}\) and temperature contrasts of \(\sim\)430 K. The similarity with the simulation presented in Figure 1 shows that the overall flow geometry is not strongly sensitive to the initial conditions.

Our results differ from the one-layer shallow-water simulations of Cho et al. (2003), who find that the mean equatorial flow for HD 209458b is westward. Shallow-water turbulence simulations consistently produce westward equatorial flow, even for planets such as Jupiter and Saturn whose equatorial jets are eastward (Cho & Polvani, 1996; Iacono et al., 1999a,b; Peltier & Stuhne, 2001; Showman, 2004). This effect may result from the exclusion of three-dimensional momentum-transport processes in one-layer models. For example, three-dimensional effects are important in allowing equatorial superrotation in numerical models of Venus, Earth, and Jupiter (Del Genio & Zhou, 1996; Saravanan, 1993; Williams, 2003). Nevertheless, the shallow-water turbulence models successfully capture essential aspects of mid-latitude jets for giant planets in our solar system (Cho & Polvani, 1996).

Figure 2.— Views of the predicted flux emitted from HD 209458b as it would appear from Earth at four orbital phases: (a) in transit, (b) one-quarter after the transit, (c) in secondary eclipse, and (d) one-quarter before the next transit. The planetary rotation axes are vertical, with the superrotating jet seen in Figure 1 going from left to right in each panel. The temperature on this layer ranges from 1011 K (dark) to 1526 K (bright). In radiative equilibrium, panel (a) would be darkest and (c) would be brightest. The globes show that a difference in observed flux from the planet between the leading and trailing phases of its orbit—panels (b) and (d)—is a signature of winds.
## 4. Orbital phases and predicted lightcurve

To illustrate the observable effects of the circulation, Figure 2 shows orthographic projections (Snyder 1987) of the blackbody flux of HD 209458b on the 220-mbar level. The key feature is the hot region downstream from the substellar point, which faces Earth after the transit and before the secondary eclipse. This pattern differs drastically from that expected in the absence of winds, in which case the hottest regions would be at the substellar point. We show the infrared light curve of the planet as predicted by our simulations in Figure 3. The circles represent the total flux received at Earth from the planet every 10\({}^{\circ}\) in its orbit. The points are derived by integration of the radiation intensity emitted into a solid angle projected in the direction toward Earth. We assume in this calculation that each column of the atmosphere at the photosphere emits radiation with the intensity of a blackbody, \(\sigma T^{4}/\pi\), with temperatures varying as shown in Figure 1(b). We use 223 mbar for the photospheric pressure, which is the layer closest to where the effective temperature in the model of Iro et al. (2005) equals their computed actual temperature. The model predicts a phase lead of 60\({}^{\circ}\)--or 14 hours--between peak radiation from the planet and the secondary eclipse, when the illuminated hemisphere faces Earth. Based on the \(\sim\) 500 K temperature contrasts shown in Figure 3, we derive a ratio of 2.2 between the maximum and minimum flux. To confirm the validity of this prediction, we performed analogous light curve calculations assuming the IR flux is emitted from a single pressure level ranging from 150-450 mbar, where the photosphere conceivably could be. The magnitude of the flux ratio ranges from 3.2 to 1.4 over this range of pressures. Our predicted phase lead of 60\({}^{\circ}\) shifts by \(\pm\)20\({}^{\circ}\) over this range of pressures. These effects are inversely correlated: if the photosphere pressure is less than 220 mbar, then the flux ratio increases but the phase shift decreases and vice-versa. Nevertheless, the possible formation of clouds high in the atmosphere of HD 209458b (\(p<200\) mbar)--which we have not treated here--may significantly alter the radiation budget. Additionally, uncertainties in the input parameters and the Newtonian cooling approximation limit the precision with which we can predict the magnitude of the effects described above. A major future advance in the characterization of close-in EGP atmospheres will come from coupling dynamics and radiative transfer directly. We note, however, that the inputs to such a model are likely to be somewhat hypothetical as well, given the limited constraints available from observations.

Special thanks to T. Guillot, N. Iro, and B. Bezard for valuable guidance on the radiative time constant prior to publication of their results. Thanks also to the referee, J.W. Barnes, J.J. Fortney, P.J. Gierasch, and many others for advice on the project and manuscript. Figures created using the free Python Numarray and PLplot libraries. This research was supported by NSF grant AST-0307664 and NASA GSRP NGT5-50462.

## References

* Anders, E., & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197
* Brown, T. M., Charbonneau, D., Gilliland, R. L., Noyes, R. W., & Burrows, A. 2001, ApJ, 552, 699
* Burrows, A., Hubeny, I., Hubbard, W. B., Sudarsky, D., & Fortney, J. J.
2004, ApJ, 610, L53
* Burrows, A., Sudarsky, D., & Hubbard, W. B. 2003, ApJ, 594, 545
* Chabrier, G., Barman, T., Baraffe, I., Allard, F., & Hauschildt, P. H. 2004, ApJ, 603, L53
* Charbonneau, D., et al. 2005, ApJ, 626, 523
* Charbonneau, D., Brown, T. M., Latham, D. W., & Mayor, M. 2000, ApJ, 529, L45
* Charbonneau, D., Brown, T. M., Noyes, R. W., & Gilliland, R. L. 2002, ApJ, 568, 377
* Cho, J. Y-K., Menou, K., Hansen, B. M. S., & Seager, S. 2003, ApJ, 587, L117
* Cho, J. Y-K., & Polvani, L. M. 1996, Science, 273, 335
* Cooper, C. S., Sudarsky, D., Milsom, J. A., Lunine, J. I., & Burrows, A. 2003, ApJ, 586, 1320
* Del Genio, A. D., & Zhou, W. 1996, Icarus, 120, 332
* Deming, D., Seager, S., Richardson, L., & Harrington, J. 2005, Nature, 434, 740
* Fortney, J. J., Sudarsky, D., Hubeny, I., Cooper, C. S., Hubbard, W. B., Burrows, A., & Lunine, J. I. 2003, ApJ, 589, 615
* Goukenleuque, C., Bezard, B., Joguet, B., Lellouch, E., & Freedman, R. 2000, Icarus, 143, 308
* Guillot, T., Burrows, A., Hubbard, W. B., Lunine, J. I., & Saumon, D. 1996, ApJ, 459, L35
* Henry, G. W., Marcy, G. W., Butler, R. P., & Vogt, S. S. 2000, ApJ, 529, L41
* Holton, J. R. 1992, An Introduction to Dynamic Meteorology, 3rd ed. (International Geophysics Series; San Diego: Academic Press)
* Iacono, R., Struglia, M. V., & Ronchi, C. 1999a, Physics of Fluids, 11, 1272
* Iacono, R., Struglia, M. V., Ronchi, C., & Nicastro, S. 1999b, Nuovo Cimento C (Geophysics and Space Physics), 22, 813
* Iro, N., Bezard, B., & Guillot, T. 2005, A&A, 436, 719
* Kalnay, E. 2003, Atmospheric Modeling, Data Assimilation and Predictability (Cambridge: Cambridge University Press)
* Laughlin, G., Wolf, A., Vanmunster, T., Bodenheimer, P., Fischer, D., Marcy, G., Butler, P., & Vogt, S. 2005, ApJ, 621, 1072
* Limaye, S. S., & Sromovsky, L. A. 1991, J. Geophys. Res., 96, 18941
* Peltier, W., & Stuhne, G. 2001, in Meteorology at the Millennium, "The upscale turbulent cascade: shear layers, cyclones, and gas giant bands" (New York: Academic Press), 43-61
* Saravanan, R. 1993, Journal of Atmospheric Sciences, 50, 1211
* Showman, A. P. 2004, Bull. Amer. Astron. Soc., 36, 1135
* Showman, A. P., & Guillot, T. 2002, A&A, 385, 166
* Snyder, J. P. 1987, Map Projections--A Working Manual (U.S. Geological Survey Professional Paper 1395; Washington, D.C.: U.S. Government Printing Office)
* Suarez, M. J., & Takacs, L. L. 1995, Technical Report Series on Global Modeling and Data Assimilation, Vol. 5: Documentation of the ARIES/GEOS Dynamical Core, Version 2
* Vidal-Madjar, A., et al. 2004, ApJ, 604, L69
* Vidal-Madjar, A., Lecavelier des Etangs, A., Desert, J.-M., Ballester, G. E., Ferlet, R., Hebrard, G., & Mayor, M. 2003, Nature, 422, 143
* Williams, G. P. 2003, Journal of Atmospheric Sciences, 60, 1270

Figure 3.— Synthetic lightcurve of the 220 mbar layer of our model atmosphere for HD 209458b. This simulation predicts peak emission from the planet 14 hours _before_ the time of the secondary eclipse.
We calculate the meteorology of the close-in transiting extrasolar planet HD 209458b using a global, three-dimensional atmospheric circulation model. Dynamics are driven by perpetual irradiation of one hemisphere of this tidally locked planet. The simulation predicts global temperature contrasts of \(\sim\) 500 K at the photosphere and the development of a steady superrotating jet. The jet extends from the equator to mid-latitudes and from the top model layer at 1 mbar down to 10 bars at the base of the heated region. Wind velocities near the equator exceed 4 km s\({}^{-1}\) at 300 mbar. The hottest regions of the atmosphere are blown downstream from the substellar point by \(\sim\) 60\({}^{\circ}\) of longitude. We predict from these results a factor of \(\sim\) 2 ratio between the maximum and minimum observed radiation from the planet over a full orbital period, with peak infrared emission preceding the time of the secondary eclipse by \(\sim\) 14 hours. Subject headings: planets and satellites: general--planets and satellites: individual (HD 209458b)--methods: numerical--atmospheric effects
**Effect of the troposphere on surface neutron counter measurements** **K.L. Aplin (a), R.G. Harrison (b) and A.J. Bennett (b)** _(a) Space Science and Technology Department, Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, OX11 0QX UK ([email protected]) (b) Department of Meteorology, The University of Reading, PO Box 243, Earley Gate, Reading, Berks, RG6 6BB UK ([email protected], [email protected])_

# Introduction

The possible effect of cosmic rays on Earth's climate system is controversial, as many physical mechanisms have been proposed, but none proven. In the last few years, evidence for two principal mechanisms by which atmospheric ion formation by cosmic rays could affect Earth's climate has emerged. These are described in e.g. Harrison and Carslaw (2003) and are briefly summarised below:

1. ion-mediated particle formation, which may lead to the growth of cloud condensation nuclei (e.g., Svensmark and Friis-Christensen, 1997; Yu and Turco, 2001)
2. enhanced removal of charged droplets from clouds (e.g., Tinsley, 2000; Tripathi and Harrison, 2002)

Theoretical work, model predictions and laboratory experiments exist supporting these hypotheses, but results from relevant atmospheric experiments are still relatively sparse. Accumulation of further atmospheric data to corroborate or refute the existence of any of these effects requires quantification of ion concentrations in Earth's troposphere and stratosphere. Ideally, direct investigation of ionisation effects on climate would require frequent vertical profiles of meteorological and ion measurements over a wide spatial area. Whilst new atmospheric ion instrumentation is becoming available (Aplin and Harrison, 2001; Holden, 2003), ion data are still relatively rare. Routine ion measurements at the surface exist at very few locations (e.g. Horrak _et al_, 2000), and ion data outside the atmospheric boundary layer are even sparser. This is the reason why the cosmic ray flux, responsible for almost all atmospheric ionisation except close to the continental surface, is often used as a proxy for atmospheric ion concentrations. Cosmic ray fluxes have been routinely monitored by a surface neutron counter network since the International Geophysical Year in 1956/7. There are now \(\sim\)50 monitors well-distributed worldwide, and the data are readily available on the Internet (Pyle, 2000). Many studies of cosmic ray effects on meteorological parameters have been based on data from this neutron monitor network. These studies rely on the assumptions that neutron monitor data are, first, a good proxy for the ionisation rate from cosmic rays and, secondly, independent of atmospheric parameters. Whilst the validity of these assumptions was established for the purposes needed at the time during the development of the neutron counter fifty years ago, investigations have not been extended using modern techniques. Many of the possible cosmic ray-climate signals appear to be at the few percent level (e.g. Svensmark and Friis-Christensen, 1997), so there is a need to verify that such signals are not spuriously generated by residual atmospheric effects on the cosmic ray measurements, incompletely removed by the atmospheric corrections. In this paper, historical and modern data are used to develop the early work relating atmospheric processes to surface cosmic ray intensities.
Background on cosmic ray propagation in the atmosphere is given in Section 2, followed by a description of the geophysical and meteorological parameters that were understood in the 1950s to modulate cosmic radiation. In Section 4 the development of cosmic ray neutron monitoring and the routine correction for atmospheric pressure are briefly discussed. Finally, modern meteorological data are used to show that there are residual tropospheric effects on pressure corrected neutron counts, which may cause correlations between cosmic ray fluxes and other atmospheric parameters.

## 2 Cosmic rays in the atmosphere

Most atmospheric ions originate from cosmic ray ionisation. Near the continental surface, natural radioisotopes contribute about 80% of the ionisation rate, defined as the number of ions formed per unit volume per second, but this contribution decreases with height and is negligible outside the atmospheric boundary layer. Cosmic rays are high-energy ionising radiation, entering Earth's atmosphere from space; most are now thought to come from supernovae (Shaviv, 2002; Wolfendale, 2003). Primary cosmic rays are energetic particles, which interact with atmospheric air molecules when the air becomes sufficiently dense, at a pressure surface of \(\sim\)100-200 hPa (\(\sim\)100-200 g cm\({}^{-2}\)). These pressures are found at \(\sim\)11-16 km in the upper troposphere or lower stratosphere, depending on season and latitude. When cosmic rays interact with other air molecules, secondary subatomic particles, usually mesons, are produced, which interact with more air molecules, and so on as the air density (pressure) increases down to the surface, to produce a "nucleonic cascade" of particles including high-energy and thermal neutrons, protons, muons and electrons (Simpson _et al._, 1953). Thermal neutrons interact strongly with atmospheric water vapour, whereas the protons, muons and electrons lose energy by ionisation when they interact with atmospheric air molecules. The ionisation rate therefore decreases from a maximum at \(\sim\)15 km, where the flux of secondary particles first exceeds the primary particle flux, to the top of the boundary layer where ionisation from natural radioactivity starts to dominate over land. Cosmic rays remain the dominant source of ionisation in the oceanic boundary layer.

## 3 Modulation of atmospheric ionisation

### 3.1 Geophysical effects

Simpson (2000) identified the selection and modulation of cosmic ray energies by geomagnetic latitude from measurements in the 1950s. The cosmic ray energy spectrum extends over \(\sim\)1-10\({}^{21}\) eV, with the particle flux dropping off as energy increases (Bazilevskaya, 2000). The lower-energy cosmic rays are selectively screened by Earth's magnetic field at the mid-latitudes and near the equator. Above geomagnetic latitudes of \(\sim\)50\({}^{\circ}\), the cosmic ray screening is insensitive to latitude (Bazilevskaya, 2000). Solar activity affects cosmic rays, as the magnetic field irregularities in the solar wind deflect cosmic rays away from Earth, so that primary cosmic ray penetration into the atmosphere is highest at solar minimum. Tropospheric ionisation rates are greatest at high latitudes during solar minimum, and lowest at equatorial latitudes at solar maximum (Gringel _et al._, 1986).
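Because the discussion throughout this paper refers measurements to geomagnetic rather than geographic latitude, it is useful to note that, in the centred-dipole approximation used later for the _Carnegie_ data, the conversion is a one-line spherical-trigonometry formula. The sketch below is a generic illustration written for this text; the default pole coordinates are placeholders, not the 1915-1921 pole position used in Section 4.

```python
import numpy as np

def geomagnetic_latitude(lat_deg, lon_deg, pole_lat_deg=80.0, pole_lon_deg=-72.0):
    """Geomagnetic latitude of a site in the centred-dipole approximation.

    lat_deg, lon_deg : geographic coordinates of the site [degrees]
    pole_*           : assumed geomagnetic North Pole location [degrees];
                       the defaults are illustrative placeholders only.
    """
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    plat, plon = np.radians(pole_lat_deg), np.radians(pole_lon_deg)
    # sin(magnetic latitude) = cos(angular distance from the dipole pole)
    s = (np.sin(lat) * np.sin(plat)
         + np.cos(lat) * np.cos(plat) * np.cos(lon - plon))
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

# A site at the assumed pole itself maps to +90 degrees, as expected.
print(geomagnetic_latitude(80.0, -72.0))
```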
### 3.2 Meteorological effects (1940s view)

Meteorological effects on cosmic rays in the atmosphere were discovered during fundamental studies of atmospheric structure and properties in the first half of the twentieth century. For example, Loughridge and Gast (1939, 1940) observed weather fronts affecting surface ionisation chamber measurements (see Section 4) on a cruise in the North Pacific. Their expedition was intended to investigate latitudinal variations of cosmic ray intensity between Seattle and Alaska, but a relationship between cosmic ray ionisation and the passage of fronts was also detected. Cold fronts cause a 1% decrease, and warm fronts a 0.5% increase in ionisation over the 30 hours it took for the fronts to pass. This effect was robust, even after the data had been corrected for surface air pressure changes. Blackett (1938) had predicted the existence of a meteorological effect on cosmic rays due to variations in the average height, and therefore temperature, of the atmospheric layer where the primary particles interact to produce secondary ionising particles. This was referred to as the "mesotron producing layer" in Loughridge and Gast (1939), before the term meson was universally used. Changes in temperature affect meson range in air, and influence the propagation of the nucleonic cascade, which modulates the tropospheric ionisation rate. Before the advent of widespread regular meteorological soundings, weather data above the surface were rare. Loughridge and Gast used assumptions based on early sounding data to infer pressure and temperature aloft during their experiment. Changes in the ionisation rate based on the thickness of the "mesotron producing layer" were estimated from the changing height of the tropopause, affected by weather fronts. These predictions fitted observed ionisation rate variations. Although ionisation chambers for cosmic ray monitoring are now obsolete, meteorological effects on modern cosmic ray data remain. Modern cosmic ray instrumentation, and meteorological effects upon contemporary cosmic ray data, will now be outlined.

## 4 Surface cosmic ray detection instrumentation

Cosmic rays were initially detected using ionisation chambers: sealed containers containing gas at atmospheric pressure, with a collecting electrode biased to a fixed potential, and connected to an electrometer. Radiation passing through the chamber creates ion pairs, one polarity of which is attracted to the collecting electrode. The current detected is proportional to the number of ion pairs created (e.g. Smith, 1966). This was the first technique to measure radioactivity, and early measurement units such as the Roentgen were related to the number of ions formed in air. Hess, who is credited with discovering cosmic rays, defined a unit for cosmic ray intensity, \(I\), the volumetric cosmic ray ion production rate in nitrogen at standard temperature and pressure (Hess, 1939). An example of the use of ionisation chambers was on the geophysical and atmospheric electrical research ship, the _Carnegie_. The ionisation chamber measured "penetrating radiation", i.e. the surface ionisation rate from cosmic rays, by counting the number of ions produced per unit volume per second. The ionisation chamber used on the _Carnegie_ was a copper chamber of about 22 litres in volume, larger than those commonly used at the time (Ault and Mauchly, 1926).
Penetrating radiation measurements on cruises IV and VI, between 1915 and 1921, have been digitised here, and the geographic coordinates recorded in the measurement log converted into geomagnetic coordinates. This calculation required the location of the geomagnetic North Pole. The location of this pole, which hardly varied between 1915-1921, was calculated using altitude adjusted corrected geomagnetic coordinates (see http://www.wdc.rl.ac.uk/cgi-bin/wdc1/coordcnv.pl). Coordinate conversion was achieved by modelling the Earth's magnetic field as a dipole in a spherical shell (Ziegler, 1996). The variation of the ionisation rate from "penetrating radiation" with geomagnetic latitude is shown in Figure 1. The ionisation rate increases with geomagnetic latitude, as expected from the Earth's magnetic field screening out lower energy cosmic rays towards the geomagnetic equator. Cruise VI made measurements over a wider geomagnetic latitude range than Cruise IV, and a flattening of the trace can be seen at geomagnetic latitudes \(>\)\(\sim\)50\({}^{\circ}\), where the cosmic ray atmospheric penetration is no longer geomagnetically screened (see Section 3.1).

As Figure 1 demonstrates, the ionisation chamber was a simple and effective instrument for measuring the ions formed by cosmic rays. Its disadvantages were that it excluded some of the lower energy particles (Simpson, 2000), and could also be subject to contamination from radioactivity in the walls of the ionisation chamber. Simpson invented the neutron counter, which responds to the fast neutrons produced by the atmospheric interactions of cosmic rays (Simpson, 2000). In summary, the neutron counter is a boron trifluoride proportional counter with an enriched \({}^{10}\)B component. \({}^{4}\)He\({}^{2+}\) is produced when a neutron interacts with \({}^{10}\)B, and the doubly charged helium ions are detected in the proportional counter. In an early comparison, ionisation chambers and the new neutron counters were flown together on an aircraft to calibrate the neutron counter (Simpson, 2000; Biehl _et al._, 1949). Figure 2a shows ionisation rate and neutron counts at 9 km, over a range of geomagnetic latitudes (Simpson, 2000). This indicates that there is an approximately linear relationship between ionisation rate and neutron counts in the troposphere over a geomagnetic latitude range of 10-50\({}^{\rm o}\). The relationship is not expected to differ at geomagnetic latitudes \(>\)\(\sim\)50\({}^{\rm o}\), for the reasons described in Section 3.1. The close relationship between ionisation chamber and neutron counter data is corroborated by results from the overlapping period 1953-1957 when an ionisation chamber was run at Cheltenham, Massachusetts (Ahluwalia, 1997) at the same time as the neutron monitor at Climax, Colorado, Figure 2b. Figure 2c shows a time series for the overlapping period and up to 2000, in which the solar cycle is clearly visible. As suggested in Section 3.2, a pressure correction was needed to compensate for the changes in ambient air pressure affecting cosmic ray propagation. With this pressure correction applied routinely, the neutron counter has become the standard surface instrumentation for cosmic ray monitoring.
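The aircraft and station comparisons described above amount to a linear calibration of ionisation-chamber rates against neutron counts. A generic least-squares version is sketched below; the variable names and the synthetic demonstration data are hypothetical, and this is not the analysis behind Figure 2.

```python
import numpy as np

def linear_calibration(neutron_counts, ion_rate):
    """Least-squares fit of ion_rate ~ a + b * neutron_counts.

    Returns (a, b, r), where r is the Pearson correlation coefficient.
    """
    x = np.asarray(neutron_counts, float)
    y = np.asarray(ion_rate, float)
    b, a = np.polyfit(x, y, 1)          # slope, intercept
    r = np.corrcoef(x, y)[0, 1]
    return a, b, r

# Synthetic demonstration (not real data): a noiseless linear relation
# is recovered exactly, with r = 1.
x_demo = np.linspace(3000.0, 4000.0, 60)     # neutron counts (arbitrary units)
y_demo = 1.5 + 0.004 * x_demo                # ionisation rate (arbitrary units)
print(linear_calibration(x_demo, y_demo))    # -> (~1.5, ~0.004, ~1.0)
```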
### Neutron counter pressure correction

The properties of the nucleonic cascade in the atmosphere depend on the interaction cross-section of atomic nuclei in air per unit volume. Surface cosmic ray intensity detected by any instrument is therefore affected by the integrated air density, or air pressure, in the column of air above it. The level at which primary particles interact with air is an especially important factor. The number of subatomic particles produced by the nucleonic cascade is a function of the distance travelled by the secondary mesons, and the meson range and lifetime is related to the temperature and pressure of the atmospheric layer where the primary particles interact (Sandstrom, 1965). Early pressure corrections were based on linear regression using surface pressure measurements, the coefficients of which differed slightly for each station due to geomagnetic latitude variations. Use of the station surface pressure for the correction initially appears to have been a pragmatic choice based on the data available at the time. Sandstrom (1965) argued that the pressure correction could be improved by including data for the pressure and temperature of many atmospheric layers. This would take account of subtle variations in atmospheric structure affecting subatomic particle interactions (Sandstrom, 1965; Clem and Dorman, 2000). More recently, sophisticated techniques have been developed to compute the pressure correction from full Monte Carlo simulations of the nucleonic cascade (Clem and Dorman, 2000). Despite this advance, the neutron monitor stations retain a simple linear correction using surface pressure readings. Figure 3 shows the effect of the pressure correction at the Oulu neutron monitor (65.05\({}^{\rm o}\)N, 25.47\({}^{\rm o}\)E) for a sample year of neutron data. It is evident that the correction removes much of the pressure dependence of the neutron counter data, but there is still a small residual effect (for 2001) of \(\sim\) -2 counts/min/hPa or \(\sim\) -0.3% neutrons/% pressure change, with \(r\)=0.13. This may not be significant for some analyses, but, in the recent studies of cosmic rays and climate, the effects observed in data are often at the few percent level. In this context it is important to eliminate any residual effects of the atmosphere on the data used to represent cosmic ray intensity.
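The residual dependence quoted above is straightforward to estimate from station data by regressing pressure-corrected counts on surface pressure. The sketch below is illustrative only: the synthetic demonstration values are invented, and the 2001 Oulu numbers quoted in the text are mentioned solely for comparison with the kind of output this produces.

```python
import numpy as np

def residual_pressure_effect(pressure_hPa, corrected_counts):
    """Regress pressure-corrected counts on station pressure.

    Returns the slope (counts per hPa), the slope expressed as a fractional
    sensitivity (% counts per % pressure), and the correlation coefficient r.
    A perfectly corrected series would give a slope consistent with zero.
    """
    p = np.asarray(pressure_hPa, float)
    n = np.asarray(corrected_counts, float)
    slope, intercept = np.polyfit(p, n, 1)
    r = np.corrcoef(p, n)[0, 1]
    frac = slope * np.mean(p) / np.mean(n)   # % counts per % pressure
    return slope, frac, r

# Synthetic demonstration (not real data): counts with a weak leftover
# pressure dependence of about -2 counts/hPa, comparable in magnitude to
# the 2001 Oulu residual quoted above.
rng = np.random.default_rng(0)
p_demo = 1000.0 + 10.0 * rng.standard_normal(5000)
n_demo = 6000.0 - 2.0 * (p_demo - 1000.0) + 50.0 * rng.standard_normal(5000)
print(residual_pressure_effect(p_demo, n_demo))
```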
## 5 Atmospheric modulation of modern surface neutron count data

Meteorologists use numerical models of the atmosphere for data assimilation: this is the generation of globally gridded data coverage of past atmospheric properties derived as the best estimate of the atmospheric state from all observations. These reanalysis data give atmospheric parameters in height profiles from the surface to 10hPa, in 2.5\({}^{\rm o}\) grid squares. The NCEP/NCAR reanalysis geopotential height (height of a pressure surface) data have been used to investigate meteorological effects on surface neutron counts. As the primary particles interact at a constant atmospheric pressure, the 100hPa geopotential height can be used as an indicator of the height of the meson producing layer (Sandstrom, 1965). More information about the spatial variation of the relationship between geopotential height and surface neutron counts can be obtained from vertical meridional cross-sections. Neutron data from a midlatitude station, Climax (39.37\({}^{\rm o}\)N, 253.82\({}^{\rm o}\)E), were chosen, as the station could be approximately centred on a latitudinal cross-section from 10\({}^{\rm o}\)S to 60\({}^{\rm o}\)N, covering the geomagnetic equator to the latitudes where the cosmic ray energies are no longer a function of latitude. Correlations between geopotential height and surface neutrons were computed from the surface (1000hPa) to the meson producing layer (100hPa). The correlation contours for 47 years of monthly average data are shown in Figure 4, indicating an anticorrelation at \(\sim\)100hPa, at its highest over the tropics, with a maximum of \(r=\) -0.43. The anticorrelation is expected for the following reasons. If the 100hPa geopotential height increases, the meson producing layer is likely to be colder, and the mesons have a shorter lifetime, which results in fewer interactions with air and a lower surface neutron count (Sandstrom, 1965). Whilst the ratio of meson interactions to meson decays is dominant (Clem and Dorman, 2000), there are several competing mechanisms related to the lifetime of unstable species in the atmosphere, and the lower troposphere temperature can also affect meson losses due to ionisation (Olbert, 1953). The region of highest anticorrelation is close to the geomagnetic equator, where only the highest energy primary cosmic rays can enter the atmosphere. This suggests that the neutron production from high-energy nuclear disintegrations may be more sensitive to atmospheric effects than lower energy interactions. The meson producing layer is usually in the troposphere near the equator, and in the stratosphere in the mid to high latitudes, so the spatial variation of the anticorrelation may also be related to dynamical processes in the atmosphere. This could be linked to the persistent relationship between the 10.7cm solar flux, a solar activity indicator, and the 30hPa geopotential height (Labitzke, 2001). However, it is difficult to separate the effects of cosmic ray and solar flux variations using purely statistical approaches. As mentioned in Section 3.1, cosmic ray intensity in the atmosphere is in antiphase with solar activity, but the 10.7cm solar flux is an indicator of total solar irradiance, and is greater at solar maximum. Cosmic ray intensity at Earth and solar activity indicators are therefore closely inversely correlated. This is illustrated in Figure 5, in which, following van Loon and Labitzke (2000), correlations with the 30hPa geopotential height are compared for 1958-1998. Figure 5a) reproduces the results in Figure 3 of van Loon and Labitzke (2000), for the spatial correlation variation between the 10.7cm solar flux and the 30hPa height. In Figure 5b), the pressure corrected monthly surface neutron counts from the University of Chicago's monitor at Climax are used instead. Figure 5a) and Figure 5b) are, as expected, inversely correlated, although there are small differences in position of the regions of high correlation. The magnitude of the correlation between neutron counts and geopotential height is also slightly greater. In each case there are physical reasons for expecting a correlation: the UV component of the solar irradiance is thought to modulate stratospheric dynamic processes through ozone changes (Haigh, 2003). Detailed theoretical predictions for both postulated mechanisms are needed to distinguish between the solar flux and cosmic ray effects.

## 6 Conclusions

Two assumptions about cosmic rays are frequently made in considering relationships between atmospheric ionisation and meteorological processes. The first assumption is that surface neutron counts are a good proxy for the atmospheric ionisation rate.
Analysis of historical data comparing the neutron counter and ionisation chamber responses on a plane flying over geomagnetic latitudes of 10-50\({}^{\circ}\) indicated an approximately linear relationship between them. This readily confirms that surface neutron counts can be used as a proxy for ionisation rate. The 1950s data are useful because this is the only period where there is an overlap between regular cosmic ray measurements by both neutron counters and ionisation chambers. It could be valuable for contemporary studies of cosmic ray effects in the atmosphere to be able to calibrate surface neutron counts to the integrated ionisation rate. This would be possible by comparing, for example, balloon ascents measuring the ionisation rate (Bazilevskaya, 2000) with the colocated surface neutron count rate. The second assumption is that the modulation of cosmic rays detected at the surface by atmospheric properties can be ignored. Physical mechanisms of atmospheric effects on cosmic ray propagation, modulated by meteorological factors, were established by the 1940s, but are perhaps not well known to contemporary climate scientists. It has been shown in Sections 4 and 5, using both simple surface measurements, and reanalysis geopotential height data, that the pressure correction used at the Oulu neutron monitor does not completely remove meteorological effects on surface neutron counts. Some of the variance in pressure corrected neutron data can be attributed to geopotential height variations affecting the properties of the meson producing layer. The residual effect is small, \(<\)1%, but may be highly spatially variable; it is likely to be greater at some locations. This is supported by spatial correlations between the 30hPa and 100hPa geopotential heights, and cosmic rays measured at the Climax neutron monitor station. Any mechanisms linking cosmic rays and climate may be similarly subtle and variable in magnitude; it is therefore necessary to understand what fraction of the variability remains from the effect of the atmosphere on cosmic rays, before quantifying the effects of cosmic rays in the atmosphere. Globally gridded meteorological reanalysis data are now routinely and freely available, and would benefit from broader use within the geophysical sciences. It would be relatively straightforward to retrieve pressure and temperature profiles for specific neutron monitor stations. These could be used in conjunction with Monte Carlo simulations of the nucleonic cascade to correct for atmospheric effects on neutron counts more accurately. Without this further theoretical work, a small possibility exists that correlations found between solar and atmospheric parameters could include a component of the effects of the atmosphere on cosmic rays. However, the physical arguments based on ion-aerosol mechanisms appear generally persuasive.

## References

* [1] Ahluwalia H.S., Galactic cosmic ray intensity variations at a high latitude sea level site 1937-1994, J. Geophys. Res. 102, A11, pp24229-24236, 1997.
* [2] Aplin K.L. and Harrison R.G., A self-calibrating programmable mobility spectrometer for atmospheric ion measurements, Rev. Sci. Inst., 72, 8, pp3467-3469, 2001.
* [3] Ault, J.P. and Mauchly, S.J. Atmospheric-electric results obtained aboard the Carnegie, 1915-1921, Carnegie Inst. Wash. 175, vol 3, 1926.
* [4] Bazilevskaya G.A. Observations of variability in cosmic rays, Space Sci. Rev. 94, pp25-38, 2000.
* [5] Biehl A.T., Neher H.V.
and Roesch W.C., Cosmic-ray experiments at high altitudes over a wide range of latitudes, Phys. Rev., 76, pp914-933, 1949.
* [6] Blackett P.M.S., On the instability of the baryon and the temperature effect of cosmic rays, Phys. Rev. 54, 9, pp973-974, 1938.
* [7] Clem J.M. and Dorman L.I., Neutron monitor response functions, Space Sci. Rev., 93, pp335-359, 2000.
* [8] Gringel W., Rosen J.M. and Hoffman D.J., Electrical structure from 0 to 30 kilometers, In: The Earth's Electrical Environment, National Academy Press, Washington, USA, 1986.
* [9] Harrison R.G. and Carslaw K.S., Ion-aerosol-cloud processes in the lower atmosphere, Rev. Geophys. 41, 3, p1012, doi: 10.1029/2002RG000114, 2003.
* [10] Haigh J.D., The effects of solar variability on the Earth's climate, Phil. Trans. Roy. Soc. Lond. A., 361, 1802, pp95-111, doi: 10.1098/rsta.2002.1111, 2003.
* [11] Hess V.F., On the seasonal and the atmospheric temperature effect in cosmic radiation, Phys. Rev., 57, pp781-785, 1939.
* [12] Holden N.K., Atmospheric ion measurements using novel high resolution ion mobility spectrometers, PhD thesis, University of Bristol, UK, 2003.
* [13] Horrak U., Salm J., and Tammet H., Statistical characterization of air ion mobility spectra at Tahkuse Observatory: Classification of air ions, J. Geophys. Res. 105, D7, pp9291-9302, 2000.
* [14] Labitzke K., The global signal of the 11-year sunspot cycle in the stratosphere: differences between solar maxima and minima, Meteorologische Zeitschrift, 10, 2, pp83-90, 2001.
* [15] Loughridge D.H. and Gast P.F., Air mass effect on cosmic ray intensity, Phys. Rev., 56, pp1169-1170, 1939.
* [16] Loughridge D.H. and Gast P.F., Further investigations of the air mass effect on cosmic-ray density, Phys. Rev., 58, pp583-585, 1940.
* [17] Olbert S., Atmospheric effects on cosmic ray intensity near sea level, Phys. Rev., 92, pp454-461, 1953.
* [18] Pyle R., Public access to neutron monitor datasets, Space Sci. Rev., 93, pp381-400, 2000.
* [19] Sandstrom A.E., Cosmic ray physics, North-Holland, Amsterdam, The Netherlands, 1965.
* [20] Shaviv N.J., Cosmic ray diffusion from the galactic spiral arms, iron meteorites and a possible climatic connection, Phys. Rev. Lett., 89, 051102, 2002.
* [21] Simpson J.A., The cosmic ray nucleonic component: the invention and scientific uses of the neutron monitor, Space Sci. Rev., 93, pp11-32, 2000.
* [22] Simpson J.A., Fonger W. and Treiman S.B., Cosmic radiation intensity-time variations and their origin, I. Neutron intensity variation method and meteorological factors, Phys. Rev. 90, pp934, 1953.
* [23] Smith C.M.H., A textbook of nuclear physics, Pergamon Press, Oxford, UK, 1966.
* [24] Svensmark H. and Friis-Christensen E., Variation of cosmic ray flux and global cloud coverage - A missing link in solar-climate relationships, J. Atmos. Sol.-Terr. Phys. 59, 11, pp1225-1232, 1997.
* [25] Tinsley B.A., Influence of solar wind on the global electric circuit, and inferred effects on cloud microphysics, temperature, and dynamics in the troposphere, Space Sci. Rev. 94, 1-2, pp231-258, 2000.
* [26] Tripathi S.N. and Harrison R.G., Enhancement of contact nucleation by scavenging of charged aerosol, Atmos. Res. 62, pp57-70, 2002.
* [27] van Loon H. and Labitzke K., The influence of the 11-year solar cycle on the stratosphere below 30 km: a review, Space Sci. Rev., 94, pp259-278, 2000.
* [28] Wolfendale A.W., Cosmic ray origin: the way ahead, J. Phys. G: Nucl. Part. Phys. 29, pp787-800, 2003.
* [29] Yu F. and Turco R. P., From molecular clusters to nanoparticles: The role of ambient ionization in tropospheric aerosol formation, J.
Geophys. Res., 106, pp4797-4814, 2001.
* [30] Ziegler J.F., Terrestrial cosmic rays, IBM J. Res. Development, 40, 1, pp19-39, 1996.

**Acknowledgements**

KLA acknowledges partial funding from the UK Natural Environment Research Council (NERC New Investigators' Award NER/M/S/2003/00062), and the UK Particle Physics and Astronomy Research Council (PPA/G/O/2003/00025). AJB acknowledges a NERC studentship. RGH acknowledges a Visiting Fellowship at Mansfield College, Oxford. NCEP Reanalysis data were provided by the NOAA-CIRES Climate Diagnostics Center, Boulder, Colorado, USA, from their Web site at http://www.cdc.noaa.gov/. Cosmic ray data were obtained from the Oulu (http://cosmicrays.oulu.fi/) and Climax (http://ulysses.sr.unh.edu/NeutronMonitor/neutron_mon.html) neutron monitor stations. The Climax station is funded by National Science Foundation Grant ATM-9912341.

**Figure Captions**

Figure 1. Variation of the ion production rate due to "penetrating radiation" with geomagnetic latitude in the Southern Hemisphere. Measurements were made on the Carnegie cruises IV (1915) and VI (1921).

Figure 2. Neutron counter-ionisation chamber comparisons. a) Calibration of one of the first neutron counters at 9 km, over 10-50\({}^{\rm o}\) geomagnetic latitude, after Simpson (2000) and Biehl _et al._ (1949). b) Calibration of neutron monitor to ionisation chamber (IC) for 1953-1957. c) Time series of Cheltenham (USA) ionisation chamber (Ahluwalia, 1997) and Climax neutron monitor.

Figure 3. Comparison of pressure corrected and uncorrected neutron count data at Oulu for 2001, as a function of surface atmospheric pressure.

Figure 4. Correlations between monthly averaged Climax corrected neutron data and geopotential height over 10\({}^{\rm o}\)S to 60\({}^{\rm o}\)N and for 1000-10hPa, for longitudes 252.5-255\({}^{\rm o}\)E.

Figure 5. Spatial correlations between 30hPa geopotential height, 1958-1998, and a) the 10.7cm solar flux, as in van Loon and Labitzke (2000), b) the monthly surface neutron counts from the Climax neutron counter.
Surface neutron counter data are often used as a proxy for atmospheric ionisation from cosmic rays in studies of extraterrestrial effects on climate. Neutron counter instrumentation was developed in the 1950s and relationships between neutron counts, ionisation and meteorological conditions were investigated thoroughly using the techniques available at the time; the analysis can now be extended using modern data. Whilst surface neutron counts are shown to be a good proxy for ionisation rate, the usual meteorological correction applied to surface neutron measurements, using surface atmospheric pressure, does not completely compensate for tropospheric effects on neutron data. Residual correlations remain between neutron counts, atmospheric pressure and geopotential height, obtained from meteorological reanalysis data. These correlations may be caused by variations in the height and temperature of the atmospheric layer at ~100hPa. This is where the primary cosmic rays interact with atmospheric air, producing a cascade of secondary ionising particles. **Keywords** Cosmic rays, neutron counter, atmospheric ionisation, geopotential height
# Ice Age Epochs and the Sun's Path Through the Galaxy D. R. Gies and J. W. Helsel Center for High Angular Resolution Astronomy and Department of Physics and Astronomy, Georgia State University, P. O. Box 4106, Atlanta, GA 30302-4106; [email protected], [email protected]

## 1 Introduction

Since its birth the Sun has made about 20 cycles around the Galaxy, and during this time the Sun has made many passages through the spiral arms of the disk. There is a growing interest in determining how these passages may have affected Earth's environment. Shaviv (2002, 2003) makes a persuasive argument that there is a correlation between extended cold periods on Earth and Earth's exposure to a varying cosmic ray flux (CRF). Shaviv proposes that the CRF varies as the Sun moves through Galactic spiral arms, regions with enhanced star formation and supernova rates that create more intense exposure to cosmic rays. The CRF experienced by Earth may affect the atmospheric ionization rate and, in turn, the formation of charged aerosols that promote cloud condensation nuclei (Harrison & Aplin, 2001; Eichkorn et al., 2002). Marsh & Svensmark (2000) show that there is a close correlation between the CRF and low altitude cloud cover over a 15 year time span. Thus, we might expect that extended periods of high CRF lead to increased cloud cover and surface cooling that result in long term (Myr) ice ages. Spiral arm transits may affect Earth in other ways as well. Yeghikyan & Fahr (2004) suggest that during some spiral passages the Earth may encounter interstellar clouds of sufficient density to alter the chemistry of the upper atmosphere and trigger an ice age of relatively long duration. The higher stellar density in the arms may more effectively perturb the Oort cloud of comets and lead to a greater chance of large impacts on Earth, and this combined with the possible lethal effects of nearby supernova explosions could cause mass extinctions during passages through the spiral arms (Leitch & Vasisht, 1998). On the other hand, the record of terrestrial impact craters suggests a variation on a time scale shorter than the interarm crossing time, but possibly related to the Sun's oscillations above and below the disk plane (Stothers, 1998).

A comparison of the geological record of temperature variations with estimates of the Sun's position relative to the spiral arms of the Galaxy is difficult for a number of reasons. First, our location within the disk makes it hard to discern the spiral structure of the Galaxy, particularly in more distant regions. Nevertheless, there is now good evidence that a four-arm spiral pattern is successful in explaining the emissions from the star-forming complexes of the Galaxy (Russeil, 2003). Second, the angular rotation speed of the Galactic spiral pattern is still poorly known, with estimates ranging from 11.5 (Gordon, 1978) to 30 km s\({}^{-1}\) kpc\({}^{-1}\) (Fernandez, Figueras, & Torra, 2001) (see reviews in Shaviv 2003, Bissantz, Englmaier, & Gerhard, 2003, and Martos et al., 2004). Finally, the Sun's orbit in the Galaxy is not circular, and we need to account for the Sun's variation in distance from Galactic center and in orbital speed to make an accurate estimate of the Sun's position in the past. Here we present such a calculation of the Sun's path through the Galaxy over the last 500 Myr.
It is based upon the Sun's current motion relative to the local standard of rest as determined from parallaxes and proper motions from the _Hipparcos Satellite_ (Dehnen & Binney, 1998a) and on a realistic model of the Galactic gravitational potential (Dehnen & Binney, 1998b). We discuss how the spiral pattern speed is critical to the estimates of the times of passage through the spiral arms, and we show a plausible example that is consistent with the occurrence of ice ages during spiral arm crossings.

## 2 Integration of the Sun's Motion

An integration of the Sun's motion was made using a cylindrical coordinate system for the Galaxy of \((R,\phi,Z)\). We first determined the position and resolved velocity components of the Sun in this system using the velocity of the Sun with respect to the local standard of rest (Dehnen & Binney 1998a) and the Sun's position relative to the plane (Holmberg, Flynn, & Lindegren 1997). We then performed integrations backward in time using a fourth-order Runge-Kutta method and a model for the Galactic potential from Dehnen & Binney (1998b). We adopted the model (#2) from Dehnen & Binney (1998b) that uses a Galactocentric distance of \(R_{o}=8.0\) kpc and a disk stellar density exponential scale length of \(R_{d\star}=2.4\) kpc. This model has a circular velocity at \(R_{o}=8.0\) kpc of 217.4 km s\({}^{-1}\). We used time steps of 0.01 Myr over a time span of 500 Myr. Note that the model potential is axisymmetric and does not account for the minor variations in the field near spiral arms. We also ignore accelerations due to encounters with giant molecular clouds, since their effect is small over periods less than 1 Gyr (at least in a statistical sense; Jenkins 1992). The full set of coordinates as a function of time is not included here, but interested readers can obtain the digital data from our web site1. Footnote 1: http://www.chara.gsu.edu/~gies/solarmotion.dat

The Sun's journey in cylindrical coordinates is illustrated in Figure 1. The top panel shows the temporal variation in distance from Galactic center, and we see the radial oscillation that is expected from the "epicycle approximation" for nearly circular orbits (Binney & Tremaine 1987). The period is 170 Myr and the corresponding frequency is 36.9 km s\({}^{-1}\) kpc\({}^{-1}\), which is close to the expected value of \(36.7\pm 2.4\) km s\({}^{-1}\) kpc\({}^{-1}\) based upon the local Oort constants (Feast & Whitelock 1997). The middle panel shows the advance in azimuthal position with the orbit (small departures from linearity reflect speed variations that conserve angular momentum). The Sun has completed just over two circuits of the Galaxy over this time span. The lower panel shows the oscillations above and below the Galactic plane. The period is approximately 63.6 Myr, but there are cycle to cycle variations caused by the varying radial density in the model. This period \(P\) is approximately related to the mid-plane density at the average radius, \(\rho=(26.43\ {\rm Myr}/P)^{2}\ M_{\odot}\) pc\({}^{-3}\) (Binney & Tremaine 1987). The period for our model of the solar motion corresponds to a mid-plane density of 0.17 \(M_{\odot}\) pc\({}^{-3}\), which is close to current estimates of the Oort limit of \(0.15\pm 0.01\ M_{\odot}\) pc\({}^{-3}\) (Stothers 1998). Thus, while the estimates of motion of the Sun in the \(Z\) direction are secure for the recent past, probable errors in the period of approximately 7% may accumulate to as much as half a cycle error in the timing of the oscillations 500 Myr ago.
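The kind of orbit integration described above can be sketched in a few lines. The example below uses classical fourth-order Runge-Kutta steps, but in a toy axisymmetric potential (a flat-rotation-curve disk plus a harmonic vertical term tuned to a 63.6 Myr period) rather than the Dehnen & Binney (1998b) model #2, and the initial conditions are purely illustrative.

```python
import numpy as np

V0 = 217.4                        # circular speed [km/s], flat rotation curve assumed
NU = 2.0 * np.pi / 63.6           # vertical frequency [1/Myr] for a 63.6 Myr period
KPC_PER_KMS_MYR = 1.0227e-3       # 1 km/s expressed in kpc/Myr

def acceleration(pos):
    """Acceleration [kpc/Myr^2] in a toy axisymmetric potential: a logarithmic
    (flat-rotation-curve) disk plus a harmonic vertical term. This is NOT the
    Dehnen & Binney (1998b) potential used in the paper."""
    x, y, z = pos
    v0 = V0 * KPC_PER_KMS_MYR
    a_plane = -v0 * v0 / (x * x + y * y) * np.array([x, y, 0.0])
    a_vert = np.array([0.0, 0.0, -NU * NU * z])
    return a_plane + a_vert

def rk4_orbit(pos0, vel0, dt=0.01, t_max=500.0):
    """Integrate the orbit with classical RK4; dt and t_max are in Myr."""
    n = int(round(t_max / dt))
    pos, vel = np.array(pos0, float), np.array(vel0, float)
    traj = np.empty((n + 1, 3))
    traj[0] = pos
    for i in range(n):
        k1v = acceleration(pos);                k1x = vel
        k2v = acceleration(pos + 0.5*dt*k1x);   k2x = vel + 0.5*dt*k1v
        k3v = acceleration(pos + 0.5*dt*k2x);   k3x = vel + 0.5*dt*k2v
        k4v = acceleration(pos + dt*k3x);       k4x = vel + dt*k3v
        vel = vel + dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
        pos = pos + dt/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
        traj[i + 1] = pos
    return traj

# Start on a circular orbit at R = 8 kpc with a small vertical excursion; a
# backward integration is obtained by reversing the sign of the velocity.
orbit = rk4_orbit([8.0, 0.0, 0.02], [0.0, V0 * KPC_PER_KMS_MYR, 0.0])
print(orbit.shape)   # (50001, 3): positions sampled every 0.01 Myr over 500 Myr
```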
Thus, while the estimates of motion of the Sun in the \\(Z\\) direction are secure for the recent past, probable errors in the period of approximately 7% may accumulate to as much as half a cycle error in the timing of the oscillations 500 Myr ago. The errors in the estimates of the Sun's current Galactic motions (Dehnen & Binney 1998a)have only a minor impact on these trajectories. For example, the error in the \\(V\\) component of motion amounts to a difference of only \\(3^{\\circ}\\) in \\(\\phi\\) over this 500 Myr time span. We next consider the motion of the Sun in the plane of the Galaxy relative to the spiral arm pattern. The disk of the Galaxy from the solar circle out-wards appears to display a four-arm spiral structure as seen in the emission of atomic hydrogen (Blitz, Fich, & Kulkarni 1983) and molecular CO (Dame, Hartmann, & Thaddeus 2001) and in the distribution of star forming regions (Russeil 2003). We show in Figure 2 the appearance of the Galactic spiral arm patterns based on the model of Wainscoat et al. (1992) but with some revisions introduced by Cordes & Lazio (2003)2. This representation is very similar to the pattern adopted by Russeil (2003). We have rescaled the pattern from a solar Galactocentric radius of 8.5 kpc to a value of 8.0 kpc for consistency with our model of Galactic potential from Dehnen & Binney (1998b). Each arm is plotted with an assumed width of 0.75 kpc (Wainscoat et al. 1992) and each is named in accordance with the scheme of Russeil (2003). The dotted line through the center of the Galaxy indicates the current location of the central bar according to Bissantz et al. (2003). The pattern speed of the bar may be similar to that of the arms (Ibata & Gilmore 1995) or it may be faster than that of the arms (Bissantz et al. 2003), in which case the bar - arm relative orientation will be different in the past. Footnote 2: [http://astrosun2.astro.cornell.edu/](http://astrosun2.astro.cornell.edu/)\\(\\sim\\)cordes/NE2001/ The placement of the Sun's trajectory in this diagram depends critically on the relative angular pattern speeds of the Sun and the spiral arms. The mean advance in azimuth in our model of the Sun's motion corresponds to a solar angular motion of \\(\\Omega_{\\odot}=26.3\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\). If the difference in the solar and spiral arm pattern speeds, \\(\\Omega_{\\odot}-\\Omega_{p}\\), is greater than zero, then the Sun overtakes the spiral pattern and progresses in a clockwise direction in our depiction of the Galactic plane. Unfortunately, the spiral pattern speed is not well established and may in fact be different in the inner and outer parts of the Galaxy (Shaviv 2003). Several recent studies (Amaral & Lepine 1997; Bissantz et al. 2003; Martos et al. 2004) advocate a spiral pattern speed of \\(\\Omega_{p}=20\\pm 5\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\), and we show in Figure 2 the Sun's trajectory projected onto the plane for this value (\\(\\Omega_{\\odot}-\\Omega_{p}=6.3\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\)). Diamonds along the Sun's track indicate its placement at intervals of 100 Myr. We see that for this assumed pattern speed the Sun has passed through only two arms over the last 500 Myr. However, if we assume a lower but still acceptable pattern speed of \\(\\Omega_{p}=14.4\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\) (shown in Fig. 
3 for \\(\\Omega_{\\odot}-\\Omega_{p}=11.9\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\)), then the Sun has crossed four spiral arms in the past 500 Myr and has nearly completed a full rotation ahead of the spiral pattern. Thus, the choice of the spiral pattern speed dramatically influences any conclusions about the number and timing of Sun's passages through the spiral arms over this time interval. The duration of a coherent spiral pattern is an open question, but there is some evidence that long-lived spiral patterns may be more prevalent in galaxies with a central bar. For example, numerical simulations of the evolution of barred spirals by Rautiainen & Salo (1999) suggest that spiral patterns may last several gigayears. Their work suggests that the shortest time scale for the appearance or disappearance of a spiral arm is about 1 Gyr. Therefore, it is reasonable to assume that the present day spiral structure has probably been more or less intact over the last 500 Myr (at least in the region of the solar circle). ## 3 Discussion Shaviv (2003) argues that the Earth has experienced four large scale cycles in the CRF over the last 500 Myr (with similar cycle times back to 1 Gyr before the present). Shaviv shows that the CRF exposure ages of iron meteorites indicate a periodicity of \\(143\\pm 10\\) Myr in the CRF rate. Since the cosmic ray production is related to supernovae and since Type II supernovae will be more prevalent in the young star forming regions of the spiral arms, Shaviv suggests that the periodicity corresponds to the mean time between arm crossings (so that Earth has made four arm crossings over the last 500 Myr). Shaviv (2003) and Shaviv & Veizer (2003) show how the epochs of enhanced CRF are associated with cold periods on Earth. The geological record of climate-sensitive sedimentary layers (glacial deposits) and the paleolatitudinal distribution of ice rafted debris (Frakes, Francis, & Syktus, 1992; Crowell, 1999) indicate that the Earth has experienced periods of extended cold (\"icehouses\") and hot temperatures (\"greenhouses\") lasting tens of million years (Frakes et al., 1992). The long periods of cold may be punctuated by much more rapid episodes of ice age advances and declines (Imbrie et al., 1992). The climate variations indicated by the geological evidence of glaciation are confirmed by measurements of ancient tropical sea temperatures through oxygen isotope levels in biochemical sediments (Veizer et al., 2000). All of these studies lead to a generally coherent picture in which four periods of extended cold have occurred over the last 500 Myr, and the midpoints of these ice age epochs (IAE) are summarized in Table 1 (see Shaviv, 2003). The icehouse times according to Frakes et al. (1992) are indicated by the thick line segments in each of Figures 1, 2, and 3. If these IAE do correspond to the Sun's passages through spiral arms, then it is worthwhile considering what spiral pattern speeds lead to crossing times during ice ages. We calculated the crossing times for a grid of assumed values of \\(\\Omega_{\\odot}-\\Omega_{p}\\) and found the value that minimized the \\(\\chi^{2}_{\ u}\\) residuals of the differences between the crossing times and IAE. There are two major error sources in the estimation of the timing differences. 
First, the calculated arm crossing times depend sensitively on the placement of the spiral arms, and we made a comparison between the crossing times for our adopted model and that of Russeil (2003) to estimate the timing error related to uncertainties in the position of the spiral arms (approximately \\(\\pm 8\\) Myr except in the case of the crossing of the Scutum-Crux arm on the far side of the Galaxy where the difference is \\(\\approx 40\\) Myr). Secondly, there are errors associated with the estimated mid-times of the IAE, and we used the scatter between the various estimates in columns 2 - 5 of Table 1 to set this error (approximately \\(\\pm 14\\) Myr). We adopted the quadratic sum of these two errors in evaluating the \\(\\chi^{2}_{\\nu}\\) statistic of each fit.

The results of the fitting procedure for various model and sample assumptions are listed in Table 2. The first trial fit was made by finding the \\(\\chi^{2}_{\\nu}\\) minimum that best matched the crossing times with the IAE midpoints from Shaviv (2003) (given in column 5 of Table 1 and noted as "Midpoint" in column 2 of Table 2). All four arm crossings were included in the calculation (indicated as 1 - 4 in column 3 of Table 2) that used the adopted model for the Galactic potential with a Galactocentric distance \\(R_{o}=8.0\\) kpc and a stellar disk exponential scale length of \\(R_{d\\star}=2.4\\) kpc (model #2 from Dehnen & Binney 1998b; see columns 4 and 5 of Table 2). The best fit difference (column 6 of Table 2) is obtained with \\(\\Omega_{\\odot}-\\Omega_{p}=12.3\\pm 0.8\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\), where the error was estimated by finding the limits for which \\(\\chi^{2}_{\\nu}\\) increased by 1. This fit gave reasonable agreement between the IAE and crossing times for all but the most recent crossing of the Sagittarius - Carina arm. Thus, we made a second fit (#2 in Table 2) using only the crossings associated with IAE 2 - 4, and this solution (with \\(\\Omega_{\\odot}-\\Omega_{p}=11.9\\pm 0.7\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\)) is the one illustrated in Figure 3. The crossing times (given in the final column of Table 1) agree well with the adopted IAE midpoints. Our results are similar to the estimate of \\(\\Omega_{\\odot}-\\Omega_{p}=10.4\\pm 1.5\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\) from Shaviv (2003), who assumed a circular orbit for the Sun in the Galaxy.

We also computed orbits using two other models for the Galactic potential from Dehnen & Binney (1998b) and determined the best fit spiral speeds for these as well. Fit #3 in Table 2 was made assuming a larger Galactocentric distance \\(R_{o}=8.5\\) kpc but with the same ratio of \\(R_{d\\star}/R_{o}\\) (model #2b in Dehnen & Binney 1998b), and the resulting best fit spiral speed is the same within errors as that for our adopted model. We also computed an orbit for a potential with a larger value of the disk exponential scale length ratio \\(R_{d\\star}/R_{o}\\) (model #3 in Dehnen & Binney 1998b), but again the best fit spiral speed (fit #4 in Table 2) is the same within errors as that for our adopted model. Thus, the details of the adopted Galactic potential model have little influence on the derived spiral pattern speed needed to match the IAE times. We might expect that the IAE midpoint occurs somewhat after the central crossing of the arm.
For example, Shaviv (2003) suggests that the IAE midpoint may occur some 21 - 35 Myr after the central arm crossing due to the difference in the stellar and pattern speeds (so that the cosmic rays move ahead of arms as the stellar population does) and to the time delay between stellar birth and supernova explosion of the SN II cosmic ray sources. Furthermore, if ice ages are triggered by encounters with dense clouds as suggested by Yeghikyan & Fahr (2004), then the ice age may not begin until the Sun reaches the gas density maximum at the center of the arm. Thus, we calculated a second set of best fit spiral speeds to match the mean crossing and icehouse starting times (Frakes et al., 1992), and these are listed as fits #5 and #6 in Table 2. This assumption leads to somewhat smaller values of \\(\\Omega_{\\odot}-\\Omega_{p}\\), but ones that agree within errors with all the other estimates.

We offer a few cautionary notes about possible systematic errors in this analysis. First, the fit of the IAE and arm crossing times depends on the difference \\(\\Omega_{\\odot}-\\Omega_{p}\\), and if our assumed value of \\(\\Omega_{\\odot}\\) eventually needs revision, then so too will the spiral pattern speed \\(\\Omega_{p}\\) need adjustment. For example, Reid & Brunthaler (2004) derive an angular rotation speed of \\(\\Omega_{LSR}=29.5\\pm 1.9\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\) for the local standard of rest based upon Very Long Baseline Array observations of the proper motion of Sgr A\\({}^{\\star}\\) with respect to two extragalactic radio sources. If we suppose the local Galactic rotation curve is flat, then \\(\\Omega_{\\odot}=\\Omega_{LSR}\\times 8.0/R_{g}=28.7\\pm 1.8\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\), where \\(R_{g}=8.23\\) kpc is the Sun's mean Galactocentric distance. Adopting this value results in a spiral pattern speed of \\(\\Omega_{p}=16.8\\pm 2.0\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\). Second, our calculation ignores any orbital perturbations caused by close encounters with giant molecular clouds that cause an increase in the Sun's motion with respect to a circularly rotating frame of reference. Nordstrom et al. (2004) present a study of the ages and velocities of Galactic disk stars that indicates a net increase in the random component of motion proportional to time raised to the exponent 0.34. Thus, we would expect that the Sun's random speed has increased through encounters by only \\(\\approx 4\\%\\) over the last 500 Myr, too small to change the orbit or the arm crossing time estimates significantly. Third, we have ignored the deviations in the gravitational potential caused by the arms themselves. The Sun presumably slows somewhat during the arm crossings so that the duration of the passage is longer than indicated in our model, but since our model of the gravitational potential represents an azimuthal average, the derived orbital period and interarm crossing times should be reliable.

Leitch & Vasisht (1998) argue that mass extinctions may also preferentially occur during spiral arm crossings. However, they proposed that a spiral pattern speed of \\(\\Omega_{p}=19\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\) is required to find consistency between times of mass extinctions and spiral arm crossings, and if correct, then the relationship between ice ages and arm crossings would apparently be ruled out because \\(\\Omega_{p}=19\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\) is too large for the inter-arm crossing time to match the intervals between IAE (see Fig. 2 and Fig. 3).
We show the times of the five major mass extinctions as X signs in Figures 1 - 3 (Raup & Sepkoski, 1986; Benton, 1995; Matsumoto & Kubotani, 1996). We see that in fact the lower value of \\(\\Omega_{p}=14.4\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\) (\\(\\Omega_{\\odot}-\\Omega_{p}=11.9\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\), as shown in Fig. 3) also leads to a distribution of mass extinction times that fall close to or within a spiral arm passage, so the association of mass extinctions with arm crossings may also be viable in models with pattern speeds that are consistent with the ice age predictions.

Our calculation of the Sun's motion in the Galaxy appears to be consistent with the suggestion that ice age epochs occur around the times of spiral arm passages as long as the spiral pattern speed is close to \\(\\Omega_{p}=14-17\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\). However, this value is somewhat slower than the \\(20\\pm 5\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\) preferred in recent dynamical models of the Galaxy (Amaral & Lepine, 1997; Bissantz et al., 2003; Martos et al., 2004). The resolution of this dilemma may require more advanced dynamical models that can accommodate differences between pattern speeds in the inner and outer parts of the Galaxy (for example, a possible resonance between the four-armed spiral pattern moving with \\(\\Omega_{p}=15\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\) and a "two-armed" inner bar moving with \\(\\Omega_{p}=60\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\); Bissantz et al., 2003).

We thank Walter Dehnen for sending us his code describing the Galactic gravitational potential. We also thank the referee and our colleagues Beth Christensen, Crawford Elliott, and Paul Wiita for comments on this work. Financial support was provided by the National Science Foundation through grant AST-0205297 (DRG). Institutional support has been provided from the GSU College of Arts and Sciences and from the Research Program Enhancement fund of the Board of Regents of the University System of Georgia, administered through the GSU Office of the Vice President for Research.

## References

* Amaral, L. H., & Lepine, J. R. D. 1997, MNRAS, 286, 885
* Benton, M. J. 1995, Science, 268, 52
* Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton Univ. Press)
* Bissantz, N., Englmaier, P., & Gerhard, O. 2003, MNRAS, 340, 949
* Blitz, L., Fich, M., & Kulkarni, S. 1983, Science, 220, 1233
* Cordes, J. M., & Lazio, T. J. W. 2003, preprint (astro-ph/0301598)
* Crowell, J. C. 1999, Pre-Mesozoic Ice Ages: Their Bearing on Understanding the Climate System, Mem. Geological Soc. Am., 192
* Dame, T. M., Hartmann, D., & Thaddeus, P. 2001, ApJ, 547, 792
* Dehnen, W., & Binney, J. J. 1998a, MNRAS, 298, 387
* Dehnen, W., & Binney, J. 1998b, MNRAS, 294, 429
* Eichkorn, S., Wilhelm, S., Aufmhoff, H., Wohlfrom, K. H., & Arnold, F. 2002, Geophysical Research Lett., 29, 10.1029/2002GL015044
* Feast, M., & Whitelock, P. 1997, MNRAS, 291, 683
* Fernandez, D., Figueras, F., & Torra, J. 2001, A&A, 372, 833
* Frakes, L. A., Francis, J. E., & Syktus, J. I. 1992, Climate modes of the phanerozoic: the history of the earth's climate over the past 600 million years (Cambridge: Cambridge Univ. Press)
* Gordon, M. A. 1978, ApJ, 222, 100
* Harrison, R. G., & Aplin, K. L. 2001, J. Atmospheric Terrestrial Physics, 63, 1811
* Holmberg, J., Flynn, C., & Lindegren, L.
1997, in Proceedings of the ESA Symposium Hipparcos Venice '97 (ESA SP-402), ed. B. Battrick (Noordwijk: ESA/ESTEC), 721
* Ibata, R. A., & Gilmore, G. F. 1995, MNRAS, 275, 605
* Imbrie, J., et al. 1992, Paleoceanography, 7 (#6), 701
* Jenkins, A. 1992, MNRAS, 257, 620
* Leitch, E. M., & Vasisht, G. 1998, New A, 3, 51
* Marsh, N. D., & Svensmark, H. 2000, Phys. Rev. Lett., 85, 5004
* Martos, M., Hernandez, X., Yanez, M., Moreno, E., & Pichardo, B. 2004, MNRAS, 350, L47
* Matsumoto, M., & Kubotani, H. 1996, MNRAS, 282, 1407
* Nordstrom, B., et al. 2004, A&A, 418, 989
* Raup, D. M., & Sepkoski, J. J. 1986, Science, 231, 833
* Rautiainen, P., & Salo, H. 1999, A&A, 348, 737
* Reid, M. J., & Brunthaler, A. 2004, ApJ, 616, 872
* Russeil, D. 2003, A&A, 397, 133
* Shaviv, N. J. 2002, Phys. Rev. Lett., 89, 051102
* Shaviv, N. J. 2003, New A, 8, 39
* Shaviv, N. J., & Veizer, J. 2003, GSA Today, 13, #7, 4
* Stothers, R. B. 1998, MNRAS, 300, 1098
* Veizer, J., Godderis, Y., & Francois, L. M. 2000, Nature, 408, 698
* Wainscoat, R. J., Cohen, M., Volk, K., Walker, H. J., & Schwartz, D. E. 1992, ApJS, 83, 111
* Yeghikyan, A., & Fahr, H. 2004, A&A, 425, 1113

Figure 1: The Sun's position in the Galaxy over the last 500 Myr expressed in cylindrical coordinates, \\(R\\) the distance from Galactic center (_top_), \\(\\phi\\) the azimuthal position in the disk relative to \\(\\phi=0^{\\circ}\\) at present (_middle_), and \\(Z\\) the distance from the plane (_bottom_). Thick line portions mark icehouse epochs on Earth (Frakes et al., 1992), and X signs indicate times of large mass extinctions on Earth. The names of the geological eras and periods over this time span are noted at top.

Figure 2: A depiction of the spiral arm pattern of the Galaxy as viewed from above the plane. The plus sign marks the center of the Galaxy while the main four arms plus the local (Orion) spur are indicated as gray shaded regions. The dotted line through the center of the Galaxy indicates the location of the central bar (Bissantz et al., 2003). The Sun's path in the reference frame of the spiral arms is indicated with a solid line (for \\(\\Omega_{p}=20\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\)), and diamonds mark time intervals of 100 Myr back in time from the present (_top diamond_). The thick portions correspond to icehouse times and the X signs indicate times of large mass extinctions.

Figure 3: A depiction of the Sun's motion relative to the spiral arm pattern in the same format as Fig. 2, but this time for a smaller spiral pattern speed (\\(\\Omega_{p}=14.4\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\)).

\\begin{table} \\begin{tabular}{l c c c c c} \\hline \\hline \\multicolumn{1}{c}{ Fit} & IAE & IAE & \\(R_{o}\\) & \\(R_{d\\star}\\) & \\(\\Omega_{\\odot}-\\Omega_{p}\\) \\\\ Number & Times & Sample & (kpc) & (kpc) & (km s\\({}^{-1}\\) kpc\\({}^{-1}\\)) \\\\ \\hline 1 & Midpoint & 1 – 4 & 8.0 & 2.40 & \\(12.3\\pm 0.8\\) \\\\ 2 & Midpoint & 2 – 4 & 8.0 & 2.40 & \\(11.9\\pm 0.7\\) \\\\ 3 & Midpoint & 2 – 4 & 8.5 & 2.55 & \\(11.8\\pm 0.6\\) \\\\ 4 & Midpoint & 2 – 4 & 8.0 & 2.80 & \\(11.8\\pm 0.7\\) \\\\ 5 & Starting & 1 – 4 & 8.0 & 2.40 & \\(11.6\\pm 0.8\\) \\\\ 6 & Starting & 2 – 4 & 8.0 & 2.40 & \\(11.4\\pm 0.6\\) \\\\ \\hline \\end{tabular} \\end{table} Table 2: Fits of Spiral Arm Pattern Speed
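For readers who wish to reproduce the spirit of the fits collected in Table 2, the following is a minimal sketch (not the authors' code) of the grid search over \\(\\Omega_{\\odot}-\\Omega_{p}\\) described in the Discussion. It replaces the full orbit integration with a constant drift rate relative to the pattern, and the arm azimuths and IAE midpoints it uses are illustrative placeholders standing in for Table 1, so only the procedure, and not the output numbers, should be taken from it.

```python
import numpy as np

# Constant-drift approximation: the Sun leads the pattern at a fixed angular
# rate, so the look-back time of each arm crossing is (azimuthal offset)/(rate).
KMSKPC_TO_RAD_PER_MYR = 1.0227e-3                  # 1 km/s/kpc expressed in rad/Myr

iae_mid = np.array([80.0, 210.0, 340.0, 470.0])    # IAE midpoints in Myr (placeholders)
sigma = np.hypot(8.0, 14.0)                        # arm-position and IAE errors added in quadrature (Myr)
phi_arms = np.radians(10.0 + 90.0 * np.arange(4))  # pattern-frame azimuths of the four crossings (placeholders)

def crossing_times(domega):
    """Arm-crossing look-back times (Myr) for a given Omega_sun - Omega_p in km/s/kpc."""
    return phi_arms / (domega * KMSKPC_TO_RAD_PER_MYR)

grid = np.linspace(8.0, 16.0, 801)
chi2 = np.array([np.sum(((crossing_times(d) - iae_mid) / sigma) ** 2) for d in grid])
best = grid[np.argmin(chi2)]
ok = grid[chi2 <= chi2.min() + 1.0]                # error bar from a chi^2 rise of 1, as in the text
print(f"best Omega_sun - Omega_p = {best:.1f} km/s/kpc  ({ok.min():.1f} to {ok.max():.1f})")
```

The quadrature error of roughly 16 Myr and the Delta-chi-squared-of-one error bar follow the prescription given in the text; substituting the actual crossing geometry and Table 1 midpoints would recover fits comparable to those tabulated above.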
We present a calculation of the Sun's motion through the Milky Way Galaxy over the last 500 million years. The integration is based upon estimates of the Sun's current position and speed from measurements with _Hipparcos_ and upon a realistic model for the Galactic gravitational potential. We estimate the times of the Sun's past spiral arm crossings for a range in assumed values of the spiral pattern angular speed. We find that for a difference between the mean solar and pattern speed of \\(\\Omega_{\\odot}-\\Omega_{p}=11.9\\pm 0.7\\) km s\\({}^{-1}\\) kpc\\({}^{-1}\\) the Sun has traversed four spiral arms at times that appear to correspond well with long duration cold periods on Earth. This supports the idea that extended exposure to the higher cosmic ray flux associated with spiral arms can lead to increased cloud cover and long ice age epochs on Earth.

Subject headings: Sun: general -- Earth -- cosmic rays -- Galaxy: kinematics and dynamics (ApJ, in press)
# Generic Galilean invariant exchange correlation functionals with quantum memory

Yair Kurzweil and Roi Baer (corresponding author: FAX: +972-2-6513742, [email protected]), Department of Physical Chemistry and the Lise Meitner Center for Quantum Chemistry, the Hebrew University of Jerusalem, Jerusalem 91904 Israel.

## I Introduction

Time dependent density functional theory (TDDFT) [1] is routinely used in many calculations of electronic processes in molecular systems. Almost all applications use "adiabatic" potentials describing an immediate response of the Kohn-Sham potential to the temporal variations of the electron density. The shortcomings of these potentials were studied by several authors [2, 3, 4, 5]. Some of the problems are associated with self interaction, an ailment inherited from ground-state density functional theory [6]. Other deficiencies are known or suspected to be associated with the adiabatic assumption. The first attempt to include non-adiabatic effects [7] was based on a simple form of the exchange-correlation (XC) potential in the linear response limit. Studying an exactly solvable system, this form was shown to lead to spurious time-dependent evolution [8]. The failure was traced back to violation of a general rule: the XC force density, derived from the potential, should integrate to zero [9]. Convincing arguments were then presented [10], demonstrating that non-adiabatic effects cannot be easily described within TDDFT and that instead a _current density_ based theory (TDCDFT) must be used. Vignale and Kohn [10] gave an expression for the XC potentials applicable for linear response and long wavelengths.

That the total XC force is zero is a valid fact not only in TDDFT but also in TDCDFT. It stems from the basic requirement that the total force on the non-interacting particles must be equal to the total force on the interacting particles; otherwise a different total acceleration results and the two densities or current densities will be at variance. In the interacting system the total (Ehrenfest) force can only result from an external potential: because of Newton's third law the electrons cannot exert a net force upon themselves. In TDDFT the total force equals the sum of the external force, the Hartree force and the XC force. Since the Hartree force integrates to zero (Newton's third law again), the total XC force does so as well. A similar general argument can be applied to the total torque, showing that the net XC torque must be zero. These requirements then have to be imposed on the approximate XC potentials [9].

The question we deal with in this paper is how to construct approximations to the XC potentials that manifestly obey the zero XC force and torque conditions. For this purpose, we develop the concept of an XC "action", obtaining the potentials as functional derivatives of such an action. This is similar to the concepts of DFT, where the XC potential is functionally derived from an energy functional. Our action is a functional \\(S\\left[\\mathbf{u}\\right]\\) of the electron fluid velocity field \\(\\mathbf{u}\\left(\\mathbf{r},t\\right)\\) [...] in section IV. The result of that section is a reasonably general form of XC potential that conforms to the zero torque and force conditions. In section V we connect the general theory to known results for linear response of the homogeneous electron gas (HEG), arriving at a plausible form of the action functional that is compatible with the HEG longitudinal and transverse response functions.
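Since the zero-force and zero-torque conditions recur throughout what follows, a small numerical illustration may help. The sketch below is not taken from the paper: it puts an arbitrary smooth density and an arbitrary trial XC potential on a grid, forms the XC force density \\(-n\\nabla v_{\\rm XC}\\), and integrates its total force and torque. A generic local potential of this kind will in general violate both sum rules, which is precisely the pathology the Galilean-invariant construction developed below is designed to exclude.

```python
import numpy as np

# Check the two sum rules for a trial (n, v_xc) pair on a cubic grid:
#   total XC force  = integral of -n grad(v_xc) d^3r      (should vanish)
#   total XC torque = integral of r x (-n grad(v_xc)) d^3r (should vanish)
L, N = 10.0, 64
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

n = np.exp(-(X**2 + Y**2 + Z**2) / 4.0)                      # arbitrary test density
v_xc = -0.5 * np.exp(-((X - 0.3)**2 + Y**2 + Z**2) / 3.0)    # arbitrary trial potential

grad = np.gradient(v_xc, dx, edge_order=2)                   # (d/dx, d/dy, d/dz)
f = [-n * g for g in grad]                                   # XC force density components

total_force = np.array([fi.sum() for fi in f]) * dx**3
r = np.stack([X, Y, Z]).reshape(3, -1).T
fv = np.stack(f).reshape(3, -1).T
total_torque = np.cross(r, fv).sum(axis=0) * dx**3

print("total XC force :", total_force)    # generally nonzero for an ad hoc local potential
print("total XC torque:", total_torque)
```

The same checker can be applied to any candidate potential; a functional derived from a Galilean-invariant action should drive both integrals to zero up to discretization error.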
## II Galilean invariant action As noted above, Galilean invariance of the action means that observers in different Galilean frames report the same value for the XC action. To further explain this point, let us consider two types of relative motion: translational and rotational. One observer, using \"unprimed\" coordinates, denotes the current density as \\(\\,\\mathbf{j}(\\mathbf{R},t)\\,\\) and particle density as \\(\\,n\\left(\\mathbf{R},t\\right).\\,\\) A second observer is using primed coordinates and its coordinate origin is accelerating with respect to the first observer's origin. A given point in space designated as \\(\\,\\mathbf{R}\\,\\) by the first observer is designated by \\[\\mathbf{R}^{\\prime}=\\mathbf{R}+\\mathbf{x}\\left(t\\right) \\tag{1}\\] by the second observer. Let us assume for simplicity that the axes of the two coordinate systems remain parallel (rotations are considered next). Since both observers are studying the same electronic system, the density and velocity functions must be related by: \\[\\begin{split} n^{\\prime}\\!\\left(\\mathbf{R}^{\\prime}\\!,t\\right)& =n\\left(\\mathbf{R},t\\right)=n\\!\\left(\\mathbf{R}^{\\prime}-\\mathbf{x }\\left(t\\right),t\\right)\\\\ \\mathbf{u}^{\\prime}\\!\\left(\\mathbf{R}^{\\prime}\\!,t\\right)& =\\mathbf{u}\\left(\\mathbf{R},t\\right)+\\dot{\\mathbf{x}}\\left(t\\right)= \\mathbf{u}\\!\\left(\\mathbf{R}^{\\prime}-\\mathbf{x}\\left(t\\right),t\\right)+\\dot {\\mathbf{x}}\\left(t\\right)\\end{split} \\tag{2}\\] Following ideas put forth by Vignale[12], we showed in ref.[11] that in order to obtain zero XC force, we demand translational invariance i.e. \\[S\\left[\\mathbf{u}\\right]=S\\!\\left[\\mathbf{u}^{\\prime}\\right] \\tag{3}\\] Now, let us turn our attention to zero torque condition. Again, we refer to two observers. The first is the unprimed observer using coordinates \\(\\,\\mathbf{R}\\,\\) while the second is the double-primed observer. Both observers agree on the origin of coordinate systems, but not on the directions of the axes. At time \\(\\,t\\,\\) a point in space labeled by the unprimed observer as \\(\\,\\mathbf{R}\\,\\) is labeled by the double-primed observer as: \\[\\mathbf{R}^{\\prime\\prime}=M\\left(t\\right)\\mathbf{R} \\tag{4}\\] where \\(\\,M\\left(t\\right)\\,\\) is some instantaneous orthogonal matrix (with unit determinant) describing the mutual rotation of the axes (for convenience, we assume that \\(\\,M\\equiv 1\\,\\) when \\(\\,t=0\\,\\)). The density and velocity fields as defined by this third observer are: \\[\\begin{split} n^{\\prime\\prime}\\!\\left(\\mathbf{R}^{\\prime\\prime} \\!,t\\right)&=n\\left(\\mathbf{R},t\\right)=n\\!\\left(M\\left(t\\right)^ {-1}\\mathbf{R}^{\\prime\\prime}\\!,t\\right)\\\\ \\mathbf{u}^{\\prime\\prime}\\!\\left(\\mathbf{R}^{\\prime\\prime}\\!,t\\right)& =M\\left(t\\right)\\mathbf{u}\\!\\left(\\mathbf{R},t\\right)+\\dot{M} \\left(t\\right)\\mathbf{R}\\\\ &=M\\left(t\\right)\\mathbf{u}\\!\\left(M\\left(t\\right)^{-1}\\mathbf{R }^{\\prime\\prime}\\!,t\\right)+\\dot{M}\\left(t\\right)M\\left(t\\right)^{-1}\\mathbf{R }^{\\prime\\prime}\\end{split} \\tag{5}\\] As for the zero XC force condition, zero total XC-torque is guaranteed when the XC action is RI, \\[S\\left[\\mathbf{u}\\right]=S\\!\\left[\\mathbf{u}^{\\prime\\prime}\\right] \\tag{6}\\] An action which obeys both eqs. (3) and (6) is called a GI action. ## III A generic GI action Since the TDDFT action is unknown, we are focusing in this article on generic forms that guarantee GI, but are otherwise arbitrary. 
These are _plausible_ forms for the action, which can serve as templates for constructing approximate actions with prescribed properties. Being practical, we want to concentrate on relatively simple non-trivial generic forms. The basic idea is to first identify GI quantities accessible by TDCDFT and then to write the action _in terms of them_.

### GI Action depending on the Lagrangian density

What are the simply accessible GI quantities? We follow previous works [13, 8, 11] and consider the Lagrangian coordinates \\(\\mathbf{R}\\left(\\mathbf{r},t\\right)\\) defined by:

\\[\\dot{\\mathbf{R}}\\left(\\mathbf{r},t\\right)=\\mathbf{u}\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right),t\\right)\\qquad\\mathbf{R}\\left(\\mathbf{r},0\\right)=\\mathbf{r} \\tag{7}\\]

\\(\\mathbf{R}\\left(\\mathbf{r},t\\right)\\) is the position at time \\(t\\) of a fluid element originating at a point labeled \\(\\mathbf{r}\\); in other words, \\(\\mathbf{R}\\left(\\mathbf{r},t\\right)\\) is the _trajectory_ of the fluid element \\(\\mathbf{r}\\). The coordinate \\(\\mathbf{r}\\) can be viewed as an Eulerian coordinate, thus \\(\\mathbf{R}\\left(\\mathbf{r},t\\right)\\) is the Eulerian-Lagrangian transformation (ELT). Inventing memory functionals in the Lagrangian frame is easier because local memory is naturally described _within_ a fluid element. The Lagrangian density

\\[N\\left(\\mathbf{r},t\\right)=n\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right),t\\right) \\tag{8}\\]

is GI, i.e. it is invariant with respect to both linearly and rotationally accelerating observers. Let us show this explicitly. Consider first accelerations. We assume both observers label the different fluid elements according to Eq. (1). Thus from Eq. (2) we see that \\(N\\left(\\mathbf{r},t\\right)\\) is the same for both observers, or explicitly:

\\[\\begin{split} N^{\\prime}\\left(\\mathbf{r},t\\right)&=n^{\\prime}\\left(\\mathbf{R}^{\\prime}\\left(\\mathbf{r},t\\right),t\\right)=n^{\\prime}\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right)+\\mathbf{x}\\left(t\\right),t\\right)\\\\ &=n\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right),t\\right)=N\\left(\\mathbf{r},t\\right).\\end{split} \\tag{9}\\]

Note that the primed (unprimed) quantities refer to the measurements of the primed (unprimed) observer. Now consider a rotating (double-primed) frame with fluid parcel labeling conventions given by Eq. (4). From Eq. (5), we find that \\(N\\left(\\mathbf{r},t\\right)\\) is RI by the following consideration:

\\[N^{\\prime\\prime}\\left(\\mathbf{r},t\\right)=n^{\\prime\\prime}\\left(\\mathbf{R}^{\\prime\\prime}\\left(\\mathbf{r},t\\right),t\\right)=n\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right),t\\right)=N\\left(\\mathbf{r},t\\right). \\tag{10}\\]

Eqs. (9) and (10) show that \\(N\\left(\\mathbf{r},t\\right)\\) is indeed GI. Thus, a simple generic form for the action functional can be immediately written down as:

\\[S^{(1)}\\left[\\mathbf{u}\\right]=s_{1}\\left[N\\left[\\mathbf{u}\\right]\\right]. \\tag{11}\\]

Note that what this equation tells us is that the action, being a functional of the velocity field \\(\\mathbf{u}\\), must depend on it only _through_ the functional dependence of \\(N\\) on \\(\\mathbf{u}\\).
This latter dependence is explicit, given by Eqs. (7) and (8):

\\[n_{0},\\mathbf{u}\\rightarrow\\left[\\begin{matrix}\\mathbf{R}\\left(\\mathbf{r},t\\right)\\\\ n\\left(\\mathbf{r},t\\right)\\end{matrix}\\right]\\to N\\left(\\mathbf{r},t\\right) \\tag{12}\\]

The functional \\(s_{1}\\) in Eq. (11) is _any_ functional of its argument. Once Eq. (11) is adopted, the question shifts to producing an appropriate form for \\(s_{1}\\). This form is chosen so as to produce specific known physical properties of a general electronic system. Since we will generalize Eq. (11) in the next subsection, we will not dwell upon the form of \\(s_{1}\\); we will discuss the form of the more general case.

### GI Action depending on the ELT metric tensor

Looking for a more general form, we now consider the Jacobian matrix of the ELT:

\\[\\Im_{ij}\\left(\\mathbf{r},t\\right)=\\partial_{j}R_{i}\\left(\\mathbf{r},t\\right) \\tag{13}\\]

Here and henceforth we use the notation \\(\\partial_{i}\\equiv\\partial/\\partial r_{i}\\), \\(i=1,2,3\\). This matrix is TI, as can be straightforwardly verified [11]. However, \\(\\Im\\) is _not_ RI. Indeed, the following transformation, derived from the definition of the rotation, \\(\\mathbf{R}^{\\prime\\prime}=M\\left(t\\right)\\mathbf{R}\\), must hold:

\\[\\Im^{\\prime\\prime}(\\mathbf{r},t)=M\\left(t\\right)\\Im\\left(\\mathbf{r},t\\right) \\tag{14}\\]

While \\(\\Im\\) is not RI, its _determinant_ is, since \\(\\det\\Im^{\\prime\\prime}=\\det M\\det\\Im\\) and \\(\\det M=1\\) (it is a proper rotation matrix). One can then suggest a generic functional of the form:

\\[S^{(2)}\\left[\\mathbf{u}\\right]=s_{2}\\left[\\det\\Im\\left[\\mathbf{u}\\right]\\right]. \\tag{15}\\]

Comparing with \\(S^{(1)}\\) though, we find \\(S^{(2)}\\) contains nothing new! This is because the function \\(N\\left(\\mathbf{r},t\\right)\\) is directly related to the Jacobian determinant. Indeed, the number of particles in a fluid element must be constant, so \\(n\\left(\\mathbf{R}\\left(\\mathbf{r},t\\right),t\\right)d^{3}R=n\\left(\\mathbf{r},0\\right)d^{3}r\\), and thus the ratio of volume elements, which is the Jacobian, is given by the ratio of densities:

\\[J\\left(\\mathbf{r},t\\right)^{-1}\\equiv\\left|\\det\\left[\\Im\\left(\\mathbf{r},t\\right)\\right]\\right|^{-1}=N\\left(\\mathbf{r},t\\right)/n_{0}\\left(\\mathbf{r}\\right), \\tag{16}\\]

where \\(n_{0}\\left(\\mathbf{r}\\right)=n\\left(\\mathbf{r},0\\right)\\). Thus, the functional \\(s_{1}\\) in Eq. (11) can also be thought of as a functional of \\(\\det\\left[\\Im\\right]\\). Our first attempt to introduce an action in terms of the Jacobian \\(\\Im\\) yielded nothing new (beyond Eq. (11)). So, let us return to Eq. (14), which describes how the Jacobian changes under rotations, and search for another quantity which is rotationally invariant. This leads us to consider the \\(3\\times 3\\) symmetric positive-definite ELT metric tensor:

\\[g\\left(\\mathbf{r},t\\right)=\\Im\\left(\\mathbf{r},t\\right)^{T}\\Im\\left(\\mathbf{r},t\\right). \\tag{17}\\]

It is immediately obvious from Eq. (14) and the orthogonality of \\(M\\left(t\\right)\\) that \\(g\\left(\\mathbf{r},t\\right)=g^{\\prime\\prime}\\left(\\mathbf{r},t\\right)\\). Thus \\(g\\) is RI. And since \\(\\Im\\) is TI [11], \\(g\\) is TI as well.
We conclude that the metric tensor \\(g\\) is GI. One can also see that \\(g\\) must be GI because of its geometric content: the tensor \\(g\\) essentially tells us how to compute the distance \\(dS\\) between two infinitesimally adjacent fluid elements that originally started at \\(\\mathbf{r}\\) and \\(\\mathbf{r}+d\\mathbf{r}\\) respectively:

\\[dS^{2}=\\left(\\mathbf{R}\\left(\\mathbf{r}+d\\mathbf{r},t\\right)-\\mathbf{R}\\left(\\mathbf{r},t\\right)\\right)^{2}=d\\mathbf{r}^{T}\\cdot g\\cdot d\\mathbf{r}. \\tag{18}\\]

Any two observers (rotated or accelerated) will agree upon such a distance between any two electron-fluid parcels. This is because the observers, while changing the position of their coordinate axes origins and directions, still preserve the distance scale. Thus we conclude again that \\(g\\left(\\mathbf{r},t\\right)\\) _must_ be GI.

The metric tensor is a natural quantity on which a generic action can be defined. Thus, we consider the following class of metric-tensor actions:

\\[S\\left[n_{0},\\mathbf{u}\\right]=s\\left[n_{0},g\\left[\\mathbf{u}\\right]\\right]. \\tag{19}\\]

Here, \\(n_{0}\\) is the initial ground-state density (assuming the system starts from its ground-state). It is comforting to note that in view of Eq. (16) and the fact that \\(\\left|\\det\\Im\\right|=\\sqrt{\\det g}\\), the generic action in Eq. (19) includes \\(S^{(1)}\\) in Eq. (11) as a special case. Eq. (19) tells us that the action depends on \\(\\mathbf{u}\\) only through the dependence of \\(g\\) on \\(\\mathbf{u}\\). This is an explicit dependence that we designate by:

\\[\\mathbf{u}\\rightarrow\\mathbf{R}\\rightarrow\\Im\\to g \\tag{20}\\]

Writing down Eq. (19) still tells us nothing about what the form of \\(s\\left[n_{0},g\\right]\\) is. Before we discuss this issue, we first discuss the method by which the GI potentials can be derived from the generic action in Eq. (19).

## IV The XC vector potential

In this section we set out to derive the XC potential that is obtained from the generic metric tensor action by functional derivation. It is important to realize, following van Leeuwen's work [14], that the action and the vector potentials derived from it must be defined using a Keldysh contour. The Keldysh contour allows us to avoid causality problems inherent in any "usual" action formulated in terms of the density or current density. The Keldysh contour is a closed loop \\(t\\left(\\tau\\right)\\) in the time domain, parameterized by \\(\\tau\\in\\left[0,\\tau_{f}\\right]\\) and called "pseudo-time". A closed loop means that \\(t\\left(0\\right)=t\\left(\\tau_{f}\\right)\\). The use of Keldysh contours in the context of memory functionals is explained in detail in references [11, 14]. The action is formulated in terms of the pseudo-time dependence on \\(\\tau\\). Any physical quantity is assumed to depend on \\(\\tau\\). The potentials are then obtained as functional derivatives evaluated at the physical time dependence. By that we mean that _after_ the functional derivative is taken, all quantities are evaluated on the contour, replacing the variable \\(\\tau\\) by \\(t(\\tau)\\). The potential derived from Eq. (19) is obtained from a chain-rule functional derivation.
Defining the symmetric tensor

\\[Q_{ij}\\left({\\bf r}^{\\prime},t^{\\prime}\\right)\\equiv 2\\,\\frac{\\delta s\\left[n_{0},g\\right]}{\\delta g_{ij}\\left({\\bf r}^{\\prime},t^{\\prime}\\right)}, \\tag{21}\\]

where the factor of 2 appears for later convenience. The form of the TDCDFT vector potential is obtained by considering the action change resulting from perturbing the velocity field at time \\(t\\) and position \\({\\bf R}\\equiv{\\bf R}\\left({\\bf r},t\\right)\\):

\\[a_{k}\\left({\\bf R},t(\\tau)\\right)=\\frac{1}{2}\\int_{C}\\dot{t}\\left(\\tau^{\\prime}\\right)d\\tau^{\\prime}\\int Q_{ij}\\left({\\bf r}^{\\prime},\\tau^{\\prime}\\right)\\frac{\\delta g_{ij}\\left({\\bf r}^{\\prime},\\tau^{\\prime}\\right)}{\\delta u_{k}\\left({\\bf R},\\tau\\right)}d^{3}r^{\\prime}, \\tag{22}\\]

(here we use the convention that repeated indices are summed over). We note that the integration on time here is performed as an integration over the Keldysh contour [14], described fully in ref. [11]. All quantities that depend on time become quantities that depend on pseudo-time \\(\\tau\\). Physical quantities however are obtained after the functional differentiation by evaluating the expressions on the Keldysh contour (i.e. taking them to depend on \\(t(\\tau)\\) instead of directly on \\(\\tau\\)). The derivative appearing in Eq. (22) is given by:

\\[\\frac{\\delta g_{ij}\\left(x^{\\prime}\\right)}{\\delta u_{k}\\left(X\\right)}=\\left[\\Im_{li}\\left(x^{\\prime}\\right)\\partial^{\\prime}_{j}+\\Im_{lj}\\left(x^{\\prime}\\right)\\partial^{\\prime}_{i}\\right]G_{lk}\\left(x^{\\prime};X\\right) \\tag{23}\\]

where \\(x^{\\prime}\\equiv\\left({\\bf r}^{\\prime},\\tau^{\\prime}\\right)\\), \\(X\\equiv\\left({\\bf R},\\tau\\right)\\) and \\(G_{lk}\\) is derived in ref. [11], given by:

\\[G_{lk}\\left(x^{\\prime};X\\right)=\\left[\\Im\\left({\\bf r}^{\\prime},\\tau^{\\prime}\\right)\\Im\\left({\\bf r}^{\\prime},\\tau\\right)^{-1}\\right]_{lk}\\theta\\left(\\tau^{\\prime}-\\tau\\right)\\delta\\left({\\bf R}\\left({\\bf r}^{\\prime},\\tau\\right)-{\\bf R}\\right) \\tag{24}\\]

Using Eqs. (23) and (24) in (22), we find, integrating by parts:

\\[a_{k}\\left({\\bf R},\\tau\\right)=-\\int_{C}\\dot{t}\\left(\\tau^{\\prime}\\right)\\int\\partial_{i}\\left[Q_{ij}\\left({\\bf r}^{\\prime},\\tau^{\\prime}\\right)\\Im_{lj}\\left({\\bf r}^{\\prime},\\tau^{\\prime}\\right)\\right]G_{lk}\\left({\\bf r}^{\\prime},\\tau^{\\prime};{\\bf R},\\tau\\right)d^{3}r^{\\prime}d\\tau^{\\prime} \\tag{25}\\]

Evaluating the integral on the Keldysh contour, we find the following general form for the vector potential:

\\[{\\bf a}\\left({\\bf R}\\left({\\bf r},t\\right),t\\right)=J\\left({\\bf r},t\\right)^{-1}\\Im\\left({\\bf r},t\\right)^{-1}\\cdots\\]
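To make the geometric objects used above more concrete, here is a small numerical sketch. It is not taken from the paper and uses an arbitrarily chosen velocity field rather than anything derived from an XC functional: it integrates the fluid-element trajectories \\(\\mathbf{R}(\\mathbf{r},t)\\) for a rigidly rotating flow, builds the Jacobian \\(\\Im_{ij}=\\partial_{j}R_{i}\\) by finite differences, and forms the metric \\(g=\\Im^{T}\\Im\\). For rigid rotation the exact result is that \\(g\\) stays equal to the identity at all times, i.e. a metric-tensor action sees no internal deformation, which is the Galilean-invariance property the construction relies on.

```python
import numpy as np

# Rigidly rotating velocity field u(R) = omega x R: the fluid is only rotated,
# never deformed, so the ELT metric g = J^T J should remain the identity.
omega = np.array([0.0, 0.0, 0.3])

def u(R):
    return np.cross(omega, R)

def trajectory(r0, t, dt=1e-3):
    """Integrate dR/dt = u(R) with R(0) = r0, using a midpoint rule."""
    R = np.array(r0, dtype=float)
    for _ in range(int(round(t / dt))):
        R = R + dt * u(R + 0.5 * dt * u(R))
    return R

def metric(r0, t, h=1e-4):
    """Finite-difference Jacobian J_ij = dR_i/dr_j and metric g = J^T J."""
    J = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3); dr[j] = h
        J[:, j] = (trajectory(r0 + dr, t) - trajectory(r0 - dr, t)) / (2 * h)
    return J.T @ J

print(np.round(metric(np.array([1.0, 0.5, -0.2]), t=2.0), 6))  # approximately the 3x3 identity
```

A shearing or compressing velocity field would instead produce a nontrivial \\(g(\\mathbf{r},t)\\), and it is this deformation history that a memory functional of the form \\(s[n_{0},g]\\) is allowed to respond to.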
Today, most applications of time-dependent density functional theory (TDDFT) use adiabatic exchange-correlation (XC) potentials that do not take into account non-local temporal effects. Incorporating such "memory" terms into XC potentials is complicated by the constraint that the derived force and torque densities must integrate to zero at every instant. This requirement can be met by deriving the potentials from an XC action that is Galilean invariant (GI). We develop a class of simple but flexible forms for an action that respect these constraints. The basic idea is to formulate the action in terms of the Eulerian-Lagrangian transformation (ELT) metric tensor, which is itself GI. The general form of the XC potentials in this class is then derived, as is the linear response limit.
# Efficient Cluster State Construction Under the Barrett and Kok Scheme Simon Charles Benjamin Department of Materials, University of Oxford. [email protected] ###### In Ref. [1] Barrett and Kok (BK) describe a beautifully simple scheme for entangling separated matter qubits via an optical \"which-path-erasure\" process. Their scheme is necessarily probabilistic, with a destructive failure outcome that must occur at least 50% of the time. Therefore they suggest using the process to construct a cluster state. The term cluster state, along with the more general term graph state, is used to refer to a multi-qubit entangled state with which one can subsequently perform 'one-way' quantum computation purely via local measurements [2; 3]. The construction of such a state can tolerate an arbitrarily high failure rate, within the overall decoherence time, providing that successes are (a) known and (b) high fidelity. These properties are exhibited by the BK scheme and thus it is an efficient route to QC in the formal sense. However, in practice it is vital to know the overhead implied by the finite success probability \\(p\\). In their paper Barrett and Kok suggest that it is necessary to build linear fragments of length greater than \\(1/p\\) in order that, when one subsequently attempts to join those fragments onto some nascent cluster state, there will be a net positive growth. In fact the requirement seems to be rather less severe: the graph state created by a successful fusion of a simple EPR pair possesses a _redundant'_ end' in such a way that when a subsequent addition fails, the total length does not decrease. (An EPR pair is equivalent, up to local unitaries, to an isolated two-qubit graph edge - I use term EPR in that sense.) A real decrease occurs only when a success is followed by _two consecutive_ failures. The general action of a complete (two round) BK entanglement process is as shown in Fig. 1 (see Appendix for analysis) - it is evidently a fusion operation yielding a redundant qubit, in the sense of Ref. [2]. The redundancy proves to be absolutely ideal for efficient cluster state construction, given the necessarily large failure probability. The simplest interesting strategy, which I denote \\(\\mathbb{S}1\\), is shown in Fig. 2: we prepare EPR pairs (at a cost \\(1/p\\) operations each) and attempt to attach them to dangling bonds on the existing cluster. On success, the cluster gains two edges; on failure it loses one edge. The strict limit above which net growth is possible is then seen to be \\(p>1/3\\). However, there are strategies involving preparing larger fragments prior to attachment to the main clus Figure 1: Effect of the BK scheme on arbitrary input cluster states. The outcomes are probabilistic: in ideal circumstances there is a 50% chance of the success, yielding a form of fusion in the sense of Ref.[2]. The set of connections radiating from the two marked qubits is completely arbitrary, including the case that there are no connections - in which case the process simply couples two isolated qubits to form a single graph segment (equivalent to an EPR pair). Figure 2: Simple strategy S1 for growing a cluster state, here a linear chain. We use the rule in Fig. 1 and add in an EPR pair at each stage. In the uppermost row the fragments incorporate linear sections of 2,4,4,5 qubits respectively. ter, which perform better than \\(\\mathbb{S}1\\) regardless of \\(p\\). 
(This is in contrast to the procedure in Ref.[1], where one only resorts to larger pre-prepared linear sections if growth is impossible otherwise.) The following strategy, \\(\\mathbb{S}2\\), is an example: 1. Prepare a 3-node from EPR pairs, as in Fig. 2. 2. Attempt to attach that 3-node to the main cluster; 3. If successful, we have increased the number of edges by four - we may now go to (1) and repeat. 4. If unsuccessful, we have reduced the number of edges in the cluster by 1, and reduced our 3-node to a linear section of 3 qubits. We then attempt to upgrade this section back to a 3-node by attaching one further qubit to the central qubit. On success we jump to (2), on failure we have no remaining resources and must begin again at (1). This strategy will lead to cluster growth provided \\(p>\\frac{1}{5}\\). Over the full range \\(\\frac{1}{2}>p>\\frac{1}{5}\\) the strategy is less costly than \\(\\mathbb{S}1\\), an observation which suggests that in general the optimal strategy may involve growing large fragments prior to attachment. At \\(p=\\frac{1}{2}\\), the cost per edge is 6, compared to 14 for the BK scheme according to the quantity \\(C_{3}=(p^{-2}+p^{-1}+1)/(3p-1)\\). At \\(p=0.4\\) the cost per edge is 13, which compares to 48.75 under BK - a trend to increasing gain as \\(p\\) falls. When one introduces recycling into the BK strategy (as they suggest) then their costs do fall slightly: 12 for \\(p=\\frac{1}{2}\\) and 41.25 for \\(p=0.4\\) - but the same observations apply. As shown in Fig. 2 and the upper part of Fig. 3, when one creates a linear section using \\(\\mathbb{S}1\\) or \\(\\mathbb{S}2\\) there will be a large number of apparently redundant nodal 'leaves'. These could of course be pruned off by Z-measurements, once the target length is obtained. However they are in fact highly useful: they permit one to join together linear sections into higher dimensional arrays _without_ additional EPR fragments. This is indicated in the lower part of Fig. 3. We will successfully convert a proportion \\(p\\) of our leaves to the 'T' cross pieces. At \\(p=0.4\\), our cost-per-edge for building quasi-linear sections under \\(\\mathbb{S}2\\) was 13; we lose only small proportion of our total number of edges as we connect these linear sections, and I estimate the final cost at about 16. I emphasize that there is no reason to suppose that strategy \\(\\mathbb{S}2\\) is optimal. The potential efficiency of this BK based approach is therefore comparable to non-destructive growth schemes such as the recently proposed ingenious \"repeat-until-success\" process [4]. I have recently been made aware[5] that the BK scheme may also be enhanced at a another level, in parallel to the strategy refinements explained here. The idea is to address the steps involved _within_ each EO. The BK protocol involves a clever 'double heralding' which filters out the unwanted component \\(\\left|11\\right\\rangle\\) from the qubit state, even when photon loss is present. As a development of this idea, one can postpone the filtering steps from successive EO's and subsume them into a single subsequent step. Thanks to Sean Barrett, Dan Browne, Earl Campbell, Jens Eisert, Pieter Kok, Bill Munro and Tom Stace for helpful discussions. ## Appendix: Analysis of the Fusion Process The state specified by the 'before' diagram in Fig. 
1 can be written as:

\\[\\left(\\left|0\\right\\rangle+\\left|1\\right\\rangle\\sigma_{\\mathrm{L}}^{Z}\\right)\\left(\\left|0\\right\\rangle+\\left|1\\right\\rangle\\sigma_{\\mathrm{R}}^{Z}\\right)\\left|X\\right\\rangle\\]

Here \\(\\left|X\\right\\rangle\\) represents the graph state obtained by deleting the two marked qubits. The operator \\(\\sigma_{\\mathrm{L}}^{Z}\\equiv\\sigma_{1}^{Z}\\sigma_{2}^{Z}\\cdots\\sigma_{j}^{Z}\\) is the product of \\(\\sigma^{Z}\\) operators applied to each of those qubits \\(1..j\\) inside \\(\\left|X\\right\\rangle\\) to which our _left_ hand qubit is attached by a graph edge. The operator \\(\\sigma_{\\mathrm{R}}^{Z}\\) is analogously defined for our right hand qubit. Following the BK procedure, prior to measurement we make a \\(\\sigma^{X}\\) operation on (say) the left qubit, giving:

\\[\\left(\\left|10\\right\\rangle+\\left|11\\right\\rangle\\sigma_{\\mathrm{R}}^{Z}+\\left|00\\right\\rangle\\sigma_{\\mathrm{L}}^{Z}+\\left|01\\right\\rangle\\sigma_{\\mathrm{L}}^{Z}\\sigma_{\\mathrm{R}}^{Z}\\right)\\left|X\\right\\rangle\\]

The action of the optical entanglement process is defined by one of the four projection operators, \\(\\left|00\\right\\rangle\\left\\langle 00\\right|\\), \\(\\left|11\\right\\rangle\\left\\langle 11\\right|\\), \\(\\left|10\\right\\rangle\\left\\langle 10\\right|\\pm\\left|01\\right\\rangle\\left\\langle 01\\right|\\). Each is associated with a unique measurement signature. The former two are the destructive failures, and the latter two are the successes. Assuming success we have

\\[\\left(\\left|10\\right\\rangle\\pm\\left|01\\right\\rangle\\sigma_{\\mathrm{L}}^{Z}\\sigma_{\\mathrm{R}}^{Z}\\right)\\left|X\\right\\rangle\\]

Now we flip the left qubit again, and fix the minus sign, if it has occurred, with a \\(\\sigma_{Z}\\) on either qubit:

\\[\\left(\\left|00\\right\\rangle+\\left|11\\right\\rangle\\sigma_{\\mathrm{L}}^{Z}\\sigma_{\\mathrm{R}}^{Z}\\right)\\left|X\\right\\rangle\\]

Evidently this is a state where a single redundantly encoded qubit (in the sense of Ref. [2]) inherits all the bonds of the previous pair (modulo 2, i.e. if the previous pair bonded to a common qubit in \\(|X\\rangle\\), there is no bond to that qubit). The final cluster state in Fig. 1 is then obtained simply by applying a Hadamard rotation to one of our pair, which now becomes our 'leaf'. Obviously these steps subsequent to the measurement can be compressed to a single operation on one qubit.

Figure 3: Above: a typical linear section grown by the methods depicted in Fig. 2 will feature a large number of nodal 'leaves' (green). Below: A higher dimensional network formed by fusing the nodal 'leaves' of different linear sections, without supplying further EPR pairs. One successfully fuses a proportion \\(p\\) of the original leaves, and these now provide a dense interconnected structure. There are further fusion opportunities afforded by the second generation leaves. Here the network is shown in 2D for clarity, but obviously the connections need not have this local structure. A graph of this kind is a resource for efficient one-way quantum computation.

## References

* (1) S.D. Barrett and P. Kok, Phys. Rev. A **71**, 060310(R) (2005).
* (2) D.E. Browne and T. Rudolph, quant-ph/0405157.
* (3) M. Hein, J. Eisert and H.J. Briegel, Phys. Rev. A **69**, 062311 (2004).
* (4) Y.L. Lim, A. Beige, and L.C. Kwek, Phys. Rev. Lett. **95**, 030505 (2005).
* (5) Bill Munro, private communication.
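As a numerical cross-check of the bookkeeping used in the strategy comparison above, the following sketch (not taken from the paper, and covering only the simple strategy \\(\\mathbb{S}1\\), not \\(\\mathbb{S}2\\)) simulates the stated rules: an EPR pair costs on average \\(1/p\\) entangling operations (EOs), an attachment attempt costs one more EO, and it adds two edges with probability \\(p\\) or removes one edge otherwise. The implied closed form, cost per edge \\(=(1+1/p)/(3p-1)\\), equals 6 at \\(p=1/2\\) and diverges at the \\(p=1/3\\) threshold quoted in the text; the \\(C_{3}\\) expression for the original BK strategy is evaluated alongside for comparison.

```python
import random

def s1_cost_per_edge(p, cycles=200_000, seed=1):
    """Monte Carlo estimate of entangling operations spent per graph edge under S1."""
    rng = random.Random(seed)
    eos, edges = 0, 0
    for _ in range(cycles):
        while True:                      # build one EPR pair: repeat the EO until it succeeds
            eos += 1
            if rng.random() < p:
                break
        eos += 1                         # one further EO to attempt the attachment
        edges += 2 if rng.random() < p else -1
    return eos / edges

for p in (0.5, 0.45, 0.4):
    exact = (1 + 1 / p) / (3 * p - 1)
    bk_c3 = (p**-2 + p**-1 + 1) / (3 * p - 1)
    print(f"p={p}: S1 simulated {s1_cost_per_edge(p):.1f}, S1 exact {exact:.1f}, BK C3 {bk_c3:.1f}")
```

Extending the simulation to \\(\\mathbb{S}2\\) would require the extra bookkeeping for building and repairing 3-nodes described in the numbered procedure above; the point here is only that the simple accounting reproduces the quoted BK comparison figures.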
Recently Barrett and Kok (BK) proposed an elegant method for entangling separated matter qubits. They outlined a strategy for using their entangling operation (EO) to build graph states, the resource for one-way quantum computing. However by viewing their EO as a graph fusion event, one perceives that each successful event introduces an ideal redundant graph edge, which growth strategies should exploit. For example, if each EO succeeds with probability \\(p\\gtrsim 0.4\\) then a highly connected graph can be formed with an overhead of only about ten EO attempts per graph edge. The BK scheme then becomes competitive with the more elaborate entanglement procedures designed to permit \\(p\\) to approach unity.
# New Jupiter and Saturn formation models meet observations

Yann Alibert,1 Olivier Mousis,1,2 Christoph Mordasini,1 & Willy Benz1

Footnote 1: affiliation: Physikalisches Institut, University of Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland. email: [email protected], [email protected], [email protected], [email protected]

Footnote 2: affiliation: Observatoire de Besancon, CNRS-UMR 6091, BP 1615, 25010 Besancon Cedex, France.

## 1 Introduction

The standard giant planet formation scenario is the so-called core-accretion model. In this model, a solid core is formed first by the accretion of solid planetesimals. As the core grows, it eventually becomes massive enough to gravitationally bind some nebular gas, forming a gaseous envelope in hydrostatic equilibrium. The further increase in core and envelope masses leads to larger and larger radiative losses which ultimately prevent the existence of an equilibrium envelope. Runaway gas accretion occurs, rapidly building up a giant planet. This scenario, which naturally implies giant planets enriched in heavy elements compared to the Sun, has suffered so far from the problem that the resulting formation time is comparable to, or longer than, the lifetime of protoplanetary disks as inferred from observations (Pollack et al. 1996 - hereafter P96, Haisch et al. 2001). One approach to solve this long-standing problem has been to revise the opacities used to model the planet's envelope (see Hubickyj et al. 2003) and/or to allow for a local enhancement in the number of planetesimals (Klahr & Bodenheimer 2003). However, Alibert et al. (2004, 2005a - hereafter A04 and A05a) have shown recently that extending the original core accretion scenario to include migration of the growing planet and protoplanetary disk evolution results in a formation speed-up by over an order of magnitude even without local density enhancements or modified opacities1. In this Letter, we show that in addition to solving the formation timescale problem, these models can also account for the characteristics of the two most well known giant planets: Jupiter and Saturn.

Footnote 1: The capability of migration to prevent the isolation of the protoplanetary core was already stated by Ward (1989) and Ward & Hahn (1995).

Within the framework of our model, we find that the uncertainties on the characteristics of the initial protoplanetary disk are large enough to allow us to match the observed properties of a single planet (Jupiter for example) relatively easily. The situation is more complicated with _two_ planets forming within the _same_ protoplanetary disk. For each satisfactory model matching the observed properties of Jupiter (total mass, distance to the Sun, mass of the core, total mass of heavy elements and volatile enrichments), only two parameters are left in order to account for the same five quantities in Saturn: the initial location of the embryo and the time offset (which can be equal to 0) between the start of the formation of both planets. The purpose of this Letter is to show that, by assuming reasonable properties for the initial protoplanetary disk, it is possible to construct models of Jupiter and Saturn compatible with all the observations detailed in the next two paragraphs.

Using measurements of Jupiter and Saturn (mass, radius, surface temperature, gravitational moments, etc.)
and state-of-the-art structure modeling, Saumon & Guillot (2004 - hereafter SG04) have derived important constraints regarding the possible internal structure of these planets. From this modeling, \\(M_{\\rm core}\\), the mass of the core of the planet, and \\(M_{\\rm Z,enve}\\), the amount of heavy elements in the envelope (assumed to be homogeneously distributed), can be obtained. In the case of Jupiter, the maximum total amount of heavy elements present in the planet (\\(M_{\\rm core}+M_{\\rm Z,enve}\\)) is of the order of \\(\\sim 42M_{\\oplus}\\) (Earth masses), whereas the mass of the core can vary from 0 to \\(13M_{\\oplus}\\). This large uncertainty is essentially due to the uncertainties in the equation of state of hydrogen. In the case of Saturn, \\(M_{\\rm Z,enve}\\) ranges from nearly 0 to 10 \\(M_{\\oplus}\\), the mass of the core being between 8 and 25 \\(M_{\\oplus}\\). Note however that the mass of the solid core might be reduced by up to \\(\\sim 7M_{\\oplus}\\) depending upon the extent of sedimented helium, a process which is required to explain the present day luminosity of the planet (Fortney & Hubbard 2003, Guillot 2005).

Abundances of some volatile species in the atmosphere of Jupiter have been measured using the mass spectrometer on-board the _Galileo_ probe (see Mahaffy et al. 2000, Wong et al. 2004). These measurements show that the planet's atmosphere is enriched in C, N, S, Ar, Kr, and Xe by a factor of \\(3.7\\pm 0.9\\), \\(3.2\\pm 1.2\\), \\(2.7\\pm 0.6\\), \\(1.8\\pm 0.4\\), \\(2.4\\pm 0.4\\), and \\(2.1\\pm 0.4\\) respectively compared to solar values (Lodders 2003). For Saturn, ground-based observations (Brigg & Sackett 1989, Kerola et al. 1997) have shown that C and N are enriched by a factor of \\(3.2\\pm 0.8\\) and \\(2.4\\pm 0.5\\) compared to the solar values. Since the two planets are almost entirely convective, we assume that these enrichments are representative of the mean envelope composition.

In Sect. 2 of this Letter, we give a short presentation of our formation models. In Sect. 3, we apply these models to Jupiter and Saturn, and show the important influence of Jupiter's formation on that of Saturn. In Sect. 4, we calculate the enrichments in volatile species in the atmosphere of the two planets, and Sect. 5 is devoted to summary and conclusions.

## 2 Formation models

Our formation models consist of the simulation of the time evolution of the protoplanetary disk and of the two planetary seed embryos that will eventually lead to Jupiter and Saturn. We calculate in a consistent way the structure and evolution of the disk, the migration of the planets, and their growth in mass due to accretion of gas and planetesimals. The evolution of the protoplanetary disk is calculated in the framework of the \\(\\alpha\\) formalism (Shakura & Sunyaev 1973). The initial gas surface density \\(\\Sigma\\) inside the protoplanetary disk (which extends from 0.25 AU to 30 AU) is given by \\(\\Sigma\\propto r^{-3/2}\\). The gas to solids ratio is constant in the whole disk (the embryos always stay beyond the ice line), with \\(\\Sigma_{\\rm gas}/\\Sigma_{\\rm solids}=70\\). The gas surface density evolves as a result of viscous transport and photoevaporation:

\\[\\frac{d\\Sigma}{dt}=\\frac{3}{r}\\frac{\\partial}{\\partial r}\\left[r^{1/2}\\frac{\\partial}{\\partial r}\\left(\\nu\\Sigma r^{1/2}\\right)\\right]+\\dot{\\Sigma}_{w}(r).\\]

The photoevaporation term \\(\\dot{\\Sigma}_{w}(r)\\) is taken as in Veras & Armitage (2004).
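To illustrate the kind of calculation this equation implies, here is a deliberately simplified, explicit finite-difference sketch (not the authors' code): it holds \\(\\nu\\) constant instead of deriving it from the vertical structure with the \\(\\alpha\\) prescription, and it replaces the Veras & Armitage (2004) photoevaporation term with a crude constant sink outside 5 AU, so all numbers in it are purely illustrative.

```python
import numpy as np

# Explicit integration of
#   d(Sigma)/dt = (3/r) d/dr[ sqrt(r) d/dr( nu*Sigma*sqrt(r) ) ] + Sigma_w(r)
# on the 0.25-30 AU grid quoted in the text.  Units: AU, yr, solar masses.
nr = 200
r = np.linspace(0.25, 30.0, nr)
dr = r[1] - r[0]
sigma = 1.0e-4 * r**-1.5                      # initial Sigma ~ r^(-3/2), arbitrary normalisation
nu = 1.0e-5                                   # constant viscosity in AU^2/yr (illustrative)
sigma_w = np.where(r > 5.0, -1.0e-12, 0.0)    # crude photoevaporation sink (illustrative)

dt = 0.1 * dr**2 / (3.0 * nu)                 # respect the explicit stability limit
t, t_end = 0.0, 1.0e5                         # evolve for 10^5 yr
while t < t_end:
    f = nu * sigma * np.sqrt(r)
    flux = np.sqrt(r) * np.gradient(f, dr)
    sigma = sigma + dt * (3.0 / r * np.gradient(flux, dr) + sigma_w)
    sigma = np.maximum(sigma, 0.0)            # no negative surface densities
    sigma[0], sigma[-1] = sigma[1], 0.0       # crude zero-gradient inner / empty outer boundary
    t += dt

print(f"disk mass after {t_end:.0e} yr: {np.sum(2 * np.pi * r * sigma) * dr:.2e} Msun")
```

In the actual models the viscosity field and the wind term vary in radius and time, which is what couples the disk evolution to the thermodynamic quantities discussed next.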
The thermodynamical properties of the disk as a function of position and surface density (temperature, pressure, density scale height), as well as the mean viscosity \\(\\nu\\), are calculated by solving the vertical structure equations using the method presented in Papaloizou & Terquem (1999) and A05a. These quantities are used to determine the composition of the ices incorporated in the planetesimals, and finally the enrichments in volatile species in the two planets (see Sect. 4). The key point in our models is that both planets are formed concurrently within the same disk, hence the physical assumptions and initial properties of the nebula are the same for both. For a given disk model, we begin by searching for a satisfactory model matching the observed properties of Jupiter. Once such a model is found, we try to adjust the two remaining parameters (initial location of the embryo and time delay) to find a similarly suitable model for Saturn. Our entire approach, as well as some tests we have made to check our code, can be found elsewhere (A05a); here we give some details on two points: the calculation of \\(M_{\\rm core}\\) and \\(M_{\\rm Z,enve}\\), and the migration rates.

To estimate \\(M_{\\rm core}\\) and \\(M_{\\rm Z,enve}\\), we follow the fate of the infalling planetesimals by computing their trajectory inside the envelope as well as their mass loss. The latter results from thermal effects as well as mechanical ablation due to Rayleigh-Taylor instabilities on the planetesimals' surface (Korycansky et al. 2002). This allows us to determine the fraction of planetesimals' mass that directly reaches the core, which we identify as \\(M_{\\rm core}\\), the core mass at the end of the formation process. The mass deposited inside the envelope is assumed, due to convection2, to be homogeneously distributed within the envelope and is identified as \\(M_{\\rm Z,enve}\\), the mass of heavy elements in the planet's envelope. Finally, we note that processes like core erosion or settling could occur during subsequent evolution of the planet and modify significantly the values of \\(M_{\\rm core}\\) and \\(M_{\\rm Z,enve}\\) found here (see SG04).

Footnote 2: The interior of the planets is largely unstable with regard to convection, even with the presence of molecular weight gradients: the radiative gradient is dominant over the adiabatic one by orders of magnitude.

The calculation of the planetesimal's trajectory also gives the place where the energy of planetesimals is deposited. This quantity is used to calculate the structure of the forming planet, by solving the standard internal structure equations: the amount of energy released by infalling planetesimals enters the energy equation, in the sinking approximation (see P96 and A05a).

Low mass planets undergo type I migration at a rate that is linear in the planet's mass. However, the most recent analytical estimates of type I migration rates by Tanaka et al. (2002), which have been derived assuming a laminar disk, are much too large to be compatible with the observed frequency of extra-solar planets. Therefore, planet survival implies a significantly reduced rate of type I migration. First hints of how this could be achieved have been obtained by Nelson & Papaloizou (2004) from numerical modelling of turbulent disks, in which much reduced migration rates were found.
In our calculations, we have reduced the rate of type I migration by multiplying the analytical estimates by an arbitrary factor \(f_{\rm I}\), whose value we varied in order to check its influence on the results. For higher-mass planets, the migration is of type II (Ward 1997), the rate being independent of the planet mass. When the mass of the planet becomes comparable to that of the disk, migration slows down and eventually stops. The switch from type I to type II occurs when the Hill radius of the planet is equal to the disk density scale height, which is calculated from the vertical structure of the disk. Finally, note that we do not take into account gravitational interactions between the two forming planets that could alter the migration rates. ## 3 Jupiter and Saturn formation We consider values of \(f_{\rm I}\) between 0 (no type I migration) and 0.03 (as we shall see below, higher values would imply too large starting locations of proto-Jupiter to account for the present structure of Saturn). For this range, we find suitable Jupiters to form from embryos starting between 9.2 AU (Astronomical Units) and 13.5 AU in a disk with a total mass ranging from 0.05 to 0.035 \(M_{\odot}\) (solar masses) and a total photoevaporation rate between 1 and 1.5 \(\times 10^{-8}M_{\odot}\)/yr. For all the cases considered here (\(f_{\rm I}\)=0, 0.001, 0.005, 0.01 and 0.03), it was possible to form within 3 Myr a planet whose final mass, location and global internal structure were compatible with those of Jupiter (see Fig. 1d). We note that the final structure of the planet (\(M_{\rm core}\), \(M_{\rm Z,enve}\)) is independent of the assumed type I migration rate. This rate only sets the starting location of the embryo. Concurrently with the formation of Jupiter, we also follow the growth of the proto-Saturn embryo. The latter is started at a larger heliocentric distance and with an arbitrary time delay. This implies that, depending upon initial conditions, proto-Saturn may actually enter a region of the disk already visited and consequently modified (fewer planetesimals, for example) by proto-Jupiter (see Fig. 1c). In Fig. 2, we present the successful Saturn formation model corresponding to the Jupiter model presented in Fig. 1 (red curves, \(f_{\rm I}\)=0.001). The synthetic Saturn started as an embryo at 11.9 AU, 0.2 Myr after proto-Jupiter. The resulting planet exhibits characteristics quite comparable to those of the actual Saturn (see Fig. 2d). The mass of the core is slightly lower than the one allowed by SG04. However, we recall that the mass derived in SG04 may be decreased by up to \(\sim 7M_{\oplus}\) due to the sedimentation of helium (see Sect. 1). The mass of Saturn's final core is similar to the one obtained for Jupiter. This is because the core is built from the infalling planetesimals that are able to traverse the gaseous envelope without being disrupted. For a fixed envelope, disruption is essentially a function of the size of the planetesimals, which in our work is assumed to be identical at all locations (100 km). Increasing the mass of the planetesimals by a factor of ten leads to core masses of the order of \(8M_{\oplus}\). The importance of Jupiter's wake on the formation of Saturn is evidenced by the green curves in Fig. 2a, which were obtained by forming Saturn in the absence of Jupiter in an otherwise identical disk.
In this case, the increased rate of planetesimal infall prevented the accretion of a sizeable envelope, and the resulting planet, while at Saturn's current location, remained quite small (\(20M_{\oplus}\)). This can be explained by the fact that the gas accretion rate is inversely dependent upon the energy deposited by infalling planetesimals. Thus, once proto-Saturn's feeding zone enters a region previously depleted in planetesimals by the passage of Jupiter, their infall is reduced and gas accretion proceeds at a faster rate, ultimately leading to a more massive planet than in the case without planetesimal depletion. In the latter case (ignoring the effects of Jupiter's formation), we checked that even by varying the initial location and formation starting time, it was not possible to obtain a Saturn-mass planet at its current location (see Fig. 2c). Increasing the starting location of Jupiter (beyond \(\sim 10\) AU, corresponding to \(f_{\rm I}\) larger than 0.01) results in Saturn-like planets containing too few heavy elements compared to the actual planet. Moreover, the mass of accreted planetesimals never reaches a level which could trigger a significant accretion of gas. At the present location of Saturn, the synthetic planet remains less massive than the actual one (see Fig 2c, blue curve). ## 4 Enrichments in volatile species We now concentrate on the models that can form both Jupiter and Saturn (red ones in Fig. 1a and Fig. 2a) and examine whether they can also account for the volatile abundances measured in the atmospheres of the two planets. To do this, we use the clathrate trapping theory (Lunine & Stevenson 1985) and the thermodynamical conditions inside the disk as calculated in our models to compute the composition of ices incorporated in the planetesimals. Knowing the mass of accreted planetesimals, we compute the total expected abundances of some volatile species, in our case C, N, S, Ar, Kr, and Xe. We use for this calculation the recent solar abundance determinations of Lodders (2003). C is assumed to have been present in the solar nebula vapor phase in the form of CO\({}_{2}\), CO and CH\({}_{4}\), with CO\({}_{2}\):CO:CH\({}_{4}\)=30:10:1, ratios which are compatible with ISM measurements (see Allamandola et al. 1999, Gibb et al. 2004). Moreover, N is taken to have been present in the form of N\({}_{2}\) and NH\({}_{3}\), with a ratio NH\({}_{3}\):N\({}_{2}\)=1, and S in the form of H\({}_{2}\)S and other sulfur compounds (Pasek et al. 2005). Other initial ratios of CO\({}_{2}\):CO:CH\({}_{4}\) and NH\({}_{3}\):N\({}_{2}\) can lead to slightly different abundances of volatiles, but do not modify our main conclusions. We note finally that CO\({}_{2}\) crystallizes as a pure condensate prior to being clathrated (see Alibert et al. 2005b), which has a considerable influence on the total amount of water required for trapping all the volatiles in the planet. The results of these abundance calculations in the case of Jupiter have been presented in detail elsewhere (see Alibert et al. 2005b); we only summarize the main conclusions here: C, N, S, Ar, Kr and Xe are enriched respectively by factors of about 2.8, 2.5, 2.1, 2, 2.1, 2.6 compared to their solar values. These values are compatible with the _in situ_ measurements made by the _Galileo_ probe and recalled at the beginning of this Letter. The resulting enrichment for oxygen (not yet measured) is at least O/H \(=3.4\times 10^{-3}\) or \(\sim 6\) times the solar value.
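The step from accreted ices to atmospheric enrichment factors is, at its core, a ratio of mixing ratios: the number of atoms of a species X delivered by the planetesimals, divided by the hydrogen content of the envelope, compared with the solar X/H. The sketch below spells out that arithmetic for a single species; every numerical input is a placeholder chosen for illustration, and the clathrate-composition calculation itself is not implemented here.

```python
# Schematic enrichment-factor arithmetic for one volatile species X.
# All numerical inputs are placeholders, not the values of the actual model.

M_EARTH_G = 5.974e27        # g
M_H = 1.008                 # atomic mass of hydrogen (amu)

def enrichment_factor(m_ice_earth, mass_fraction_X, mu_X,
                      m_envelope_earth, x_h_envelope, x_h_solar):
    """Ratio (X/H)_planet / (X/H)_solar produced by delivering
    m_ice_earth Earth masses of ice containing mass_fraction_X of species X
    (molecular weight mu_X) into an envelope of m_envelope_earth Earth masses
    with hydrogen mass fraction x_h_envelope."""
    n_X = m_ice_earth * M_EARTH_G * mass_fraction_X / mu_X
    n_H = m_envelope_earth * M_EARTH_G * x_h_envelope / M_H
    return (n_X / n_H) / x_h_solar

# hypothetical example: 6 Earth masses of ice, 25% of which is CO2 by mass,
# delivered into a 90 Earth-mass envelope of roughly solar hydrogen fraction
E_C = enrichment_factor(m_ice_earth=6.0, mass_fraction_X=0.25, mu_X=44.0,
                        m_envelope_earth=90.0, x_h_envelope=0.71,
                        x_h_solar=2.9e-4)   # placeholder of order the solar C/H ratio
print("carbon enrichment over solar: %.1f" % E_C)
```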
In the case of the Saturn formation model, we obtain enrichments of 2.4 and 2.2 compared to solar values for C and N, respectively. This is again compatible with the ground-based observations quoted in the introduction. Moreover, we predict that S, Ar, Kr, and Xe are enhanced by factors of respectively 1.9, 1.7, 1.9 and 2.3. Our formation model predicts the accretion of \(\sim 13.2M_{\oplus}\) of heavy elements, and the trapping of the volatiles results from the accretion of at least \(5.4M_{\oplus}\) of ices (depending on the efficacy of the clathration process). These two calculations imply that the mean Ices/Rocks (I/R) ratio of accreted planetesimals was \(>0.7\), a value consistent with the one inferred for Saturnian satellites. Finally, the resulting enrichment of O in Saturn is O/H \(\sim 3\times 10^{-3}\), i.e. 5.2 times the solar value. ## 5 Summary and discussion We have calculated in this Letter formation models of the two gas giant planets of our Solar System, in the framework of our extended core-accretion models taking into account migration and disk evolution. The calculations presented here are simplified in some aspects that could be improved in the future. In particular, our disk model is calculated in the framework of the \(\alpha\) formalism of Shakura & Sunyaev (1973), which itself is a limitation. Moreover, we do not take into account gravitational interactions between the two planets that can alter the migration rates. Our calculations allowed us to give estimates of the core mass, the enrichment in heavy elements, and the enrichment in volatile species that can be compared with observational data about Jupiter and Saturn. These calculations therefore show that our models can lead to the formation of two giant planets closely resembling our Jupiter and Saturn in less than 3 Myr. In order for our synthetic planets to match the bulk properties of the two gas giants in our solar system, we found that Jupiter must have started at a heliocentric distance smaller than \(\sim 10\) AU; otherwise Saturn, which follows in its trail, cannot accrete enough heavy elements. The heavy-element content as well as the core mass obtained for our synthetic planets are in good agreement with the interior models of SG04. Finally, in the framework of our model, the enrichments (compared to solar) of some volatile species measured in the atmosphere of both giant planets can also be accounted for in a self-consistent manner. However, we note that, recently, the _Cassini_ spacecraft measurements have led to a revised value of the abundance of C in Saturn's atmosphere of \(8.1\pm 1.6\) times the solar value (see Flasar et al. 2005). Using the clathrate hydrate trapping theory, and assuming the minimum value for the C abundance (6.5 times the solar value), we obtain abundances of N, S, Ar, Kr, and Xe of respectively 5.9, 5, 4.7, 5.1 and 6.1 times the solar values. The resulting O enrichment would be about 14 times the solar value. These predicted enrichments are significantly higher than the ones quoted in Sect. 1, in particular for N. The future confirmation of both the new measurement of C and of the "old" value of N would imply that there have been some unknown fractionation processes between these species in the solar nebula gas-phase, and that their abundances in Saturn cannot be explained using solely the standard clathrate hydrate trapping theory. However, the measurement of C in Saturn by the _Cassini_ spacecraft may be subject to revision in the near future.
This work was supported in part by the Swiss National Science Foundation. O.M. was supported by an ESA external fellowship.

## References

* Alibert, Y., Mordasini, C., & Benz, W. 2004, A&A, 417, L25-L28 (A04)
* Alibert, Y., Mordasini, C., Benz, W. & Winisdoerffer, C. 2005, A&A, _in press_, astro-ph/0412444 (A05a)
* Alibert, Y., Mousis, O. & Benz, W. 2005, ApJ, _in press_, astro-ph/0502325
* Allamandola, L. J., Bernstein, M. P., Sandford, S. A., & Walker, R. L. 1999, Space Sci. Rev., 90, 219
* Briggs, F. H. & Sackett, P. D. 1989, Icarus, 80, 77-103
* Flasar, F. M. et al. 2005, Science, _in press_
* Fortney, J. & Hubbard, W. B. 2003, Icarus, 164, 228
* Gibb, E. L., Whittet, D. C. B., Boogert, A. C. A., & Tielens, A. G. G. M. 2004, ApJS, 151, 35
* Guillot, T. 2005, Annual Review of Earth and Planetary Sciences, _in press_
* Haisch, K. E., Lada, E. A. & Lada, C. J. 2001, ApJ, 553, L153-L156
* Hartmann, L., Calvet, N., Gullbring, E. & D'Alessio, P. 1998, ApJ, 495, 385
* Hubickyj, O., Bodenheimer, P. & Lissauer, J. J. 2003, Proceedings of the 35th DPS meeting, 25.06
* Kerola, D. X., Larson, H. P. & Tomasko, M. G. 1997, Icarus, 127, 190-212
* Klahr, H. & Bodenheimer, P. 2003, in ESA SP-539: Earths: DARWIN/TPF and the Search for Extrasolar Terrestrial Planets, 481-483
* Korycansky, D. G., Zahnle, K. J. & Mac Low, M. M. 2002, Icarus, 157, 1-23
* Lunine, J. I., & Stevenson, D. J. 1985, ApJS, 58, 493
* Lodders, K. 2003, ApJ, 591, 1220
* Mahaffy, P. R. et al. 2000, J. Geophys. Res., 105, 15061-15072
* Nelson, R. P. & Papaloizou, J. C. B. 2004, MNRAS, 350, 849
* Papaloizou, J. C. B. & Terquem, C. 1999, ApJ, 521, 823
* Pasek, M. A., Milsom, J. A., Ciesla, F. J. et al. 2005, Icarus, _in press_
* Pollack, J. B., Hubickyj, O., Bodenheimer, P., Lissauer, J. J., Podolak, M., & Greenzweig, Y. 1996, Icarus, 124, 62 (P96)
* Saumon, D. & Guillot, T. 2004, ApJ, 609, 1170-1180 (SG04)
* Shakura, N. I. & Sunyaev, R. A. 1973, A&A, 24, 337
* Tanaka, H., Takeuchi, T. & Ward, W. R. 2002, ApJ, 565, 1257
* Veras, D. & Armitage, P. J. 2004, MNRAS, 347, 613
* Ward, W. R. 1989, ApJ, 345, L90
* Ward, W. R. & Hahn, J. M. 1995, ApJ, 440, L25
* Ward, W. R. 1997, ApJ, 482, L211
* Wong, M. H. et al. 2004, Icarus, 171, 153-170

Figure 1: Jupiter formation models. The red (resp. blue) curves are obtained using \(f_{\rm I}=0.001\) (resp. 0.03). (a) Mass of accreted planetesimals (dashed lines), total mass (dotted lines) and mass of solid core (solid lines) as a function of time for two simulations. (b) Heliocentric distance as a function of time. (c) Final surface density of planetesimals \(\Sigma_{p}\) as a function of heliocentric distance. (d) Core mass (\(M_{\rm core}\)) and mass of heavy elements (\(M_{\rm Z,enve}\)) dissolved in the envelope. The black curve gives the domain allowed by the present day structure models of SG04. If some core dissolution occurs after the formation, the two points would evolve along the lines.
For all the values of \\(f_{\\rm I}\\) we used, the points representing the final structures are very close to the red and blue ones. These points are not represented here for clarity. Figure 2: Saturn formation models. The red (resp. green) lines are obtained while taking into account the effect of Jupiter (resp. without this effect). (a,b) Same as Fig. 1 a,b. (c) Red small dots: different Saturn models (final mass and position), varying the initial position and the time delay after the beginning of Jupiter’s formation. The approximate maximum mass that can be reached by proto-Saturn at a given distance to the sun is indicated by the lines. The blue line is similar to the red one, except that the corresponding Jupiter formation is calculated with \\(f_{\\rm I}=0.03\\) (blue curves in Fig. 1). The green line is similar to the red one, except that the effect of Jupiter’s wake on Saturn formation is _not_ taken into account. The black star indicates Saturn's position in this diagram. (d) Core mass (\\(M_{\\rm core}\\)) and mass of heavy elements (\\(M_{\\rm Z,enve}\\)) dissolved in the envelope. The black curve gives the domain allowed by the present day structure models of SG04, taking into account a possible shell of sedimented helium, of mass between 0 and \\(7M_{\\oplus}\\).
The wealth of observational data about Jupiter and Saturn provides strong constraints to guide our understanding of the formation of giant planets. The size of the core and the total amount of heavy elements in the envelope have been derived from internal structure studies by Saumon & Guillot (2004). The atmospheric abundance of some volatile elements has been measured _in situ_ by the _Galileo_ probe (Mahaffy et al. 2000, Wong et al. 2004) or by remote sensing (Briggs & Sackett 1989, Kerola et al. 1997). In this Letter, we show that, by extending the standard core accretion formation scenario of giant planets by Pollack et al. (1996) to include migration and protoplanetary disk evolution, it is possible to account for all of these constraints in a self-consistent manner.

Key words: planetary systems - planetary systems: formation - solar system: formation
# Theory of Initialization-Free Decoherence-Free Subspaces and Subsystems Alireza Shabani\({}^{(1)}\), Daniel A. Lidar\({}^{(2)}\) \({}^{(1)}\)Physics Department, Center for Quantum Information and Quantum Control, University of Toronto, 60 St. George St., Toronto, Ontario M5S 1A7, Canada \({}^{(2)}\)Chemical Physics Theory Group, Chemistry Department, and Center for Quantum Information and Quantum Control, University of Toronto, 80 St. George St., Toronto, Ontario M5S 3H6, Canada ## I Introduction In recent years much effort has been expended to develop methods for tackling the deleterious interaction of controlled quantum systems with their environment. This effort has been motivated in large part by the need to overcome decoherence in quantum information processing tasks, a goal which was thought to be unattainable at first [1; 2; 3]. Decoherence-free (or noiseless) subspaces [4; 5; 6; 7] and subsystems [8; 9; 10; 11] (DFSs) are among the methods which have been proposed to this end, and also experimentally realized in a variety of systems [12; 13; 14; 15]. In this manner of passive quantum error correction, one uses symmetries in the form of the interaction between system and environment to find a "quiet corner" in the system Hilbert space not experiencing this interaction. Of the various methods of quantum error correction, so far only DFSs have been combined with quantum algorithms in the presence of decoherence [16; 17]. For a review of DFSs and a comprehensive list of references see Ref. [18]. We have re-examined the theoretical foundation of DFSs and have found that the conditions for their existence can be generalized. It is our purpose in this paper to present these generalized conditions. Our most significant result is a drastic relaxation of the initialization condition for DFSs: whereas it was previously believed that one must be able to perfectly initialize a state inside a DFS, here we show that this in fact need not be so. Instead one can tolerate an arbitrarily large preparation error, which in turn means significantly relaxed experimental preparation conditions. In contrast, only a small preparation error can be tolerated when quantum error correcting codes (QECC) are used to overcome decoherence [19]. Whether a similar generalization is possible in the case of QECC is an interesting open question, the answer to which may be within the realm of very recent results strengthening the DFS/QECC connection [20]. The relaxation of the initialization requirement is perhaps most significant in light of a series of results showing that a class of important quantum algorithms (Shor [21], Grover [22], and Deutsch-Jozsa [23] included) can be successfully executed under imperfect initialization conditions [24; 25; 26; 27; 28; 29; 30; 31; 32]. This means that imperfectly initialized DFSs can be used as a "substrate" for running these algorithms. To present our results we first review and re-examine the previous results on DFSs, in Section II. We do so for both general completely positive (CP) maps and for Markovian dynamics. The definitions we give for DFSs in these two cases are slightly different, reflecting the fact that Markovian dynamics is always continuous in time, whereas CP maps can also describe discrete-time evolution. In Section III, we present our generalized DFS conditions for CP maps and for Markovian dynamics. We illustrate the new conditions for Markovian dynamics with an example which reveals some of the new features.
In Section IV we discuss the implications of our relaxed initialization condition in the context of quantum algorithms. Section V is devoted to a case-study of non-Markovian dynamics, intermediate between (formally exact) CP maps and (approximate) Markovian dynamics. A unique formulation does not exist in this case, and we consider the master equation introduced in Ref. [33]. The analytical solvability of this equation permits a rigorous derivation of the conditions for a DFS. For clarity of presentation we defer most supporting calculations to the appendices. ## II Review of previous conditions for decoherence-free subspaces and subsystems We refer the reader to Ref. [18] for a detailed review, including many references and historical context. Here we focus on aspects of direct relevance to our new results. ### Decoherence-Free Subspaces Consider a system with Hilbert space \(\mathcal{H}_{S}\). In Refs. [5; 6; 7; 34; 35] a subspace \(\mathcal{H}_{\rm DFS}\subset\mathcal{H}_{S}\) was called decoherence-free if any state \(\rho_{S}(0)\) of the system initially prepared in this subspace is unitarily related to the final state \(\rho_{S}(t)\) of the system, i.e., \[\rho_{S}(0)=\mathcal{P}_{\rm d}\rho_{S}(0)\mathcal{P}_{\rm d}\Longrightarrow\rho_{S}(t)=\mathbf{U}\rho_{S}(0)\mathbf{U}^{\dagger}. \tag{1}\] Here \(\mathbf{U}\) is unitary and \(\mathcal{P}_{\rm d}\) is the projection operator onto \(\mathcal{H}_{\rm DFS}\). Important and motivating early examples of DFSs were given in [36; 37; 4; 38]. An alternative definition of a DFS is as a subspace in which the state purity is always one [39]; here we will not pursue this approach. To exploit DF-states for quantum information preservation one needs a method to experimentally verify these states [40], but from a theoretical standpoint one needs to first formulate the effect of the environment. In the following, we consider general CP maps and Markovian dynamics. #### ii.1.1 Completely Positive Maps The modeling of environmental effects on an open quantum system has been a challenging problem since at least the 1950's [41; 42], but under certain simplifying assumptions one can obtain a simple form for the dynamical equations of open systems [43]. For example, the assumption of an initially decoupled state of system and bath, \(\rho_{SB}(0)=\rho_{S}(0)\otimes\rho_{B}\), results in a CP map known as the Kraus operator sum representation [44]: \[\rho_{S}(t) = {\rm Tr}_{B}[\mathbf{\Lambda}(t)\left(\rho_{S}(0)\otimes\rho_{B}\right)\mathbf{\Lambda}(t)^{\dagger}] \tag{2}\] \[= \sum_{\alpha}\mathbf{E}_{\alpha}(t)\rho_{S}(0)\mathbf{E}_{\alpha}^{\dagger}(t).\] Here \[\mathbf{\Lambda}(t)=\mathcal{T}\exp(-i\int_{0}^{t}\mathbf{H}(s)ds) \tag{3}\] is the unitary propagator for the joint evolution of system and bath governed by total Hamiltonian \(\mathbf{H}\) (\(\mathcal{T}\) denotes time-ordering and we work in units such that \(\hbar=1\)); the "Kraus operators" \(\{\mathbf{E}_{\alpha}\}\) are given by \[\mathbf{E}_{\alpha}=\sqrt{\lambda_{\nu}}\langle\mu|\mathbf{\Lambda}|\nu\rangle;\qquad\alpha=(\mu,\nu), \tag{4}\] where \(|\mu\rangle,|\nu\rangle\) are bath states in the spectral decomposition \(\rho_{B}=\sum_{\nu}\lambda_{\nu}|\nu\rangle\langle\nu|\).
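As a concrete illustration of Eqs. (2)-(4), the following numerical sketch builds the Kraus operators of a randomly chosen joint unitary for a qubit system coupled to a qubit bath, and checks both the completeness relation and the equality of the two expressions for \(\rho_{S}(t)\). The dimensions and the random unitary are arbitrary choices made only for this check.

```python
# Numerical check of E_(mu,nu) = sqrt(lambda_nu) <mu|Lambda|nu> for a qubit
# system coupled to a qubit bath, with a randomly chosen joint unitary Lambda.
import numpy as np

rng = np.random.default_rng(0)
dS, dB = 2, 2

# random joint unitary Lambda on H_S (x) H_B via QR decomposition
A = rng.normal(size=(dS*dB, dS*dB)) + 1j*rng.normal(size=(dS*dB, dS*dB))
Lambda, _ = np.linalg.qr(A)

# bath state rho_B = sum_nu lambda_nu |nu><nu| (diagonal for simplicity)
lam = np.array([0.7, 0.3])

# arbitrary pure system state
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho_S0 = np.outer(psi, psi.conj())

# Kraus operators: the (mu, nu) bath block of Lambda, scaled by sqrt(lambda_nu)
L4 = Lambda.reshape(dS, dB, dS, dB)           # indices: (s', mu, s, nu)
E = [np.sqrt(lam[nu]) * L4[:, mu, :, nu] for mu in range(dB) for nu in range(dB)]

# completeness: sum_alpha E^dag E = I_S
print(np.allclose(sum(e.conj().T @ e for e in E), np.eye(dS)))   # True

# compare the two forms of rho_S(t): partial trace vs. Kraus sum
rho_SB = np.kron(rho_S0, np.diag(lam))
rho_t_exact = (Lambda @ rho_SB @ Lambda.conj().T).reshape(dS, dB, dS, dB).trace(axis1=1, axis2=3)
rho_t_kraus = sum(e @ rho_S0 @ e.conj().T for e in E)
print(np.allclose(rho_t_exact, rho_t_kraus))                     # True
```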
Trace preservation of \\(\\rho_{S}(t)\\) implies the sum rule \\[\\sum_{\\alpha}\\mathbf{E}_{\\alpha}^{\\dagger}\\mathbf{E}_{\\alpha}= \\mathbf{I}_{S}, \\tag{5}\\] where \\(\\mathbf{I}_{S}\\) is the identity operator on the system. In [35] a DFS-condition was derived for general CP maps of this type. We denote the subspace of states orthogonal to \\(\\mathcal{H}_{\\rm DFS}\\) by \\(\\mathcal{H}_{\\rm DFS^{\\perp}}\\), so that \\(\\mathcal{H}_{S}=\\mathcal{H}_{\\rm DFS}\\oplus\\mathcal{H}_{\\rm DFS^{\\perp}}\\). According to Eq. (4) in [35] the Kraus operators take the block-diagonal form \\[\\mathbf{E}_{\\alpha}=\\left(\\begin{array}{cc}c_{\\alpha}\\mathbf{U}_{\\rm DFS}& \\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right), \\tag{6}\\] where the upper (lower) non-zero block acts entirely inside \\(\\mathcal{H}_{\\rm DFS}\\) (\\(\\mathcal{H}_{\\rm DFS^{\\perp}}\\)); \\(\\mathbf{U}_{\\rm DFS}\\) is a unitary matrix that is independent of the Kraus operator label \\(\\alpha\\); \\(c_{\\alpha}\\) is a scalar (\\(\\sum_{\\alpha}|c_{\\alpha}|^{2}=1\\)); and \\(\\mathbf{B}_{\\alpha}\\) is arbitrary, except that \\(\\sum_{\\alpha}\\mathbf{B}_{\\alpha}^{\\dagger}\\mathbf{B}_{\\alpha}=\\mathbf{I}_{\\rm DFS ^{\\perp}}\\). It is simple to verify that the DFS definition (1) is satisfied in this case, with \\(\\mathbf{U}=\\mathbf{U}_{\\rm DFS}\\). Theorem 1 in [35] reads: \"A subspace \\(\\mathcal{H}_{\\rm DFS}\\) is a DFS iff all Kraus operators have an identical unitary representation upon restriction to it, up to a multiplicative constant.\" This theorem is actually compatible with a more general form for the Kraus operators than Eq. (6), since \"upon restriction to it\" concerns only the upper-left block of \\(\\mathbf{E}_{\\alpha}\\). We derive the most general form of \\(\\mathbf{E}_{\\alpha}\\) in Section III below, and find that, indeed, a more general form than Eq. (6) is possible: one of the off-diagonal blocks need not vanish. In other words, leakage from \\(\\mathcal{H}_{\\rm DFS^{\\perp}}\\) into \\(\\mathcal{H}_{\\rm DFS}\\) is permitted. As we further show in Section III, the form (6) in fact appears in the context of unital channels. #### ii.1.2 Markovian Dynamics The most general form of CP Markovian dynamics is given by the Lindblad equation [45; 46; 47]: \\[\\frac{\\partial\\rho_{S}}{\\partial t} = -i[\\mathbf{H}_{S},\\rho_{S}]+\\mathcal{L}[\\rho_{S}],\\] \\[\\mathcal{L} = \\sum_{\\alpha}\\mathbf{F}_{\\alpha}\\cdot\\mathbf{F}_{\\alpha}^{\\dagger }-\\frac{1}{2}\\mathbf{F}_{\\alpha}^{\\dagger}\\mathbf{F}_{\\alpha}\\cdot-\\frac{1}{2} \\cdot\\mathbf{F}_{\\alpha}^{\\dagger}\\mathbf{F}_{\\alpha}, \\tag{7}\\] where \\(\\mathbf{F}_{\\alpha}\\) are bounded (or unbounded, if subject to appropriate domain restrictions [48; 49]) operators acting on \\(\\mathcal{H}_{\\rm S}\\), and where \\(\\mathbf{H}_{S}\\) may include a Lamb shift [50]. Given such dynamics, one restores unitarity [i.e., the DFS definition (1) with \\(\\mathbf{U}\\) generated by the Hamiltonian \\(\\mathbf{H}_{S}\\)] if the Lindblad term \\(\\mathcal{L}[\\rho_{S}]\\) can be eliminated. According to Refs. [51; 6], a necessary and sufficient condition for this to be the case is \\[\\mathbf{F}_{\\alpha}|i\\rangle=c_{\\alpha}|i\\rangle, \\tag{8}\\] where \\(\\mathcal{H}_{\\rm DFS}=\\mathrm{Span}\\{|i\\rangle\\}\\) and \\(\\{c_{\\alpha}\\}\\) are arbitrary complex scalars. 
Thus the Lindblad operators can be written in block-form as follows: \\[\\mathbf{F}_{\\alpha}=\\left(\\begin{array}{cc}c_{\\alpha}\\mathbf{I}&\\mathbf{A}_ {\\alpha}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right), \\tag{9}\\] with the blocks on the diagonal corresponding once again to operators restricted to \\(\\mathcal{H}_{\\rm DFS}\\) and \\(\\mathcal{H}_{\\rm DFS^{\\perp}}\\). Note the appearance of the off-diagonal block \\(\\mathbf{A}_{\\alpha}\\) mixing \\(\\mathcal{H}_{\\rm DFS}\\) and\\(\\mathcal{H}_{\\text{DFS}^{\\perp}}\\); its presence is permitted since the DFS condition (8) gives no information about matrix elements of the form \\(\\langle i|\\mathbf{F}_{\\alpha}|j^{\\perp}\\rangle\\), with \\(|i\\rangle\\in\\mathcal{H}_{\\text{DFS}}\\) and \\(|j^{\\perp}\\rangle\\in\\mathcal{H}_{\\text{DFS}^{\\perp}}\\). As observed in Refs. [6; 35], one should in addition require that \\(\\mathbf{H}_{S}\\) does not mix DF states with non-DF ones, i.e., mixed matrix elements of the type \\(\\langle j^{\\perp}|\\mathbf{H}_{\\text{S}}|i\\rangle\\), with \\(|i\\rangle\\in\\mathcal{H}_{\\text{DFS}}\\) and \\(|j^{\\perp}\\rangle\\in\\mathcal{H}_{\\text{DFS}^{\\perp}}\\), should vanish. We show below that this condition must be made more stringent. ### Noiseless Subsystems An important observation made in Ref. [8] is that there is no need to restrict the decoherence-free dynamics to a subspace. A more general situation is when the DF dynamics is a \"subsystem\", or a factor in a tensor product decomposition of subspace. Following Ref. [8], this comes about as follows. Consider the dynamics of a system \\(S\\) coupled to a bath \\(B\\) via the Hamiltonian \\[\\mathbf{H}=\\mathbf{H}_{S}\\otimes\\mathbf{I}_{B}+\\mathbf{I}_{S}\\otimes\\mathbf{ H}_{B}+\\mathbf{H}_{I}, \\tag{10}\\] where \\(\\mathbf{H}_{S}\\) (\\(\\mathbf{H}_{B}\\)), the system (bath) Hamiltonian, acts on the system (bath) Hilbert space \\(\\mathcal{H}_{S}\\) (\\(\\mathcal{H}_{B}\\)); \\(\\mathbf{I}_{S}\\) (\\(\\mathbf{I}_{B}\\)) is the identity operator on the system (bath) Hilbert space; \\(\\mathbf{H}_{I}\\) is the interaction term of Hamiltonian which can be written in general as \\(\\sum_{\\alpha}\\mathbf{S}_{\\alpha}\\otimes\\mathbf{B}_{\\alpha}\\). If the system Hamiltonian \\(\\mathbf{H}_{S}\\) and the system components of the interaction Hamiltonian, the \\(\\mathbf{S}_{\\alpha}\\)'s, form an algebra \\(\\mathcal{S}\\), it must be \\(\\dagger\\)-closed to preserve the unitarity of system-bath dynamics. Now, if \\(\\mathcal{A}\\) is a \\(\\dagger\\)-closed operator algebra which includes the identity operator, then a fundamental theorem of C\\({}^{*}\\) algebras states that \\(\\mathcal{A}\\) is a reducible subalgebra of the full algebra of operators [52]. This theorem implies that the algebra is isomorphic to a direct sum of \\(d_{J}\\times d_{J}\\) complex matrix algebras, each with multiplicity \\(n_{J}\\): \\[\\mathcal{S}\\cong\\bigoplus_{J\\in\\mathcal{J}}\\mathbf{I}_{n_{J}}\\otimes\\mathcal{ M}(d_{J},\\mathbb{C}) \\tag{11}\\] Here \\(\\mathcal{J}\\) is a finite set labeling the irreducible components of \\(\\mathcal{S}\\), and \\(\\mathcal{M}(d_{J},\\mathbb{C})\\) denotes a \\(d_{J}\\times d_{J}\\) complex matrix algebra. Associated with this decomposition of the algebra \\(\\mathcal{S}\\) is a decomposition of the system Hilbert space: \\[\\mathcal{H}_{S}=\\bigoplus_{J\\in\\mathcal{J}}\\mathbb{C}^{n_{J}}\\otimes\\mathbb{ C}^{d_{J}}. 
\\tag{12}\\] If we encode quantum information into a subsystem (factor) \\(\\mathbb{C}^{n_{J}}\\) it is preserved, since the noise algebra \\(\\mathcal{S}\\) acts trivially (as \\(\\mathbf{I}_{n_{J}}\\)). In such a case \\(\\mathbb{C}^{n_{J}}\\) is called a decoherence-free, or noiseless subsystem (NS) [8]. Examples of this construction were given independently in Refs. [9; 11]. #### ii.2.1 Completely Positive Maps As the Kraus operators are given by Eq. (4), they take the form of the decomposition (11): \\[\\mathbf{E}_{\\alpha}=\\bigoplus_{J\\in\\mathcal{J}}\\mathbf{I}_{n_{J}}\\otimes \\mathbf{M}_{\\alpha}(d_{J}), \\tag{13}\\] where \\(\\mathbf{M}_{\\alpha}(d_{J})\\) is an arbitrary \\(d_{J}\\)-dimensional complex matrix. Therefore a factor \\(\\mathbb{C}^{n_{J}}\\) is a NS if the Kraus operators have the representation (13). #### ii.2.2 Markovian Dynamics The aforementioned reducibility theorem [52] does not apply directly in the Markovian case, since the set of Lindblad operators \\(\\{\\mathbf{F}_{\\alpha}\\}\\) need not be closed under conjugation. Nevertheless, as shown in [10], the concept of a subsystem applies in the Markovian case as well: the condition for a NS was found to be \\[\\mathbf{F}_{\\alpha}\\mathcal{P}_{\\text{d}}=\\mathbf{I}_{n_{J}}\\otimes\\mathbf{M} _{\\alpha}(d_{J})\\mathcal{P}_{\\text{d}}, \\tag{14}\\] with the \\(\\mathbf{M}_{\\alpha}\\) again being arbitrary complex matrices and \\(\\mathcal{P}_{\\text{d}}\\) being the projection operator onto a given subspace \\(\\mathbb{C}^{n_{J}}\\otimes\\mathbb{C}^{d_{J}}\\). The NS is then a factor \\(\\mathbb{C}^{n_{J}}\\) as in Eq. (12), with the same tensor product structure as in Eq. (14). ## III Generalized Conditions for Decoherence-Free Subspaces and Subsystems We now proceed to re-examine the conditions for the existence of decoherence-free subspaces and subsystems. We will show that the conditions presented in the papers laying the general theoretical foundation [5; 6; 8; 10; 34; 35; 51], can be generalized and sharpened, both for CP maps and for Markovian dynamics. Our main new finding is that the preparation step can tolerate arbitrarily large errors. Relatedly, we consider the possibility of leakage from outside of the protected subspace/subsystem into it. Previous studies did not allow for this possibility, but we will show that it can be permitted under appropriate restrictions. In doing so we generalize the definition of a NS with respect to the original definition that relied on the algebraic isomorphism (11) (see Ref. [20] for a related recent result). In the case of Markovian dynamics, our main new finding is that if one demands perfect initialization into a DFS then the condition on the Hamiltonian component of the evolution is modified compared to previous studies. The derivation of these results is somewhat tedious. Hence, for clarity of presentation we focus on presenting our generalized conditions in this section. Mathematical proofs are deferred to the appendices. We begin with the simpler case of decoherence-free subspaces and consider the case of CP maps and Markovian dynamics. We then move on to the case of decoherence-free (noiseless) subsystems. The case of non-Markovian continuous-time dynamics is treated later, in Section V. 
### Decoherence-Free Subspaces The system density matrix \(\rho_{S}\) is an operator on the entire system Hilbert space \(\mathcal{H}_{S}\), which we assume to be decomposable into a direct sum as \(\mathcal{H}_{S}=\mathcal{H}_{\mathrm{DFS}}\oplus\mathcal{H}_{\mathrm{DFS}^{\perp}}\). It is convenient for our purposes to represent the system state (and later on the Kraus and Lindblad operators) in a matrix form whose block structure corresponds to this decomposition of the Hilbert space. Thus the system density matrix takes the form \[\rho_{S}=\left(\begin{array}{cc}\rho_{\mathrm{DFS}}&\rho_{2}\\ \rho_{2}^{\dagger}&\rho_{3}\end{array}\right). \tag{15}\] We also define a projector \[\mathcal{P}_{\mathrm{DFS}}=\left(\begin{array}{cc}\mathbf{I}_{\mathrm{DFS}}&\mathbf{0}\end{array}\right), \tag{16}\] so that \(\rho_{\mathrm{DFS}}=\mathcal{P}_{\mathrm{DFS}}\rho_{S}\mathcal{P}_{\mathrm{DFS}}^{\dagger}\). Finally, \[\mathcal{P}_{\mathrm{d}}=\left(\begin{array}{cc}\mathbf{I}_{\mathrm{DFS}}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{array}\right),\quad\mathcal{P}_{\mathrm{d}^{\perp}}=\left(\begin{array}{cc}\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{I}_{\mathrm{DFS}^{\perp}}\end{array}\right) \tag{17}\] are projection operators onto \(\mathcal{H}_{\mathrm{DFS}}\) and \(\mathcal{H}_{\mathrm{DFS}^{\perp}}\), respectively. #### iii.1.1 Completely Positive Maps The original concept of a DFS, Eq. (1), poses a practical problem: the perfect initialization of a quantum system inside a DFS might be challenging in many cases. Therefore we introduce a generalized definition to relax this constraint: **Definition 1**: _Let the system Hilbert space \(\mathcal{H}_{S}\) decompose into a direct sum as \(\mathcal{H}_{S}=\mathcal{H}_{\mathrm{DFS}}\oplus\mathcal{H}_{\mathrm{DFS}^{\perp}}\), and partition the system state \(\rho_{S}\) accordingly into blocks, as in Eq. (15). Assume \(\rho_{\mathrm{DFS}}(0)=\mathcal{P}_{\mathrm{DFS}}\rho_{S}(0)\mathcal{P}_{\mathrm{DFS}}^{\dagger}\neq\mathbf{0}\). Then \(\mathcal{H}_{\mathrm{DFS}}\) is called decoherence-free iff the initial and final DFS-blocks of \(\rho_{S}\) are unitarily related:_ \[\rho_{\mathrm{DFS}}(t)=\mathbf{U}_{\mathrm{DFS}}\rho_{\mathrm{DFS}}(0)\mathbf{U}_{\mathrm{DFS}}^{\dagger}, \tag{18}\] _where \(\mathbf{U}_{\mathrm{DFS}}\) is a unitary matrix acting on \(\mathcal{H}_{\mathrm{DFS}}\)._ **Definition 2**: _Perfect initialization (DF subspaces): \(\rho_{2}=\mathbf{0}\) and \(\rho_{3}=\mathbf{0}\) in Eq. (15)._ **Definition 3**: _Imperfect initialization (DF subspaces): \(\rho_{2}\) and/or \(\rho_{3}\) in Eq. (15) are non-vanishing._ We prove in Appendix A.1: **Theorem 1**: _Assume imperfect initialization. Let \(\mathbf{U}\) be unitary, \(c_{\alpha}\) scalars satisfying \(\sum_{\alpha}|c_{\alpha}|^{2}=1,\) and \(\mathbf{B}_{\alpha}\) arbitrary operators on \(\mathcal{H}_{\mathrm{DFS}^{\perp}}\) satisfying \(\sum_{\alpha}\mathbf{B}_{\alpha}^{\dagger}\mathbf{B}_{\alpha}=\mathbf{I}_{\mathrm{DFS}^{\perp}}\). A necessary and sufficient condition for the existence of a DFS with respect to CP maps is that the Kraus operators have a matrix representation of the form_ \[\mathbf{E}_{\alpha}=\left(\begin{array}{cc}c_{\alpha}\mathbf{U}&\mathbf{0}\\ \mathbf{0}&\mathbf{B}_{\alpha}\end{array}\right). \tag{19}\]
\\tag{19}\\] This form is identical to the previous result (6), with the important distinction that due to the new definition of a DFS, Eq. (18), the theorem holds not just for states initialized perfectly into \\(\\mathcal{H}_{\\mathrm{DFS}}\\), but for arbitrary initial states. Note that unlike fault-tolerant QECC, where the initial state must be sufficiently close to a valid code state [19], here the initial state can be arbitrarily far from a DFS-code state, as long as the initial projection into the DFS is non-vanishing. These observations lead us to reconsider the original definition, wherein the system _is_ initialized inside the DFS. This situation admits more general Kraus operators. Specifically, we prove Appendix A.1 that: **Corollary 1**: _Assume perfect initialization. Then the DFS condition is:_ \\[\\mathbf{E}_{\\alpha}=\\left(\\begin{array}{cc}c_{\\alpha}\\mathbf{U}&\\mathbf{A} _{\\alpha}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right), \\tag{20}\\] _where \\(\\mathbf{U}\\) is unitary._ Note that due to the sum rule \\(\\sum_{\\alpha}\\mathbf{E}_{\\alpha}^{\\dagger}\\mathbf{E}_{\\alpha}=\\mathbf{I}\\) the otherwise arbitrary operators \\(\\mathbf{A}_{\\alpha}\\) and \\(\\mathbf{B}_{\\alpha}\\) satisfy the constraints (i) \\(\\sum_{\\alpha}\\mathbf{A}_{\\alpha}^{\\dagger}\\mathbf{A}_{\\alpha}+\\mathbf{B}_{ \\alpha}^{\\dagger}\\mathbf{B}_{\\alpha}=\\mathbf{I}_{\\mathrm{DFS}^{\\perp}}\\) and (ii) \\(\\sum_{\\alpha}c_{\\alpha}^{*}\\mathbf{A}_{\\alpha}=\\mathbf{0}\\), and where additionally the scalars \\(c_{\\alpha}\\) satisfy (iii) \\(\\sum_{\\alpha}|c_{\\alpha}|^{2}=1\\). In contrast to the diagonal form in the previous conditions (6) and (19), Eq. (20) allows for the existence of the off-diagonal term \\(\\mathbf{A}_{\\alpha}\\), which permits leakage from \\(\\mathcal{H}_{\\mathrm{DFS}^{\\perp}}\\) into \\(\\mathcal{H}_{\\mathrm{DFS}}\\). This more general form of the Kraus operators imply that a larger class of noise processes allow for the existence of DFSs, as compared to the previous condition (6).1 Footnote 1: We re-emphasize that Theorem 1 in [35] is compatible with Eq. (20); the latter generalizes the explicit matrix representation Eq. (4) given in that paper [condition (6) in the present paper], but does not invalidate Theorem 1 in [35]. #### ii.1.2 Unital Maps A unital (sometimes called bi-stochastic) channel is a CP map \\(\\mathbf{\\Phi}(\\rho)=\\sum_{\\alpha}\\mathbf{E}_{\\alpha}\\rho\\mathbf{E}_{\\mathrm{d}}^ {\\dagger}\\) that preserves the identity operator: \\(\\mathbf{\\Phi}(\\mathbf{I})=\\sum_{\\alpha}\\mathbf{E}_{\\alpha}\\mathbf{E}_{\\alpha} ^{\\dagger}=\\mathbf{I}\\). Consider the fixed points of \\(\\mathbf{\\Phi}\\), i.e., \\(\\mathrm{Fix}(\\mathbf{\\Phi})\\equiv\\{\\rho:\\mathbf{\\Phi}(\\rho)=\\rho\\}\\). Such states,which are invariant under \\(\\mathbf{\\Phi}\\), are clearly examples of DF-states of the corresponding channel. Recently it has been shown that the fixed point set of unital CP maps is the commutant of the algebra generated by Kraus operators [53]. In other words, if \\(\\mathcal{E}\\) is the set of all polynomials in \\(\\{\\mathbf{E}_{\\alpha}\\}\\), or \\(\\mathcal{E}=\\mathrm{Alg}\\{\\mathbf{E}_{\\alpha}\\}\\), then \\[\\mathrm{Fix}(\\mathbf{\\Phi})=\\{\\mathbf{T}\\in\\mathcal{B}(\\mathcal{H}):[ \\mathbf{T},\\mathcal{E}]=\\mathbf{0}\\}, \\tag{21}\\] where \\(\\mathcal{B}(\\mathcal{H})\\) is the (Banach) space of all bounded operators on the Hilbert space \\(\\mathcal{H}\\). 
In other words, the fixed points of a unital CP map, which are DF states, can alternatively be characterized as the commutant of \\(\\mathrm{Alg}\\{\\mathbf{E}_{\\alpha}\\}\\), i.e., the set \\(\\{\\mathbf{T}\\}\\). It is our purpose in this subsection to show that, under our generalized definition of DFSs, this characterization of DF states is sufficient but not necessary. Consider the generalized DFS-condition (20) applied to unital maps. We have \\[\\mathbf{\\Phi}(\\rho)=\\sum_{\\alpha}\\left(\\begin{array}{cc}c_{\\alpha}\\mathbf{ I}_{\\mathrm{DFS}}&\\mathbf{A}_{\\alpha}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right)\\rho\\left(\\begin{array}{cc}c _{\\alpha}^{*}\\mathbf{I}_{\\mathrm{DFS}}&\\mathbf{0}\\\\ \\mathbf{A}_{\\alpha}^{\\dagger}&\\mathbf{B}_{\\alpha}^{\\dagger}\\end{array}\\right). \\tag{22}\\] Unitality, \\(\\mathbf{\\Phi}(\\mathbf{I})=\\mathbf{I}\\), together with \\(\\sum_{\\alpha}|c_{\\alpha}|^{2}=1\\) implies: \\[\\left(\\begin{array}{cc}\\mathbf{I}_{\\mathrm{DFS}}+\\sum_{\\alpha}\\mathbf{A}_{ \\alpha}\\mathbf{A}_{\\alpha}^{\\dagger}&\\sum_{\\alpha}\\mathbf{A}_{\\alpha}\\mathbf{ B}_{\\alpha}^{\\dagger}\\\\ \\sum_{\\alpha}\\mathbf{B}_{\\alpha}\\mathbf{A}_{\\alpha}^{\\dagger}&\\sum_{\\alpha} \\mathbf{B}_{\\alpha}\\mathbf{B}_{\\alpha}^{\\dagger}\\end{array}\\right)=\\mathbf{I}. \\tag{23}\\] This implies the vanishing of the matrices \\(\\mathbf{A}_{\\alpha}\\), so that we are left with the Kraus operators in the simple block-diagonal form: \\[\\mathbf{E}_{\\alpha}=\\left(\\begin{array}{cc}c_{\\alpha}\\mathbf{I}&\\mathbf{0} \\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right), \\tag{24}\\] together with the additional constraint \\(\\sum_{\\alpha}\\mathbf{B}_{\\alpha}\\mathbf{B}_{\\alpha}^{\\dagger}=\\mathbf{I}_{ \\mathrm{DFS}^{\\perp}}\\) (which, in the present unital case, naturally supplements the previously derived normalization constraint \\(\\sum_{\\alpha}\\mathbf{B}_{\\alpha}^{\\dagger}\\mathbf{B}_{\\alpha}=\\mathbf{I}_{ \\mathrm{DFS}^{\\perp}}\\)). Thus, unitality restricts the class of Kraus operators, so that in fact we must assume the DFS-condition (19) rather than (20). This then means that we may consider the generalized DFS definition Eq. (18). Next, let us find the commutant of this class of Kraus operators. First, \\[\\mathrm{Alg}\\{\\mathbf{E}_{\\alpha}\\}=\\{\\left(\\begin{array}{cc}\\mathrm{poly} (c_{\\alpha})\\mathbf{I}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathrm{poly}(\\mathbf{B}_{\\alpha})\\end{array}\\right)\\}, \\tag{25}\\] where \\(\\mathrm{poly}(x)\\) denotes all possible polynomials in \\(x\\). Representing an arbitrary operator \\(\\mathbf{T}\\in\\mathcal{B}(\\mathcal{H})\\) in the form \\[\\mathbf{T}=\\left(\\begin{array}{cc}\\mathbf{L}&\\mathbf{M}\\\\ \\mathbf{N}&\\mathbf{P}\\end{array}\\right), \\tag{26}\\] it is simple to derive that the commutant of \\(\\mathrm{Alg}\\{\\mathbf{E}_{\\alpha}\\}\\) is the space of matrices \\(\\mathbf{T}\\) of the form \\[\\mathbf{T}=\\left(\\begin{array}{cc}\\mathbf{L}&\\mathbf{0}\\\\ \\mathbf{0}&c\\mathbf{I}\\end{array}\\right), \\tag{27}\\] where \\(\\mathbf{L}\\) and \\(c\\) are arbitrary. The aforementioned theorem [53] states that the fixed-point set of the channel, i.e., the DF states, coincides with this commutant. Of course, for \\(\\mathbf{T}\\) to be a proper quantum state it must be Hermitian and have unit trace, whence \\(c\\geq 0\\) and \\(\\mathbf{L}\\) is Hermitian. 
Subject to these constraints we see that the aforementioned theorem [53] gives a sufficient, but not necessary characterization of the allowed DF states. Indeed, the form (27) arises as a special case of our considerations, where we allow for \(\mathbf{T}\) to be a state with support in \(\mathcal{H}_{\mathrm{DFS}^{\perp}}\), but not of the most general form allowed by Eq. (18), which includes off-diagonal blocks. #### iii.1.3 Markovian Dynamics In the case of CP maps we are only interested in the output state and the intermediate-time states are ignored. Since, as is well known, Markovian dynamics is a special case of CP maps (e.g., [50; 47]), one may of course apply the results we have obtained above for general CP maps in the Markovian case as well, provided one is only interested in the state at the end of the Markovian channel. However, one may instead be interested in a different notion of decoherence-freeness, wherein the system remains DF throughout the entire evolution. Such a notion is more suited to experiments in which the final time is not a priori known. This is the notion we will pursue here in our treatment of continuous-time dynamics, in both the Markovian and non-Markovian cases. Thus, while we allow that the system not be fully initialized into the DFS, we require that the component that is, undergoes unitary dynamics _at all times_. Correspondingly, we define a DFS in the Markovian case as follows: **Definition 4**: _Let the system Hilbert space \(\mathcal{H}_{S}\) decompose into a direct sum as \(\mathcal{H}_{S}=\mathcal{H}_{\mathrm{DFS}}\oplus\mathcal{H}_{\mathrm{DFS}^{\perp}}\), and partition the system state \(\rho_{S}\) accordingly into blocks. Let \(\mathcal{P}_{\mathrm{DFS}}\) be a projector onto \(\mathcal{H}_{\mathrm{DFS}}\) and assume \(\rho_{\mathrm{DFS}}(0)\equiv\mathcal{P}_{\mathrm{DFS}}\rho_{S}(0)\mathcal{P}_{\mathrm{DFS}}^{\dagger}\neq\mathbf{0}\). Then \(\mathcal{H}_{\mathrm{DFS}}\) is called decoherence-free iff \(\rho_{\mathrm{DFS}}\) undergoes Schrödinger-like dynamics,_ \[\frac{\partial\rho_{\mathrm{DFS}}}{\partial t}=-i[\mathbf{H}_{\mathrm{DFS}},\rho_{\mathrm{DFS}}], \tag{28}\] _where \(\mathbf{H}_{\mathrm{DFS}}\) is a Hermitian operator._ Before presenting the DFS conditions, let us recall the quantum trajectories interpretation of Markovian dynamics [54; 55; 56]. Expanding Eq. (7) to first order in the short time-interval \(\tau\) yields the CP map \[\rho_{S}(t+\tau)=\sum_{\beta\geq 0}\mathbf{W}_{\beta}\rho(t)\mathbf{W}_{\beta}^{\dagger}, \tag{29}\] where \[\mathbf{W}_{0} = \mathbf{I}-i\tau\mathbf{H}_{S}-\frac{\tau}{2}\sum_{\alpha}\mathbf{F}_{\alpha}^{\dagger}\mathbf{F}_{\alpha}, \tag{30}\] \[\mathbf{W}_{\beta>0} = \sqrt{\tau}\mathbf{F}_{\beta}, \tag{31}\] and to the same order we also have the normalization condition \[\sum_{\beta\geq 0}\mathbf{W}_{\beta}^{\dagger}\mathbf{W}_{\beta}=\mathbf{I}. \tag{32}\] Thus the Lindblad equation has been recast as a Kraus operator sum (2), but only to first order in \(\tau\), the coarse-graining time scale for which the Markovian approximation is valid [50]. This implies a measurement interpretation, wherein the system state is \(\rho_{S}(t+\tau)=\mathbf{W}_{\beta}\rho(t)\mathbf{W}_{\beta}^{\dagger}/p_{\beta}\) (to first-order in \(\tau\)) with probability \(p_{\beta}=\mathrm{Tr}[\mathbf{W}_{\beta}\rho(t)\mathbf{W}_{\beta}^{\dagger}]\).
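The first-order decomposition (29)-(32) can be checked directly. The sketch below does so for an arbitrary single-qubit pure-dephasing example (our own choice of \(\mathbf{H}_{S}\) and \(\mathbf{F}\)): the Kraus-reconstructed state agrees with one Euler step of the Lindblad equation up to \(O(\tau^{2})\), and the normalization condition holds to the same order.

```python
# Check of the short-time Kraus form (29)-(32) for a single qubit with
# H = (omega/2) sigma_z and one Lindblad operator F = sqrt(gamma) sigma_z
# (pure dephasing).  A toy example, only meant to exhibit the O(tau) agreement.
import numpy as np

sz = np.diag([1.0, -1.0]).astype(complex)
omega, gamma, tau = 1.0, 0.3, 1e-3
H = 0.5 * omega * sz
F = np.sqrt(gamma) * sz

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = F @ rho @ F.conj().T - 0.5 * (F.conj().T @ F @ rho + rho @ F.conj().T @ F)
    return comm + diss

I = np.eye(2, dtype=complex)
W0 = I - 1j * tau * H - 0.5 * tau * (F.conj().T @ F)
W1 = np.sqrt(tau) * F

psi = np.array([np.cos(0.3), np.sin(0.3) * np.exp(0.7j)])
rho = np.outer(psi, psi.conj())

rho_kraus = W0 @ rho @ W0.conj().T + W1 @ rho @ W1.conj().T
rho_euler = rho + tau * lindblad_rhs(rho)

print(np.max(np.abs(rho_kraus - rho_euler)))                    # small, O(tau^2)
print(np.max(np.abs(W0.conj().T @ W0 + W1.conj().T @ W1 - I)))  # also O(tau^2)
```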
This happens because the bath functions as a probe coupled to the system while being subjected to a quasi-continuous series of measurements at each infinitesimal time interval \\(\\tau\\)[33]. The result is the well-known quantum jump process [54; 55; 56], wherein the measurement operators are \\(\\mathbf{W}_{0}\\approx\\exp(-i\\tau\\mathbf{H}_{\\mathrm{c}})\\), the \"conditional\" evolution, generated by the non-Hermitian \"Hamiltonian\" \\[\\mathbf{H}_{\\mathrm{c}}\\equiv\\mathbf{H}_{S}-\\frac{i}{2}\\sum_{\\alpha}\\mathbf{F} _{\\alpha}^{\\dagger}\\mathbf{F}_{\\alpha}, \\tag{33}\\] and \\(\\sqrt{\\tau}\\mathbf{F}_{\\beta}\\) (the \"jump\"). Note that \\(\\mathbf{H}_{S}\\) is here meant to include all renormalization effects due to the system-bath interaction, e.g., a possible Lamb shift (see, e.g., Ref. [50]). By a simple algebraic rearrangement one can rewrite the Lindblad equation in the following form: \\[\\dot{\\rho}_{S}=-i(\\mathbf{H}_{\\mathrm{c}}\\rho_{S}-\\rho_{S}\\mathbf{H}_{ \\mathrm{c}}^{\\dagger})+\\sum_{\\alpha}\\mathbf{F}_{\\alpha}\\rho_{S}\\mathbf{F}_{ \\alpha}^{\\dagger}, \\tag{34}\\] where according to the above interpretation the first term generates non-unitary dynamics, while the second is responsible for the quantum jumps. Now recall the Markovian DFS condition derived in Refs. [6; 38]: the Lindblad operators should have trivial action on DF-states, as in Eq. (8), i.e., \\(\\mathbf{F}_{\\alpha}|i\\rangle=c_{\\alpha}|i\\rangle\\). Viewed from the perspective of the quantum-jump picture of Markovian dynamics, this implies that the jump operators do not alter a DF-state, i.e., the term \\(\\sum_{\\alpha}\\mathbf{F}_{\\alpha}\\rho_{S}\\mathbf{F}_{\\alpha}^{\\dagger}\\) in Eq. (34) transforms \\(\\rho_{S}\\) to \\(\\sum_{\\alpha}|c_{\\alpha}|^{2}\\rho_{S}\\) and thus has trivial action. Given Eq. (8), the Lindblad operators can be written in block-form as follows [Eq. (9)]: \\[\\mathbf{F}_{\\alpha}=\\left(\\begin{array}{cc}c_{\\alpha}\\mathbf{I}&\\mathbf{A} _{\\alpha}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right), \\tag{35}\\] with the blocks on the diagonal corresponding once again to operators restricted to \\(\\mathcal{H}_{\\mathrm{DFS}}\\) and \\(\\mathcal{H}_{\\mathrm{DFS}^{\\perp}}\\). Note the appearance of the off-diagonal block \\(\\mathbf{A}_{\\alpha}\\) mixing \\(\\mathcal{H}_{\\mathrm{DFS}}\\) and \\(\\mathcal{H}_{\\mathrm{DFS}^{\\perp}}\\); its presence is permitted since the DFS condition (8) gives no information about matrix elements of the form \\(\\langle i|\\mathbf{F}_{\\alpha}|j^{\\perp}\\rangle\\), with \\(|i\\rangle\\in\\mathcal{H}_{\\mathrm{DFS}}\\) and \\(|j^{\\perp}\\rangle\\in\\mathcal{H}_{\\mathrm{DFS}^{\\perp}}\\). As observed in [6], one should in addition require that \\(\\mathbf{H}_{S}\\) does not mix DF states with non-DF ones. It turns out that this condition is compatible with the case that the DF state is imperfectly initialized (Definition 3). In this case, as shown in Appendix A.2, the following theorem holds: **Theorem 2**: _Assume imperfect initialization. 
Then a subspace \(\mathcal{H}_{\mathrm{DFS}}\) of the total Hilbert space \(\mathcal{H}\) is decoherence-free with respect to Markovian dynamics iff the Lindblad operators \(\mathbf{F}_{\alpha}\) and the system Hamiltonian \(\mathbf{H}_{S}\) assume the block-diagonal form_ \[\mathbf{H}_{S}=\left(\begin{array}{cc}\mathbf{H}_{\mathrm{DFS}}&\mathbf{0}\\ \mathbf{0}&\mathbf{H}_{\mathrm{DFS}^{\perp}}\end{array}\right),\;\;\mathbf{F}_{\alpha}=\left(\begin{array}{cc}c_{\alpha}\mathbf{I}&\mathbf{0}\\ \mathbf{0}&\mathbf{B}_{\alpha}\end{array}\right), \tag{36}\] _where \(\mathbf{H}_{\mathrm{DFS}}\) and \(\mathbf{H}_{\mathrm{DFS}^{\perp}}\) are Hermitian, \(c_{\alpha}\) are scalars, and \(\mathbf{B}_{\alpha}\) are arbitrary operators on \(\mathcal{H}_{\mathrm{DFS}^{\perp}}\)._ But, as is clear from the quantum jumps picture, in particular Eqs. (33),(34), there also exists a non-Hermitian term, which appears not to be addressed properly by merely restricting \(\mathbf{H}_{S}\). Indeed, this is the case if one demands that the system state is perfectly initialized into the DFS (Definition 2). As shown in Appendix A.2, the full condition on the Hamiltonian term then is: \[\langle i|(-i\mathbf{H}_{S}+\frac{1}{2}\sum_{\alpha}\mathbf{F}_{\alpha}^{\dagger}\mathbf{F}_{\alpha})|k^{\perp}\rangle=0,\quad\forall i,k^{\perp}, \tag{37}\] where \(|i\rangle\in\mathcal{H}_{\mathrm{DFS}}\), \(|k^{\perp}\rangle\in\mathcal{H}_{\mathrm{DFS}^{\perp}}\). Applying the DFS conditions (9),(37), the Lindblad equation (7) reduces to the Schrödinger-like equation (28). Combining these results, we have: **Theorem 3**: _Assume perfect initialization. Then a subspace \(\mathcal{H}_{\mathrm{DFS}}\) of the total Hilbert space \(\mathcal{H}\) is decoherence-free with respect to Markovian dynamics iff the Lindblad operators \(\mathbf{F}_{\alpha}\) and Hamiltonian \(\mathbf{H}_{S}\) satisfy_ \[\mathbf{F}_{\alpha} = \left(\begin{array}{cc}c_{\alpha}\mathbf{I}&\mathbf{A}_{\alpha}\\ \mathbf{0}&\mathbf{B}_{\alpha}\end{array}\right) \tag{38}\] \[\mathcal{P}_{\mathrm{DFS}}\mathbf{H}_{S}\mathcal{P}_{\mathrm{DFS}^{\perp}}^{\dagger} = -\frac{i}{2}\sum_{\alpha}c_{\alpha}^{*}\mathbf{A}_{\alpha}, \tag{39}\] _where \(\mathcal{P}_{\mathrm{DFS}^{\perp}}\) is the projector onto \(\mathcal{H}_{\mathrm{DFS}^{\perp}}\) analogous to Eq. (16), i.e., Eq. (39) constrains the off-diagonal (DFS, DFS\({}^{\perp}\)) block of \(\mathbf{H}_{S}\), in accordance with Eq. (37)._ Note that \(\mathbf{H}_{S}\) (which, again, includes the Lamb shift) must satisfy a more stringent constraint than previously noted due to the extra condition on its off-diagonal block. This has implications in examples of practical interest, as we next illustrate. #### iii.1.4 Example (significance of the new condition on the off-diagonal blocks of \(\mathbf{H}_{S}\)) We present an example meant to demonstrate how the new constraint, Eq. (37) [or, equivalently, Eq. (39)], may lead to a different prediction than the old constraint, that matrix elements of the type \(\langle j^{\perp}|\mathbf{H}_{S}|i\rangle\), with \(|i\rangle\in\mathcal{H}_{\mathrm{DFS}}\) and \(|j^{\perp}\rangle\in\mathcal{H}_{\mathrm{DFS}^{\perp}}\), should vanish. Consider a system of three qubits interacting with a common bath. The system is under the influence of the bath via: 1) Spontaneous emission from the highest level \(|111\rangle\) to the lower levels, 2) Dephasing of the first and the second qubits. For simplicity we set the system and bath Hamiltonians, \(\mathbf{H}_{S}\) and \(\mathbf{H}_{B}\), to zero.
The total Hamiltonian then contains only the system-bath interaction: \[\mathbf{H}_{I} = \lambda_{1}(\sigma_{1}^{z}+\sigma_{2}^{z})\otimes\mathbf{B}+\lambda_{2}[(\sigma_{1}^{-}+\sigma_{2}^{-}+\sigma_{3}^{-})\otimes\mathbf{b}^{\dagger} \tag{40}\] \[+(\sigma_{1}^{+}+\sigma_{2}^{+}+\sigma_{3}^{+})\otimes\mathbf{b}],\] where \[\sigma_{1}^{-}=|001\rangle\langle 111|,\ \sigma_{2}^{-}=|010\rangle\langle 111|,\ \sigma_{3}^{-}=|100\rangle\langle 111|, \tag{41}\] and \(\mathbf{b}\) is a bosonic annihilation operator. The corresponding Lindblad equation may be derived, e.g., using the method developed in Ref. [50]. It may then be shown that \[\mathcal{L}[\rho_{S}]=\frac{1}{2}\sum_{i=1}^{2}\left([\mathbf{F}_{i},\rho_{S}\mathbf{F}_{i}^{\dagger}]+[\mathbf{F}_{i}\rho_{S},\mathbf{F}_{i}^{\dagger}]\right), \tag{42}\] where the Lindblad operators are \[\mathbf{F}_{1} = \sqrt{d_{1}}(u_{11}\mathbf{K}_{1}+u_{12}\mathbf{K}_{2}),\] \[\mathbf{F}_{2} = \sqrt{d_{2}}(u_{21}\mathbf{K}_{1}+u_{22}\mathbf{K}_{2}). \tag{43}\] Here \(\mathbf{K}_{1}=\sigma_{1}^{z}+\sigma_{2}^{z}\), \(\mathbf{K}_{2}=\sigma_{1}^{-}+\sigma_{2}^{-}+\sigma_{3}^{-}\), and \(\{d_{1},d_{2}\}\) are the eigenvalues of the Hermitian matrix \(\mathbf{A}=[a_{ij}]\) of coefficients in the pre-diagonalized Lindblad equation, with the diagonalizing matrix denoted \(\mathbf{U}=[u_{ij}]\). Now let us find the DFS conditions under the assumption of perfect initialization. The previously-derived Eq. (8) yields that \(\{|000\rangle,|001\rangle\}\) is a DFS, since \(\mathbf{K}_{2}\) annihilates these states, and they are both eigenstates of \(\mathbf{K}_{1}\) with an eigenvalue of \(+2\): \[\mathbf{F}_{1}|000\rangle = 2\sqrt{d_{1}}u_{11}|000\rangle,\quad\mathbf{F}_{2}|000\rangle=2\sqrt{d_{2}}u_{21}|000\rangle\] \[\mathbf{F}_{1}|001\rangle = 2\sqrt{d_{1}}u_{11}|001\rangle,\quad\mathbf{F}_{2}|001\rangle=2\sqrt{d_{2}}u_{21}|001\rangle. \tag{44}\] However, the new condition (37) tightens the situation. Choosing as representatives the states \(|001\rangle\in\mathcal{H}_{\mathrm{DFS}}\) and \(|111\rangle\in\mathcal{H}_{\mathrm{DFS}^{\perp}}\), we find from Eq. (37): \[\langle 001|\sum_{\alpha=1}^{2}\mathbf{F}_{\alpha}^{\dagger}\mathbf{F}_{\alpha}|111\rangle = 2d_{1}u_{11}^{*}u_{12}+2d_{2}u_{21}^{*}u_{22} \tag{45}\] \[= 0.\] Since \(u_{11}^{*}u_{12}+u_{21}^{*}u_{22}=0\) (from unitarity of \(\mathbf{U}\)), we see that the new condition imposes the extra symmetry constraint \(d_{1}=d_{2}\). This example illustrates the importance of the new condition, Eq. (37). ### Noiseless Subsystems We now consider again the more general setting of subsystems, rather than subspaces. #### iii.2.1 Completely Positive Maps Suppose the system Hilbert space can be decomposed as \(\mathcal{H}_{S}=\mathcal{H}_{\mathrm{NS}}\otimes\mathcal{H}_{\mathrm{in}}\oplus\mathcal{H}_{\mathrm{out}}\), where \(\mathcal{H}_{\mathrm{NS}}\) is the factor in which quantum information will be stored. The subspace \(\mathcal{H}_{\mathrm{out}}\) may itself have a tensor product structure, i.e., additional factors similar to \(\mathcal{H}_{\mathrm{NS}}\) may be contained in it [as in Eq. (12)], but we shall not be interested in those other factors since the direct sum structure implies that different noiseless factors cannot be used simultaneously in a coherent manner.
As in the DF subspace case considered above, we allow for the most general situation of a system that is _not_ necessarily initially DF. To make this notion precise, let us generalize the definitions of the projector \\(\\mathcal{P}_{\\mathrm{DFS}}\\) and projection operators \\(\\mathcal{P}_{\\mathrm{d}},\\mathcal{P}_{\\mathrm{d}^{\\perp}}\\) given in the DFS case, as follows: \\[\\mathcal{P}_{\\mathrm{NS-in}}=\\left(\\begin{array}{cc}\\mathbf{I}_{\\mathrm{NS}}\\otimes\\mathbf{I}_{\\mathrm{in}}&\\mathbf{0}\\end{array}\\right), \\tag{46}\\] \\[\\mathcal{P}_{\\mathrm{d}}=\\left(\\begin{array}{cc}\\mathbf{I}_{\\mathrm{NS}}\\otimes\\mathbf{I}_{\\mathrm{in}}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{0}\\end{array}\\right),\\quad\\mathcal{P}_{\\mathrm{d}^{\\perp}}=\\left(\\begin{array}{cc}\\mathbf{0}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{I}_{\\mathrm{out}}\\end{array}\\right) \\tag{47}\\] There is no risk of confusion in using the DFS notation, \\(\\mathcal{P}_{\\mathrm{d}}\\), for the NS case, as the DFS case is obtained when \\(\\mathbf{I}_{\\mathrm{in}}\\) is a scalar. The system density matrix takes the corresponding block form \\[\\rho_{S}=\\left(\\begin{array}{cc}\\rho_{\\mathrm{NS-in}}&\\rho^{\\prime}\\\\ \\rho^{\\prime\\dagger}&\\rho_{\\mathrm{out}}\\end{array}\\right). \\tag{48}\\] **Definition 5**: _Let the system Hilbert space \\(\\mathcal{H}_{S}\\) decompose as \\(\\mathcal{H}_{S}=\\mathcal{H}_{\\mathrm{NS}}\\otimes\\mathcal{H}_{\\mathrm{in}}\\oplus\\mathcal{H}_{\\mathrm{out}}\\), and partition the system state \\(\\rho_{S}\\) accordingly into blocks, as in Eq. (48). Assume \\(\\rho_{\\mathrm{NS-in}}(0)=\\mathcal{P}_{\\mathrm{NS-in}}\\rho_{S}(0)\\mathcal{P}_{\\mathrm{NS-in}}^{\\dagger}\\neq\\mathbf{0}\\). Then the factor \\(\\mathcal{H}_{\\mathrm{NS}}\\) is called a decoherence-free (or noiseless) subsystem if the following condition holds:_ \\[\\mathrm{Tr}_{\\mathrm{in}}\\{\\rho_{\\mathrm{NS-in}}(t)\\}=\\mathbf{U}_{\\mathrm{NS}}\\mathrm{Tr}_{\\mathrm{in}}\\{\\rho_{\\mathrm{NS-in}}(0)\\}\\mathbf{U}_{\\mathrm{NS}}^{\\dagger}, \\tag{49}\\] _where \\(\\mathbf{U}_{\\mathrm{NS}}\\) is a unitary matrix acting on \\(\\mathcal{H}_{\\mathrm{NS}}\\)._ **Definition 6**: _Perfect initialization (DF subsystems): \\(\\rho^{\\prime}=\\mathbf{0}\\) and \\(\\rho_{\\mathrm{out}}=\\mathbf{0}\\) in Eq. (48)._ **Definition 7**: _Imperfect initialization (DF subsystems): \\(\\rho^{\\prime}\\) and/or \\(\\rho_{\\mathrm{out}}\\) in Eq. (48) are non-vanishing._ According to Definition 5, a quantum state encoded into the \\(\\mathcal{H}_{\\mathrm{NS}}\\) factor at some time \\(t\\) is unitarily related to the \\(t=0\\) state. The factor \\(\\mathcal{H}_{\\mathrm{in}}\\) is unimportant, and hence is traced over. Clearly, a NS reduces to a DF subspace when \\(\\mathcal{H}_{\\mathrm{in}}\\) is one-dimensional, i.e., when \\(\\mathcal{H}_{\\mathrm{in}}=\\mathbb{C}\\). We now present the necessary and sufficient conditions for a NS and later we show that the algebra-dependent definition, Eq. (11), is a special case of this generalized form. In stating constraints on the form of the Kraus operators, below, it is understood that in addition they must satisfy the sum rule \\(\\sum_{\\alpha}\\mathbf{E}_{\\alpha}^{\\dagger}\\mathbf{E}_{\\alpha}=\\mathbf{I}\\), which we do not specify explicitly. **Theorem 4**: _Assume imperfect initialization. 
Then a subsystem \\(\\mathcal{H}_{\\rm NS}\\) in the decomposition \\(\\mathcal{H}_{S}=\\mathcal{H}_{\\rm NS}\\otimes\\mathcal{H}_{\\rm in}\\oplus\\mathcal{H} _{\\rm out}\\) is decoherence-free (or noiseless) with respect to CP maps iff the Kraus operators have the matrix representation_ \\[\\mathbf{E}_{\\alpha}=\\left(\\begin{array}{cc}\\mathbf{U}\\otimes\\mathbf{C}_{ \\alpha}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right) \\tag{50}\\] **Corollary 2**: _Assume perfect initialization. Then the Kraus operators have the relaxed form_ \\[\\mathbf{E}_{\\alpha}=\\left(\\begin{array}{cc}\\mathbf{U}\\otimes\\mathbf{C}_{ \\alpha}&\\mathbf{A}_{\\alpha}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right) \\tag{51}\\] We note that this result has been recently derived from an operator quantum error correction perspective in Ref. [20]. Note again that there is a trade-off between the quality of preparation and the amount of leakage that can be tolerated, a fact that was not noted previously for subsystems, and has important experimental implications. As discussed above, the original definition of a NS was based on representation theory of the error algebra. Here we have argued in favor of a more comprehensive definition, based on the quantum channel picture. Let us now state explicitly why our result is more general. Indeed, in the algebraic approach one arrives at the representation (13) of the Kraus operators, namely \\(\\mathbf{E}_{\\alpha}=\\bigoplus_{J\\in\\mathcal{J}}\\mathbf{I}_{n_{J}}\\otimes \\mathbf{G}_{\\alpha,J}\\). However, it is clear from Eq. (51) that our channel-based approach leads to a form for the Kraus operators that includes this latter form as a special case, since it allows for the off-diagonal block \\(\\mathbf{A}_{\\alpha}\\). The representation (13) of the Kraus operators does agree with Eq. (50), but in that case we do not need to assume initialization inside the NS, so that again, our result is more general than the algebraic one. #### iii.2.2 Markovian Dynamics As in the CP-map based definition of a NS, we need to trace out the \\(\\mathcal{H}_{\\rm in}\\) factor, here in order to obtain the dynamical equation for the subsystem factor: \\[\\frac{\\partial\\rho_{\\rm NS}}{\\partial t} = \\frac{\\partial\\mathrm{Tr}_{\\rm in}\\{\\mathcal{P}_{\\rm NS-in}\\rho_{ S}\\mathcal{P}_{\\rm NS-in}^{\\dagger}\\}}{\\partial t} \\tag{52}\\] \\[= \\mathrm{Tr}_{\\rm in}\\{\\frac{\\partial\\mathcal{P}_{\\rm NS-in}\\rho_ {S}\\mathcal{P}_{\\rm NS-in}^{\\dagger}}{\\partial t}\\}\\] \\[= \\mathrm{Tr}_{\\rm in}\\{\\mathcal{P}_{\\rm NS-in}(-\\frac{i}{\\hbar}[ \\mathbf{H}_{S},\\rho_{S}]+\\frac{1}{2}\\sum_{\\alpha}2\\mathbf{F}_{\\alpha}\\rho_{S} \\mathbf{F}_{\\alpha}^{\\dagger}\\] \\[-\\mathbf{F}_{\\alpha}^{\\dagger}\\mathbf{F}_{\\alpha}\\rho_{S}-\\rho_{ S}\\mathbf{F}_{\\alpha}^{\\dagger}\\mathbf{F}_{\\alpha})\\mathcal{P}_{\\rm NS-in}^{\\dagger}\\}.\\] **Definition 8**: _The factor \\(\\mathcal{H}_{\\rm NS}\\) is called a decoherence-free (or noiseless) subsystem under Markovian dynamics if a state subject to Eq. (52), undergoes continuous unitary evolution:_ \\[\\dot{\\rho}_{\\rm NS}=i[\\mathbf{M},\\rho_{\\rm NS}], \\tag{53}\\] _where \\(\\mathbf{M}\\) is Hermitian._ Clearly, again, a NS reduces to a DF subspace when \\(\\mathcal{H}_{\\rm in}\\) is one-dimensional, i.e., when \\(\\mathcal{H}_{\\rm in}=\\mathbb{C}\\). Our goal is to find necessary and sufficient conditions such that Eq. (52) leads to Eq. (53). 
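Although it concerns the CP-map setting of Theorem 4 rather than the Markovian dynamics just introduced, a direct numerical test is instructive at this point. The sketch below (the block dimensions, the unitary \\(\\mathbf{U}\\) and the random channels acting on \\(\\mathcal{H}_{\\mathrm{in}}\\) and \\(\\mathcal{H}_{\\mathrm{out}}\\) are illustrative choices of ours) assembles Kraus operators of the form (50), applies the resulting channel to a random, imperfectly initialized state, and verifies the unitarity condition of Definition 5:

```python
import numpy as np

rng = np.random.default_rng(1)
d_ns, d_in, d_out = 2, 3, 2                      # dim(H_NS), dim(H_in), dim(H_out)
K = 4                                            # number of Kraus operators

def rand_kraus(d, k):
    """k Kraus operators on a d-dim space with sum_a C_a^dag C_a = I (random isometry)."""
    Q, _ = np.linalg.qr(rng.normal(size=(k*d, d)) + 1j*rng.normal(size=(k*d, d)))
    return [Q[a*d:(a+1)*d, :] for a in range(k)]

def rand_unitary(d):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d)))
    return Q

U, Cs, Bs = rand_unitary(d_ns), rand_kraus(d_in, K), rand_kraus(d_out, K)
# Kraus operators of the form of Eq. (50): E_a = (U tensor C_a) direct-sum B_a
E = [np.block([[np.kron(U, C), np.zeros((d_ns*d_in, d_out))],
               [np.zeros((d_out, d_ns*d_in)), B]]) for C, B in zip(Cs, Bs)]
assert np.allclose(sum(e.conj().T @ e for e in E), np.eye(d_ns*d_in + d_out))

# Arbitrary (imperfectly initialized) input state
M = rng.normal(size=(d_ns*d_in + d_out,)*2) + 1j*rng.normal(size=(d_ns*d_in + d_out,)*2)
rho = M @ M.conj().T
rho /= np.trace(rho)

def ns_state(r):
    """Tr_in of the NS-in block, for the ordering H_NS tensor H_in direct-sum H_out."""
    block = r[:d_ns*d_in, :d_ns*d_in].reshape(d_ns, d_in, d_ns, d_in)
    return np.einsum('aibi->ab', block)

rho_out = sum(e @ rho @ e.conj().T for e in E)
lhs = ns_state(rho_out)
rhs = U @ ns_state(rho) @ U.conj().T             # Definition 5
print("max deviation from unitary NS evolution:", np.abs(lhs - rhs).max())
```

The deviation vanishes to machine precision for any input state. Repeating the test with the relaxed form (51), i.e., with nonvanishing \\(\\mathbf{A}_{\\alpha}\\), gives the same unitarity provided the \\(\\rho^{\\prime}\\) and \\(\\rho_{\\mathrm{out}}\\) blocks of the input vanish, in accordance with Corollary 2.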
In the case of perfect initialization, since it does not involve \\(\\mathcal{H}_{\\rm out}\\), Eq. (52) is meaningful only if the system remains in the subspace \\(\\mathcal{H}_{\\rm NS}\\otimes\\mathcal{H}_{\\rm in}\\). An analysis of Eq. (52) reveals that this leakage-prevention goal is achieved by imposing the constraints stated in the following theorem, proven in Appendix A.2: **Theorem 5**: _Assume perfect initialization. Then a subsystem \\(\\mathcal{H}_{\\rm NS}\\) in the decomposition \\(\\mathcal{H}_{S}=\\mathcal{H}_{\\rm NS}\\otimes\\mathcal{H}_{\\rm in}\\oplus \\mathcal{H}_{\\rm out}\\) is decoherence-free (or noiseless) with respect to Markovian dynamics iff the Lindblad operators have the matrix representation_ \\[\\mathbf{F}_{\\alpha}\\mathbf{=}\\left(\\begin{array}{cc}\\mathbf{I}_{\\rm NS} \\otimes\\mathbf{C}_{\\alpha}&\\mathbf{A}_{\\alpha}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right) \\tag{54}\\] _and the system Hamiltonian (including a possible Lamb shift) has the matrix representation_ \\[\\mathbf{H}_{S}\\mathbf{=}\\left(\\begin{array}{cc}\\mathbf{H}_{\\rm NS}\\otimes \\mathbf{I}_{\\rm in}\\mathbf{+}\\mathbf{I}_{\\rm NS}\\otimes\\mathbf{H}_{\\rm in}& \\mathbf{H}_{2}\\\\ \\mathbf{H}_{2}^{\\dagger}&\\mathbf{H}_{3}\\end{array}\\right) \\tag{55}\\] _where \\(\\mathbf{H}_{\\rm in}\\) is constant along its diagonal, and where_ \\[\\mathbf{H}_{2}=-\\frac{i}{2}\\sum_{\\alpha}\\left(\\mathbf{I}_{\\rm NS}\\otimes \\mathbf{C}_{\\alpha}^{\\dagger}\\right)\\mathbf{A}_{\\alpha}. \\tag{56}\\] Eqs. (55),(56) are new additional constraints on the Lindblad operators (compared to Ref. [10]) which must be satisfied in order to find a NS. If, on the other hand, we allow for imperfect initialization, we find a different set of conditions: **Theorem 6**: _Assume imperfect initialization. Then a subsystem \\(\\mathcal{H}_{\\rm NS}\\) in the decomposition \\(\\mathcal{H}_{S}=\\mathcal{H}_{\\rm NS}\\otimes\\mathcal{H}_{\\rm in}\\oplus\\mathcal{H }_{\\rm out}\\) is decoherence-free (or noiseless) with respect to Markovian dynamics iff the Lindblad operators have the matrix representation_ \\[\\mathbf{F}_{\\alpha}=\\left(\\begin{array}{cc}\\mathbf{I}_{\\rm NS}\\otimes \\mathbf{C}_{\\rm in}^{\\alpha}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right), \\tag{57}\\] _and the system Hamiltonian (including a possible Lamb shift) has the matrix representation_ \\[\\mathbf{H}=\\left(\\begin{array}{cc}\\mathbf{H}_{\\rm NS}\\otimes\\mathbf{I}_{\\rm in }\\mathbf{+}\\mathbf{I}_{\\rm NS}\\otimes\\mathbf{H}_{\\rm in}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{H}_{\\rm out}\\end{array}\\right). \\tag{58}\\] ## IV Performance of Quantum Algorithms over Imperfectly Initialized DFSs In this section we discuss applications of our generalized formulation of DFSs to quantum algorithms. As mentioned above, a major obstacle to exploiting decoherence-free methods is the unrealistic assumption of perfect initialization inside a DFS. Removing this constraint enables us to perform algorithms without perfect initialization, while not suffering from information loss. We separate the role of an initialization error in the algorithm (i.e., starting from an imperfect input state), from the effect of noise in the output due to environment-induced decoherence. Thus we first quantify an error entirely due to incorrect initialization (\\(\\Delta_{\\text{leak}}\\) below), then compare the DFS situations prior and post this work, by relating them to \\(\\Delta_{\\text{leak}}\\). 
1) Initialization error in the absence of decoherence: Assume no decoherence at all, and that the initial state is \\[\\rho^{\\text{actual}}(0)\\!\\!=\\!\\left(\\begin{array}{cc}\\rho_{1}&\\rho_{2}\\\\ \\rho_{2}^{\\dagger}&\\rho_{3}\\end{array}\\right), \\tag{59}\\] while the ideal input state is fully in the DFS: \\[\\rho^{\\text{ideal}}(0)\\!\\!=\\!\\left(\\begin{array}{cc}\\rho&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{0}\\end{array}\\right). \\tag{60}\\] Further assume that the algorithm is implemented via unitary transformations \\(\\mathbf{U}=\\mathbf{U}_{\\text{DFS}}\\oplus\\mathbf{I}_{\\text{DFS}^{\\perp}}\\), applied to \\(\\mathcal{H}_{\\text{DFS}}\\). In general this will lead to an output error in the algorithm, which can be quantified as \\[\\Delta_{\\text{leak}} \\equiv ||\\mathbf{U}\\rho^{\\text{actual}}(0)\\mathbf{U}^{\\dagger}\\!-\\!\\mathbf{U}\\rho^{\\text{ideal}}(0)\\mathbf{U}^{\\dagger}|| \\tag{61}\\] \\[= \\left\\|\\left(\\begin{array}{cc}\\mathbf{U}_{\\text{DFS}}(\\rho_{1}-\\rho)\\mathbf{U}_{\\text{DFS}}^{\\dagger}&\\mathbf{U}_{\\text{DFS}}\\rho_{2}\\\\ \\rho_{2}^{\\dagger}\\mathbf{U}_{\\text{DFS}}^{\\dagger}&\\rho_{3}\\end{array}\\right)\\right\\|,\\] where \\(||\\cdot||\\) denotes an appropriate operator norm. This error appears not because of decoherence but because of an erroneous initial state. This is a generic situation in quantum algorithms, which is not special to the DFS case: Eq. (59) is generic in the sense that one can view the DFS block as the computational subspace, with the other blocks representing additional levels (e.g., a qubit which is embedded in a larger Hilbert space). Methods for correcting such deviations from the ideal result exist (leakage elimination [57; 58]), but are beyond the scope of this paper. 2) Initialization error in the presence of decoherence: Assume that the input state is imperfectly initialized, as in Eq. (59), and in addition there is decoherence, i.e., \\[\\rho^{\\text{actual}}(t)=\\sum_{\\alpha}\\mathbf{E}_{\\alpha}(t)\\rho^{\\text{actual}}(0)\\mathbf{E}_{\\alpha}^{\\dagger}(t), \\tag{62}\\] with the Kraus operators given by Eq. (19) [the form compatible with decoherence-free evolution starting from \\(\\rho^{\\text{actual}}(0)\\)]. Prior to our work it was believed that for an imperfect initial state of the form \\(\\rho^{\\text{actual}}(0)\\), leakage due to the components \\(\\rho_{2}\\) and \\(\\rho_{3}\\) would cause non-unitary evolution of the DFS component. Thus instead of an error \\(\\mathbf{U}_{\\text{DFS}}(\\rho_{1}-\\rho)\\mathbf{U}_{\\text{DFS}}^{\\dagger}\\) in the DFS block of Eq. (61), it was believed that one had \\(\\mathcal{E}(\\rho_{1})-\\mathbf{U}_{\\text{DFS}}\\rho\\mathbf{U}_{\\text{DFS}}^{\\dagger}\\) where \\(\\mathcal{E}\\) is an appropriate superoperator component. This would have led to a reduced algorithmic fidelity, \\(\\Delta_{\\text{leak}}^{\\prime}<\\Delta_{\\text{leak}}\\). However, we now know that even for an initial state of the form \\(\\rho^{\\text{actual}}(0)\\), when the Kraus operators are given by Eq. (19) the actual algorithmic fidelity is still given by \\(\\Delta_{\\text{leak}}\\), since in fact the evolution of the DFS block is still unitary. The above arguments apply when imperfect initialization is unavoidable _but one knows the component_\\(\\rho_{1}\\). A worse (though perhaps more typical) scenario is one where not only is imperfect initialization unavoidable, but one does not even know the component \\(\\rho_{1}\\). 
In this case the above arguments apply in the context of algorithms that allow arbitrary input states. Almost all the important examples of quantum algorithms are now known to have a flexibility of this type: Grover's algorithm [22] was the first to be generalized to allow for arbitrary input states, first pure [24; 25; 26], then mixed [27]; Shor's algorithm [21] can run efficiently with a single pure qubit and all other qubits in an arbitrary mixed state [28]; a similar result applies to a class of interesting physics problems, such as finding the spectrum of a Hamiltonian [29]; the Deutsch-Josza [23] algorithm was generalized to allow for arbitrary input states [30], and a similar result holds for an algorithm that performs the functional phase rotation (a generalized form of the conventional conditional phase transform) [31]. Most recently it was shown that Simon's problem and the period-finding problem can be solved quantumly without initializing the auxiliary qubits [32]. For algorithms that do not allow arbitrary input states, one could still make use of the flexibility we have introduced into DFS state initialization, provided it is possible to apply post-selection: one modifies the output error of algorithm by observing whether the measurement outcome came from the DFS block or not (this could be done, e.g., via frequency-selective measurements, similar to the cycling transition method used in trapped-ion quantum computing [59]). ## V Decoherence Free Subspaces and Subsystems in Non-Markovian Dynamics ### Decoherence Free Subspaces In Ref. [33] a new class of non-Markovian master equations was introduced. The following equation was derived as an analytically solvable example of this class: \\[\\frac{\\partial\\rho_{S}}{\\partial t}=-i[\\mathbf{H}_{S},\\rho_{S}]+ \\mathcal{L}\\int_{0}^{t}dt^{\\prime}k(t^{\\prime})\\exp(\\mathcal{L}t^{\\prime})\\rho _{S}(t-t^{\\prime}) \\tag{63}\\] where \\(\\mathcal{L}\\) is Lindblad super-operator and \\(k(t)\\) represents the memory effects of the bath. The Markovian limit is clearly recovered when \\(k(t)\\propto\\delta(t)\\).2 Footnote 2: We note that Ref. [33] contains a small error: the Markovian limit is recovered for \\(k(t)=\\delta(t)\\) only if the lower limit in Eq. (63) is \\(-t\\). This change can easily be applied to the derivation of Ref. [33]. Some examples of physical systems which can be described by this master equation are (i) a two-level atom coupled to a single cavity mode, wherein the memory function is exponentially decaying, \\(k(t)=e^{-\\lambda t}\\)[43], and (ii) a single qubit subject to telegraph noise in the particular case that \\(||\\mathcal{L}||\\ll 1/t\\), whence Eq. (63) reduces to \\(\\dot{\\rho}_{S}=\\mathcal{L}\\int_{0}^{t}dt^{\\prime}k(t^{\\prime})\\rho(t-t^{ \\prime})\\)[60]. It is interesting to investigate the conditions for a DFS in the case of dynamics governed by Eq. (63), and to compare the results with the Markovian limit, \\(k(t)\\propto\\delta(t)\\). We defer proofs to Appendix A.3 and here present only the DFS-condition, stated in the following theorem (note that, similarly to the Markovian case, we consider here a continuous-time DFS). **Theorem 7**: _Assume imperfect initialization. 
Then a subspace \\(\\mathcal{H}_{\\rm DFS}\\) is decoherence free iff the system Hamiltonian \\(\\mathbf{H}_{S}\\) and Lindblad operators \\(\\mathbf{F}_{\\alpha}\\) have the matrix representation_ \\[\\mathbf{H}_{S}=\\left(\\begin{array}{cc}\\mathbf{H}_{\\rm DFS}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{H}_{\\rm DFS^{\\perp}}\\end{array}\\right),\\ \\ \\mathbf{F}_{\\alpha}=\\left(\\begin{array}{cc}c_{\\alpha} \\mathbf{I}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right) \\tag{64}\\] These conditions are identical to those we found in the case of Markovian dynamics with imperfect initialization - cf. Theorem 2. This fact provides evidence for the robustness of decoherence-free states against variations in the nature of the decoherence process. Interestingly, the conditions under the assumption of perfect initialization differ somewhat when comparing the Markovian and non-Markovian cases: **Corollary 3**: _Assume perfect initialization. Then a subspace \\(\\mathcal{H}_{\\rm DFS}\\) is decoherence free iff the system Hamiltonian \\(\\mathbf{H}_{S}\\) and Lindblad operators \\(\\mathbf{F}_{\\alpha}\\) have the matrix representation_ \\[\\mathbf{H}_{S} = \\left(\\begin{array}{cc}\\mathbf{H}_{\\rm DFS}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{H}_{\\rm DFS^{\\perp}}\\end{array}\\right), \\tag{65}\\] \\[\\mathbf{F}_{\\alpha} = \\left(\\begin{array}{cc}c_{\\alpha}\\mathbf{I}&\\mathbf{A}_{\\alpha} \\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right)\\text{ and }\\sum_{\\alpha}c_{\\alpha}^{*} \\mathbf{A}_{\\alpha}=\\mathbf{0}. \\tag{66}\\] Compared to the Markovian case (Theorem 3), the difference is that now the off-diagonal blocks of the Hamiltonian must vanish, whereas in the Markovian case we had the constraint [Eq. (39)] \\(\\mathcal{P}_{\\rm DFS}\\mathbf{H}_{S}\\mathcal{P}_{\\rm DFS}^{\\dagger}=-\\frac{i}{2 }\\sum_{\\alpha}c_{\\alpha}^{*}\\mathbf{A}_{\\alpha}\\). ### Decoherence Free Subsystems We now consider the NS case. The dynamics governing a NS is derived by tracing out \\(\\mathcal{H}_{\\rm in}\\): \\[\\frac{\\partial\\rho_{\\rm NS}}{\\partial t} = \\frac{\\partial\\mathrm{Tr}_{\\rm in}\\{\\rho_{S}\\}}{\\partial t}= \\mathrm{Tr}_{\\rm in}\\{\\frac{\\partial\\rho_{S}}{\\partial t}\\} \\tag{67}\\] \\[= \\mathrm{Tr}_{\\rm in}\\{-i[\\mathbf{H}_{S},\\rho_{S}]\\] \\[+\\mathcal{L}\\int_{0}^{t}dt^{\\prime}k(t^{\\prime})\\exp(\\mathcal{L} t^{\\prime})\\rho_{S}(t-t^{\\prime})\\}\\] **Theorem 8**: _Assume imperfect initialization. Then a subsystem \\(\\mathcal{H}_{\\rm NS}\\) in the decomposition \\(\\mathcal{H}_{S}=\\mathcal{H}_{\\rm NS}\\otimes\\mathcal{H}_{\\rm in}\\oplus \\mathcal{H}_{\\rm out}\\) is decoherence-free (or noiseless) with respect to non-Markovian dynamics [Eq. (63)] iff the Lindblad operators and the system Hamiltonian have the matrix representation_ \\[\\mathbf{F}_{\\alpha} = \\left(\\begin{array}{cc}\\mathbf{I}_{\\rm NS}\\otimes\\mathbf{C}_{ \\alpha}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right) \\tag{68}\\] \\[\\mathbf{H}_{S} = \\left(\\begin{array}{cc}\\mathbf{H}_{\\rm NS}\\otimes\\mathbf{I}_{ \\rm in}+\\mathbf{I}_{\\rm NS}\\otimes\\mathbf{H}_{\\rm in}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{H}_{\\rm out}\\end{array}\\right). \\tag{69}\\] Note that this form is, once again, identical to the Markovian case with imperfect initialization (cf. Theorem 6). 
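The robustness expressed by Theorems 7 and 8 can also be seen by integrating the memory-kernel master equation (63) directly. The sketch below does this for two qubits undergoing collective dephasing with the single Lindblad operator \\(\\mathbf{F}=\\sigma_{1}^{z}+\\sigma_{2}^{z}\\) and \\(\\mathbf{H}_{S}=0\\); the exponential kernel, time step and integration time are illustrative choices of ours. Since \\(\\mathbf{F}\\) is diagonal in the product basis, the Lindblad superoperator acts elementwise on the density matrix, so no explicit superoperator matrices are needed:

```python
import numpy as np

# Two qubits under collective dephasing: single Lindblad operator F = sz(x)I + I(x)sz, H_S = 0.
sz, I2 = np.diag([1.0, -1.0]), np.eye(2)
F = np.kron(sz, I2) + np.kron(I2, sz)
f = np.diag(F)                                  # F is diagonal in the product basis
# The dissipator acts elementwise: L[|m><n|] = (f_m f_n - (f_m^2 + f_n^2)/2) |m><n|
lam = np.array([[fm*fn - 0.5*(fm**2 + fn**2) for fn in f] for fm in f])

def evolve(rho0, kernel, T=4.0, dt=0.01):
    """Euler integration of Eq. (63), d rho/dt = L int_0^t dt' k(t') e^{L t'} rho(t-t'),
    using the elementwise (diagonal) action of L in this basis."""
    steps = int(T/dt)
    hist, rho = [rho0.copy()], rho0.copy()
    for n in range(steps):
        mem = np.zeros_like(rho, dtype=complex)
        for j in range(n + 1):                  # t' = j*dt, rho(t - t') = hist[n - j]
            mem += kernel(j*dt) * np.exp(lam*j*dt) * hist[n - j] * dt
        rho = rho + dt * lam * mem
        hist.append(rho.copy())
    return rho

kernel = lambda t: np.exp(-0.5*t)               # exponentially decaying memory, illustrative
ket = lambda i: np.eye(4)[i]
states = {"DFS state (|01>+|10>)/sqrt2": (ket(1) + ket(2))/np.sqrt(2),
          "non-DFS state (|00>+|11>)/sqrt2": (ket(0) + ket(3))/np.sqrt(2)}
for name, psi in states.items():
    rho_T = evolve(np.outer(psi, psi).astype(complex), kernel)
    print(f"{name}: fidelity = {np.real(psi @ rho_T @ psi):.4f}")
```

The state \\((|01\\rangle+|10\\rangle)/\\sqrt{2}\\) lies in the DFS and retains unit fidelity, while \\((|00\\rangle+|11\\rangle)/\\sqrt{2}\\) decoheres, in line with the conditions of Theorem 7.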
However, as in the DFS case, the conditions are slightly different between Markovian and non-Markovian dynamics if we demand perfect initialization: **Corollary 4**: _Assume perfect initialization. Then a subsystem \\(\\mathcal{H}_{\\rm NS}\\) in the decomposition \\(\\mathcal{H}_{S}=\\mathcal{H}_{\\rm NS}\\otimes\\mathcal{H}_{\\rm in}\\oplus \\mathcal{H}_{\\rm out}\\) is decoherence-free (or noiseless) with respect to non-Markovian dynamics [Eq. (63)] iff the Lindblad operators and the system Hamiltonian have the matrix representation_ \\[\\mathbf{F}_{\\alpha}=\\left(\\begin{array}{cc}\\mathbf{I}_{\\rm NS} \\otimes\\mathbf{C}_{\\alpha}&\\mathbf{A}_{\\alpha}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right), \\tag{70}\\] \\[\\sum_{\\alpha}(\\mathbf{I}_{\\rm NS}\\otimes\\mathbf{C}_{\\alpha}^{ \\dagger})\\mathbf{A}_{\\alpha}=\\mathbf{0},\\] (71) \\[\\mathbf{H}=\\left(\\begin{array}{cc}\\mathbf{H}_{\\rm NS}\\otimes \\mathbf{I}_{\\rm in}+\\mathbf{I}_{\\rm NS}\\otimes\\mathbf{H}_{\\rm in}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{H}_{\\rm out}\\end{array}\\right). \\tag{72}\\] ## VI Summary and Conclusions We have revisited the concepts of decoherence-free subspaces and (noiseless) subsystems (DFSs), and introduced definitions of DFSs that generalize previous work. We have analyzed the conditions for the existence of DFSs in the case of CP maps, Markovian dynamics, and (for the first time) non-Markovian continuous-time dynamics. Our main finding implies significantly relaxed demands on the preparation of decoherence-free states: the initial state can be arbitrarily noisy. If, on the other hand, the initial state is perfectly prepared, then almost arbitrary leakage from outside the DFS into the DFS can be tolerated. In the case of Markovian dynamics, if one demands perfect initialization, our findings are of an opposite nature: we have shown that then an additional constraint must be imposed on the system Hamiltonian, which implies more stringent conditions for the possibility of manipulating a DFS than previously believed. We have presented an example to illustrate this fact. We have also shown that the notion of noiseless subsystems, as originally developed using an algebraic approach, admits a generalization when it is instead developed from a quantum channel approach. Our results have implications for experimental work on DFSs, and in particular on quantum algorithms over DFSs [16; 17]. It is now known that a large class of quantum algorithms can tolerate almost arbitrary preparation errors and still provide an advantage over their classical counterparts [24; 25; 26; 27; 28; 29; 30; 31; 32]. The relaxed preparation conditions for DFSs presented here are naturally compatible with this approach to quantum computation in noisy systems. This should provide further impetus for the experimental exploration of quantum computation over DFSs. ###### Acknowledgements. D.A.L. gratefully acknowledges financial support from Sloan Foundation. This material is partially based on research sponsored by the Defense Advanced Research Projects Agency under the QuIST program and managed by the Air Force Research Laboratory (AFOSR), under agreement F49620-01-1-0468. ## Appendix A Proofs of theorems and corollaries Here we present proofs of all our results above. We shorten the calculations by starting from the NS case and obtain the DFS conditions as a special case. 
### CP Maps #### a.1.1 Arbitrary Initial State Assume the system evolution due to its interaction with a bath is described by a CP map with Kraus operators \\(\\{\\mathbf{E}_{\\alpha}\\}\\): \\[\\rho_{S}(t)=\\sum_{\\alpha}\\mathbf{E}_{\\alpha}\\rho_{S}(0)\\mathbf{E}_{\\alpha}^{ \\dagger}. \\tag{10}\\] Note that here \\(\\rho_{S}\\) is an operator on the entire system Hilbert space \\(\\mathcal{H}_{S}\\), which we assume to be decomposable as \\(\\mathcal{H}_{\\mathrm{NS}}\\otimes\\mathcal{H}_{\\mathrm{in}}\\oplus\\mathcal{H}_{ \\mathrm{out}}\\). From the NS definition, Eq. (49), we have \\[\\mathrm{Tr}_{\\mathrm{in}}\\{\\mathbf{U}\\otimes\\mathbf{I}(\\mathcal{P }_{\\mathrm{NS-in}}\\rho_{S}(0)\\mathcal{P}_{\\mathrm{NS-in}}^{\\dagger})\\mathbf{ U}^{\\dagger}\\otimes\\mathbf{I}\\}=\\] \\[\\mathrm{Tr}_{\\mathrm{in}}\\{\\sum_{\\alpha}(\\mathcal{P}_{\\mathrm{NS -in}}\\mathbf{E}_{\\alpha})\\,\\rho_{S}(0)(\\mathbf{E}_{\\alpha}^{\\dagger}\\mathcal{ P}_{\\mathrm{NS-in}}^{\\dagger})\\}. \\tag{11}\\] Let us represent the Kraus operators in the same block-structure matrix-form as that of the system state, i.e., corresponding to the decomposition \\(\\mathcal{H}_{S}=\\mathcal{H}_{\\mathrm{NS}}\\otimes\\mathcal{H}_{\\mathrm{in}} \\oplus\\mathcal{H}_{\\mathrm{out}}\\), where the blocks correspond to the subspaces \\(\\mathcal{H}_{\\mathrm{NS}}\\otimes\\mathcal{H}_{\\mathrm{in}}\\) (upper-left block) and \\(\\mathcal{H}_{\\mathrm{out}}\\) (lower-right block). Then \\[\\rho_{S} = \\left(\\begin{array}{cc}\\rho_{1}&\\rho_{2}\\\\ \\rho_{2}^{\\dagger}&\\rho_{3}\\end{array}\\right), \\tag{12}\\] \\[\\mathbf{E}_{\\alpha} = \\left(\\begin{array}{cc}\\mathbf{P}_{\\alpha}&\\mathbf{A}_{\\alpha} \\\\ \\mathbf{D}_{\\alpha}&\\mathbf{B}_{\\alpha}\\end{array}\\right), \\tag{13}\\] with appropriate normalization constraints, considered below. Equation (11) simplifies in this matrix form as \\[\\mathrm{Tr}_{\\mathrm{in}}\\{\\mathbf{U}\\otimes\\mathbf{I}\\rho_{1} \\mathbf{U}^{\\dagger}\\otimes\\mathbf{I}\\}=\\mathrm{Tr}_{\\mathrm{in}}\\{\\sum_{ \\alpha}\\mathbf{P}_{\\alpha}\\rho_{1}\\mathbf{P}_{\\alpha}^{\\dagger}\\] \\[+\\mathbf{P}_{\\alpha}\\rho_{2}\\mathbf{A}_{\\alpha}^{\\dagger}+ \\mathbf{A}_{\\alpha}\\rho_{2}^{\\dagger}\\mathbf{P}_{\\alpha}^{\\dagger}+\\mathbf{A }_{\\alpha}\\rho_{3}\\mathbf{A}_{\\alpha}^{\\dagger}\\}, \\tag{14}\\] which must hold for arbitrary \\(\\rho_{S}(0)\\). To derive constraints on the various terms we therefore consider special cases, which yield necessary conditions. First, consider an initial state \\(\\rho_{S}(0)\\) such that \\(\\rho_{2}=\\mathbf{0}\\). Then, as the LHS of Eq. (14) is independent from \\(\\rho_{3}\\), the last term must vanish: \\[\\sum_{\\alpha}\\mathbf{A}_{\\alpha}\\rho_{3}\\mathbf{A}_{\\alpha}^{\\dagger}= \\mathbf{0}\\Longrightarrow\\mathbf{A}_{\\alpha}=\\mathbf{0}. \\tag{15}\\] Further assume \\(\\rho_{1}=|i\\rangle\\langle i|\\otimes|i^{\\prime}\\rangle\\langle i^{\\prime}|\\). Note that the partial matrix element \\(\\langle j^{\\prime}|\\mathbf{P}_{\\alpha}|i^{\\prime}\\rangle\\) is an operator on the \\(\\mathcal{H}_{\\mathrm{NS}}\\) factor, \\(|i\\rangle\\langle i|\\). Then Eq. (14) reduces to \\[|i\\rangle\\langle i|=\\sum_{\\alpha,j^{\\prime}}\\left[\\mathbf{U}^{\\dagger}\\langle j ^{\\prime}|\\mathbf{P}_{\\alpha}|i^{\\prime}\\rangle\\right]|i\\rangle\\langle i| \\left[\\langle i^{\\prime}|\\mathbf{P}_{\\alpha}^{\\dagger}|j^{\\prime}\\rangle \\mathbf{U}\\right]. 
\\tag{16}\\] Taking matrix elements with respect to \\(|i^{\\perp}\\rangle\\), a state orthogonal to \\(|i\\rangle\\), yields: \\[0 = \\sum_{\\alpha,j^{\\prime}}|\\langle i^{\\perp}|\\left[\\mathbf{U}^{ \\dagger}\\langle j^{\\prime}|\\mathbf{P}_{\\alpha}|i^{\\prime}\\rangle\\right]|i \\rangle|^{2} \\tag{17}\\] \\[\\Longrightarrow \\langle i^{\\perp}|\\left[\\mathbf{U}^{\\dagger}\\langle j^{\\prime}| \\mathbf{P}_{\\alpha}|i^{\\prime}\\rangle\\right]|i\\rangle=\\mathbf{0},\\] which, in turn implies that \\(\\left[\\mathbf{U}^{\\dagger}\\langle j^{\\prime}|\\mathbf{P}_{\\alpha}|i^{\\prime} \\rangle\\right]|i\\rangle\\) is proportional to \\(|i\\rangle\\), i.e., \\[\\left[\\langle j^{\\prime}|\\mathbf{P}_{\\alpha}|i^{\\prime}\\rangle\\right]|i \\rangle\\propto\\mathbf{U}|i\\rangle. \\tag{18}\\] Since \\(|i^{\\prime}\\rangle,|j^{\\prime}\\rangle\\) are arbitrary this condition implies that the submatrix \\(\\mathbf{P}_{\\alpha}\\) must be of the form \\(\\mathbf{P}_{\\alpha}=\\mathbf{U}\\otimes\\mathbf{C}_{\\alpha}\\). Substituting \\(\\mathbf{P}_{\\alpha}=\\mathbf{U}\\otimes\\mathbf{C}_{\\alpha}\\) into Eq. (14) we have \\[\\mathrm{Tr}_{\\mathrm{in}}\\{\\mathbf{U}\\otimes\\mathbf{I}\\rho_{1} \\mathbf{U}^{\\dagger}\\otimes\\mathbf{I}\\} = \\mathrm{Tr}_{\\mathrm{in}}\\{\\sum_{\\alpha}\\mathbf{U}\\otimes\\mathbf{C}_{ \\alpha}\\rho_{1}\\mathbf{U}^{\\dagger}\\otimes\\mathbf{C}_{\\alpha}^{\\dagger}\\}, \\tag{38}\\] so that \\[\\mathrm{Tr}_{\\mathrm{in}}\\{\\rho_{1}\\}=\\mathrm{Tr}_{\\mathrm{in}} \\{\\sum_{\\alpha}\\mathbf{I}_{\\mathrm{NS}}\\otimes\\mathbf{C}_{\\alpha}\\rho_{1} \\mathbf{I}_{\\mathrm{NS}}\\otimes\\mathbf{C}_{\\alpha}^{\\dagger}\\}. \\tag{39}\\] Now suppose \\(\\rho_{1}=\\sum_{iji^{\\prime}j^{\\prime}}\\lambda_{iji^{\\prime}j^{\\prime}}|i \\rangle\\langle j|\\otimes|i^{\\prime}\\rangle\\langle j^{\\prime}|\\); then from Eq. (39) we find \\[\\sum_{iji^{\\prime}}\\lambda_{iji^{\\prime}i^{\\prime}}|i\\rangle \\langle j|=\\] \\[\\sum_{iji^{\\prime}j^{\\prime}k^{\\prime}\\alpha}\\lambda_{iji^{ \\prime}j^{\\prime}}|i\\rangle\\langle j|\\ \\langle k^{\\prime}|\\mathbf{C}_{\\alpha}|i^{\\prime}\\rangle \\langle j^{\\prime}|\\mathbf{C}_{\\alpha}^{\\dagger}|k^{\\prime}\\rangle. \\tag{40}\\] Using \\(\\sum_{k^{\\prime}}|k^{\\prime}\\rangle\\langle k^{\\prime}|=\\mathbf{I}_{\\mathrm{in}}\\), Eq. (38) becomes \\[\\sum_{iji^{\\prime}}\\lambda_{iji^{\\prime}i^{\\prime}}|i\\rangle \\langle j|=\\sum_{iji^{\\prime}j^{\\prime}}\\lambda_{iji^{\\prime}j^{\\prime}}|i \\rangle\\langle j|\\langle j^{\\prime}|\\sum_{\\alpha}\\mathbf{C}_{\\alpha}^{\\dagger }\\mathbf{C}_{\\alpha}|i^{\\prime}\\rangle. \\tag{41}\\] It follows that \\[\\sum_{\\alpha}\\mathbf{C}_{\\alpha}^{\\dagger}\\mathbf{C}_{\\alpha}= \\mathbf{I}_{\\mathrm{in}}. \\tag{42}\\] Next consider the normalization constraint \\(\\sum_{\\alpha}\\mathbf{E}_{\\alpha}^{\\dagger}\\mathbf{E}_{\\alpha}=\\mathbf{I}\\) for the Kraus operators, together with the additional constraints we have derived (\\(\\mathbf{A}_{\\alpha}=\\mathbf{0}\\), \\(\\mathbf{P}_{\\alpha}=\\mathbf{U}\\otimes\\mathbf{C}_{\\alpha}\\)): \\[\\sum_{\\alpha}\\mathbf{P}_{\\alpha}^{\\dagger}\\mathbf{P}_{\\alpha}+ \\mathbf{D}_{\\alpha}^{\\dagger}\\mathbf{D}_{\\alpha}=\\mathbf{I}_{\\mathrm{NS}} \\otimes\\mathbf{I}_{\\mathrm{in}}\\] \\[\\Longrightarrow\\mathbf{I}_{\\mathrm{NS}}\\otimes\\sum_{\\alpha} \\mathbf{C}_{\\alpha}^{\\dagger}\\mathbf{C}_{\\alpha}+\\sum_{\\alpha}\\mathbf{D}_{ \\alpha}^{\\dagger}\\mathbf{D}_{\\alpha}=\\mathbf{I}_{\\mathrm{NS}}\\otimes\\mathbf{I }_{\\mathrm{in}}. 
\\tag{43}\\] But, from Eq. (42) we have \\(\\sum_{\\alpha}\\mathbf{P}_{\\alpha}^{\\dagger}\\mathbf{P}_{\\alpha}=\\mathbf{I}_{ \\mathrm{NS}}\\otimes\\mathbf{I}_{\\mathrm{in}}\\). Therefore \\(\\mathbf{D}_{\\alpha}=\\mathbf{0}\\). Taking all these conditions together finalizes the matrix representation of the Kraus operators as \\[\\mathbf{E}_{\\alpha}=\\left(\\begin{array}{cc}\\mathbf{U}\\otimes \\mathbf{C}_{\\alpha}&\\mathbf{0}\\\\ \\mathbf{0}&\\mathbf{B}_{\\alpha}\\end{array}\\right). \\tag{44}\\] For a scalar \\(\\mathbf{C}_{\\alpha}\\) we recover the DFS condition (19). These considerations establish the necessity of the representation (44); it is simple to show that this representation is also sufficient, by substitution and checking that the NS and DFS conditions are satisfied. Therefore we have proved Theorems 1 and 4. #### a.2.2 Perfect Initialization We now prove Corollaries 1 and 2 for DF-initialized states of the form \\(\\rho_{S}(0)=\\mathcal{P}_{\\mathrm{d}}\\rho_{S}(0)\\mathcal{P}_{\\mathrm{d}}\\). Thus, we have to prove that \\(\\mathbf{D}_{\\alpha}=\\mathbf{0}\\) in Eq. (36). When \\(\\rho_{S}(0)=\\mathcal{P}_{\\mathrm{d}}\\rho_{S}(0)\\mathcal{P}_{\\mathrm{d}}\\) we have that \\(\\rho_{2}=\\mathbf{0}\\) and \\(\\rho_{3}=\\mathbf{0}\\) and Eq. (37) reduces to \\[\\mathrm{Tr}_{\\mathrm{in}}\\{\\mathbf{U}\\otimes\\mathbf{I}\\rho_{1} \\mathbf{U}^{\\dagger}\\otimes\\mathbf{I}\\}=\\mathrm{Tr}_{\\mathrm{in}}\\{\\sum_{ \\alpha}\\mathbf{P}_{\\alpha}\\rho_{1}\\mathbf{P}_{\\alpha}^{\\dagger}\\}. \\tag{45}\\] The argument leading to the vanishing of the \\(\\mathbf{A}_{\\alpha}\\) [Eq. (38)] then does not apply, and indeed the \\(\\mathbf{A}_{\\alpha}\\) need not vanish. However, the arguments leading to \\(\\mathbf{P}_{\\alpha}=\\mathbf{U}\\otimes\\mathbf{C}_{\\alpha}\\) and \\(\\sum_{\\alpha}\\mathbf{P}_{\\alpha}^{\\dagger}\\mathbf{P}_{\\alpha}=\\mathbf{I}_{ \\mathrm{NS}}\\otimes\\mathbf{I}_{\\mathrm{in}}\\) do apply. Hence \\(\\mathbf{D}_{\\alpha}=\\mathbf{0}\\). 
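The conclusion just reached, namely that under perfect initialization \\(\\mathbf{D}_{\\alpha}=\\mathbf{0}\\) while the \\(\\mathbf{A}_{\\alpha}\\) may remain nonzero, is the origin of the preparation/leakage trade-off noted in the main text. It can be made concrete with a small numerical experiment; the sketch below treats the DFS case \\(\\dim(\\mathcal{H}_{\\mathrm{in}})=1\\), and the block sizes, the scalars \\(c_{\\alpha}\\), the leakage blocks \\(\\mathbf{A}_{\\alpha}\\) and the input states are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
dD, dP = 2, 2                                   # dim(H_DFS), dim(H_DFS_perp)
c = np.array([1.0, 1.0]) / np.sqrt(2)           # scalars c_alpha, sum |c_alpha|^2 = 1

A1 = rng.normal(size=(dD, dP)) + 1j*rng.normal(size=(dD, dP))
A1 *= 0.3 / np.linalg.norm(A1, 2)               # keep the leakage block small
A = [A1, -A1]                                   # ensures sum_alpha c_alpha^* A_alpha = 0

# Lower-right blocks chosen so that the Kraus sum rule holds:
M = np.eye(dP) - 2 * (A1.conj().T @ A1)
w, V = np.linalg.eigh(M)
B = V @ np.diag(np.sqrt(w / 2)) @ V.conj().T    # B1 = B2 = sqrt((I - 2 A1^dag A1)/2)

E = [np.block([[c[a]*np.eye(dD), A[a]],
               [np.zeros((dP, dD)), B]]) for a in range(2)]
assert np.allclose(sum(e.conj().T @ e for e in E), np.eye(dD + dP))

def channel(r):
    return sum(e @ r @ e.conj().T for e in E)

rho_perfect = np.diag([0.7, 0.3, 0.0, 0.0]).astype(complex)   # supported on the DFS
rho_leaky = rho_perfect.copy()                                 # trace 1, with leakage
rho_leaky[0, 0] = 0.6
rho_leaky[2, 2] = 0.1
rho_leaky[0, 2] = rho_leaky[2, 0] = 0.1

for name, r in [("perfect initialization", rho_perfect),
                ("imperfect initialization", rho_leaky)]:
    change = np.linalg.norm(channel(r)[:dD, :dD] - r[:dD, :dD])
    print(f"{name}: change of the DFS block = {change:.2e}")
```

The DFS block of the perfectly initialized state is unchanged despite \\(\\mathbf{A}_{\\alpha}\\neq\\mathbf{0}\\), whereas the populated \\(\\rho_{2},\\rho_{3}\\) blocks of the imperfectly initialized state are fed back into the DFS block, which is why the imperfect-initialization case of Theorem 1 requires \\(\\mathbf{A}_{\\alpha}=\\mathbf{0}\\).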
### Markovian Dynamics #### a.2.3 Arbitrary Initial State Consider Markovian dynamics \\[\\frac{\\partial\\rho_{S}}{\\partial t} = -i[\\mathbf{H}_{S},\\rho_{S}]+\\sum_{\\alpha}\\mathbf{F}_{\\alpha}\\rho_ {S}\\mathbf{F}_{\\alpha}^{\\dagger} \\tag{46}\\] \\[-\\frac{1}{2}\\mathbf{F}_{\\alpha}^{\\dagger}\\mathbf{F}_{\\alpha}\\rho_ {S}-\\frac{1}{2}\\rho_{S}\\mathbf{F}_{\\alpha}^{\\dagger}\\mathbf{F}_{\\alpha},\\] with the following matrix representation of the various operators: \\[\\rho_{S} = \\left(\\begin{array}{cc}\\rho_{1}&\\rho_{2}\\\\ \\rho_{2}^{\\dagger}&\\rho_{3}\\end{array}\\right),\\] \\[\\mathbf{H}_{S} = \\left(\\begin{array}{cc}\\mathbf{H}_{1}&\\mathbf{H}_{2}\\\\ \\mathbf{H}_{2}^{\\dagger}&\\mathbf{H}_{3}\\end{array}\\right),\\ \\mathbf{F}_{\\alpha}=\\left(\\begin{array}{cc}\\mathbf{P}_{ \\alpha}&\\mathbf{A}_{\\alpha}\\\\ \\mathbf{D}_{\\alpha}&\\mathbf{B}_{\\alpha}\\end{array}\\right).\\] Then we find the dynamics of the NS block to be \\[\\frac{\\partial\\rho_{\\mathrm{NS}}}{\\partial t}=\\frac{\\partial \\mathrm{Tr}_{\\mathrm{in}}\\{\\rho_{1}\\}}{\\partial t}=\\] \\[-i\\mathrm{Tr}_{\\mathrm{in}}\\{[\\mathbf{H}_{1},\\rho_{1}]\\}-i\\mathrm{ Tr}_{\\mathrm{in}}\\{(\\mathbf{H}_{2}\\rho_{2}^{\\dagger}-\\rho_{2}\\mathbf{H}_{2}^{\\dagger})\\}+\\] \\[\\mathrm{Tr}_{\\mathrm{in}}\\{\\sum_{\\alpha}\\mathbf{P}_{\\alpha}\\rho_{1} \\mathbf{P}_{\\alpha}^{\\dagger}+\\mathbf{A}_{\\alpha}\\rho_{2}^{\\dagger}\\mathbf{P}_{ \\alpha}^{\\dagger}+\\mathbf{P}_{\\alpha}\\rho_{2}\\mathbf{A}_{\\alpha}^{\\dagger}+ \\mathbf{A}_{\\alpha}\\rho_{3}\\mathbf{A}_{\\alpha}^{\\dagger}\\] \\[-\\frac{1}{2}\\sum_{\\alpha}(\\mathbf{P}_{\\alpha}^{\\dagger}\\mathbf{P}_{ \\alpha}+\\mathbf{D}_{\\alpha}^{\\dagger}\\mathbf{D}_{\\alpha})\\rho_{1}+(\\mathbf{P}_{ \\alpha}^{\\dagger}\\mathbf{A}_{\\alpha}+\\mathbf{D}_{\\alpha}^{\\dagger}\\mathbf{B}_{ \\alpha})\\rho_{2}^{\\dagger}\\] \\[-\\frac{1}{2}\\sum_{\\alpha}\\rho_{1}(\\mathbf{P}_{\\alpha}^{\\dagger} \\mathbf{P}_{\\alpha}+\\mathbf{D}_{\\alpha}^{\\dagger}\\mathbf{D}_{\\alpha})+\\rho_{2}( \\mathbf{A}_{\\alpha}^{\\dagger}\\mathbf{P}_{\\alpha}+\\mathbf{B}_{\\alpha}^{\\dagger} \\mathbf{D}_{\\alpha})\\}\\] The right-hand side of this equation must be independent of \\(\\rho_{2}\\) and \\(\\rho_{3}\\), for any matrices \\(\\rho_{2}\\) and \\(\\rho_{3}\\). Therefore the term \\(\\mathbf{A}_{\\alpha}\\rho_{3}\\mathbf{A}_{\\alpha}^{\\dagger}\\) implies \\(\\mathbf{A}_{\\alpha}=\\mathbf{0}\\). Collecting the remaining terms acting on \\(\\rho_{2}^{\\dagger}\\) from the left yields \\(\\mathrm{Tr}_{\\mathrm{in}}\\{(-i\\mathbf{H}_{2}-\\mathbf{D}_{\\alpha}^{\\dagger} \\mathbf{B}_{\\alpha})\\rho_{2}^{\\dagger}\\}=\\mathbf{0}\\). Together we have \\[\\mathbf{A}_{\\alpha}=\\mathbf{0},\\quad i\\mathbf{H}_{2}+\\sum_{ \\alpha}\\mathbf{D}_{\\alpha}^{\\dagger}\\mathbf{B}_{\\alpha}=\\mathbf{0}. \\tag{47}\\]This reduces Eq. 
(105) to \\[\\frac{\\partial\\rho_{\\rm NS}}{\\partial t}=\\frac{\\partial{\\rm Tr}_{ \\rm in}\\{\\rho_{1}\\}}{\\partial t}=\\] \\[-i{\\rm Tr}_{\\rm in}[{\\bf H}_{1},\\rho_{1}]+{\\rm Tr}_{\\rm in}\\sum_{ \\alpha}{\\bf P}_{\\alpha}\\rho_{1}{\\bf P}_{\\alpha}^{\\dagger}\\] \\[-\\frac{1}{2}{\\rm Tr}_{\\rm in}\\sum_{\\alpha}\\{({\\bf P}_{\\alpha}^{ \\dagger}{\\bf P}_{\\alpha}+{\\bf D}_{\\alpha}^{\\dagger}{\\bf D}_{\\alpha}),\\rho_{1}\\} \\tag{106}\\] Consider the initial state \\(\\rho_{1}=\\rho_{\\rm NS}\\otimes|i^{\\prime}\\rangle\\langle i^{\\prime}|\\), with \\(|i^{\\prime}\\rangle\\in{\\cal H}_{\\rm in}\\): \\[\\frac{\\partial\\rho_{\\rm NS}}{\\partial t}=-i[\\langle i^{\\prime}|{ \\bf H}_{1}|i^{\\prime}\\rangle,\\rho_{\\rm NS}]\\] \\[+\\sum_{\\alpha}\\langle j^{\\prime}|{\\bf P}_{\\alpha}|i^{\\prime} \\rangle\\rho_{\\rm NS}\\langle i^{\\prime}|{\\bf P}_{\\alpha}^{\\dagger}|j^{\\prime}\\rangle\\] \\[-\\frac{1}{2}\\sum_{\\alpha}\\{\\rho_{\\rm NS},(\\langle i^{\\prime}|{ \\bf P}_{\\alpha}^{\\dagger}|j^{\\prime}\\rangle\\langle j^{\\prime}|{\\bf P}_{\\alpha} |i^{\\prime}\\rangle\\] \\[+\\langle i^{\\prime}|{\\bf D}_{\\alpha}^{\\dagger}|j^{\\prime}\\rangle \\langle j^{\\prime}|{\\bf D}_{\\alpha}|i^{\\prime}\\rangle)\\} \\tag{107}\\] Let \\(\\rho_{\\rm NS}=|\\psi\\rangle\\langle\\psi|\\) with \\(\\psi\\) arbitrary and apply \\(\\langle\\psi^{\\perp}| |\\psi^{\\perp}\\rangle\\), such that \\(\\langle\\psi^{\\perp}|\\psi\\rangle=0\\), to Eq. (107), denoting \\({\\bf P}_{\\alpha,i^{\\prime},j^{\\prime}}\\equiv\\langle j^{\\prime}|{\\bf P}_{\\alpha }|i^{\\prime}\\rangle\\): \\[\\sum_{\\alpha}|\\langle\\psi^{\\perp}|{\\bf P}_{\\alpha,i^{\\prime},j^{\\prime}}|\\psi \\rangle|^{2}=0. \\tag{108}\\] Since this identity must hold for all \\(\\psi\\) and \\(\\psi^{\\perp}\\), we find that \\({\\bf P}_{\\alpha,i^{\\prime},j^{\\prime}}=c_{\\alpha,i^{\\prime},j^{\\prime}}{\\bf I }_{\\rm NS}\\), which implies that \\({\\bf P}_{\\alpha}={\\bf I}_{\\rm NS}\\otimes{\\bf C}_{\\rm in}^{\\alpha}\\). Moreover, by definition of a NS, there exists a Hermitian matrix \\({\\bf H}_{\\rm NS}\\) such that \\(\\rho_{\\rm NS}\\) obeys a Schrodinger equation, \\(\\partial\\rho_{\\rm NS}/\\partial t=-i[{\\bf H}_{\\rm NS},\\rho_{\\rm NS}]\\). Therefore the non-Hermitian term \\(\\sum_{\\alpha}{\\bf D}_{\\alpha}^{\\dagger}{\\bf D}_{\\alpha}\\) in Eq. (106) must vanish, implying that \\({\\bf D}_{\\alpha}={\\bf 0}\\). Combining these results with Eq. (106) yields \\[\\frac{\\partial{\\rm Tr}_{\\rm in}\\{\\rho_{1}\\}}{\\partial t} = -i{\\rm Tr}_{\\rm in}\\{[{\\bf H}_{1},\\rho_{1}]\\} \\tag{109}\\] \\[\\equiv -i[{\\bf H}_{\\rm NS},\\rho_{\\rm NS}]\\] This identity can be realized iff \\({\\bf H}_{1}={\\bf H}_{\\rm NS}\\otimes{\\bf I}_{\\rm in}+{\\bf I}_{\\rm NS}\\otimes{ \\bf H}_{\\rm in}\\). Therefore the NS conditions are obtained as \\[{\\bf H} = \\left(\\begin{array}{cc}{\\bf H}_{\\rm NS}\\otimes{\\bf I}_{\\rm in} +{\\bf I}_{\\rm NS}\\otimes{\\bf H}_{\\rm in}&{\\bf 0}\\\\ {\\bf 0}&{\\bf H}_{3}\\end{array}\\right),\\] \\[{\\bf F}_{\\alpha} = \\left(\\begin{array}{cc}{\\bf I}_{\\rm NS}\\otimes{\\bf C}_{\\rm in}^ {\\alpha}&{\\bf 0}\\\\ {\\bf 0}&{\\bf B}_{\\alpha}\\end{array}\\right). \\tag{110}\\] The DFS condition is a special case of (107), with \\({\\rm dim}({\\cal H}_{\\rm in})=1\\). This concludes the proof of Theorems 2 and 6. #### a.2.2 Perfect Initialization Now consider perfect initialization: \\[\\rho_{S}=\\left(\\begin{array}{cc}\\rho_{1}&={\\bf 0}\\\\ ={\\bf 0}&={\\bf 0}\\end{array}\\right). 
\\tag{111}\\] This is just the case of an arbitrary initial state considered above, with \\(\\rho_{2}={\\bf 0}\\) and \\(\\rho_{3}={\\bf 0}\\) in Eq. (105). This then yields the dynamics of \\(\\rho_{\\rm NS}\\) as being given by Eq. (106). Repeating the derivation following Eq. (106) we conclude again that \\({\\bf D}_{\\alpha}={\\bf 0}\\), \\({\\bf P}_{\\alpha}\\)= \\({\\bf I}_{\\rm NS}\\otimes{\\bf C}_{\\rm in}^{\\alpha}\\) and \\({\\bf H}_{1}={\\bf H}_{\\rm NS}\\otimes{\\bf I}_{\\rm in}+{\\bf I}_{\\rm NS}\\otimes{ \\bf H}_{\\rm in}\\). Note that Eq. (106) now does not apply (it was obtained assuming nonzero \\(\\rho_{2},\\rho_{3}\\)), i.e., we cannot conclude that \\({\\bf A}_{\\alpha}\\) and \\({\\bf H}_{2}\\) vanish. This implies that that \\(\\partial\\rho_{S}/\\partial t\\) has a non-zero off-diagonal elements, which, using the master equation (103), we calculate to be: \\[{\\rm upper\\ right\\ block}:\\] \\[=i\\rho_{1}{\\bf H}_{2}-\\frac{1}{2}\\rho_{1}\\sum_{\\alpha}({\\bf I}_{ \\rm NS}\\otimes{\\bf C}_{\\rm in}^{\\alpha\\dagger}){\\bf A}_{\\alpha}\\] \\[\\mbox{\\rm bottom\\ right\\ block:}\\ \\sum_{\\alpha}{\\bf D}_{\\alpha}\\rho_{1}{\\bf D}_{ \\alpha}^{\\dagger}={\\bf 0}.\\] To prevent the appearance of corresponding off-diagonal blocks in \\(\\rho_{S}\\), we must therefore demand \\[{\\bf H}_{2}+\\frac{i}{2}\\sum_{\\alpha}({\\bf I}_{\\rm NS}\\otimes{\\bf C}_{\\rm in}^{ \\alpha\\dagger}){\\bf A}_{\\alpha}={\\bf 0}, \\tag{112}\\] which is Eq. (56). The DFS case is obtained with \\({\\rm dim}({\\cal H}_{\\rm in})=1\\). This concludes the proof of Theorems 3 and 5. ### Non-Markovian Dynamics The derivation of the conditions for decoherence-freeness in the case of non-Markovian dynamics is somewhat different from the other two cases we have considered, because of the appearance of the nonlocal-in-time integral in the master equation: \\[\\frac{\\partial\\rho_{S}}{\\partial t}=-i[{\\bf H}_{S},\\rho_{S}]+{\\cal L}\\int_{0}^{t }dt^{\\prime}k(t^{\\prime})\\exp({\\cal L}t^{\\prime})\\rho_{S}(t-t^{\\prime}) \\tag{113}\\] In order to find necessary conditions on the structure of \\({\\bf H}_{S}\\) and \\({\\cal L}\\) consider the case of small \\(t\\), expand \\[\\rho_{S}(t)=\\sum_{n=0}t^{n}\\rho_{S}^{(n)}(0),\\ \\ k(t)=\\sum_{m=0}t^{m}k^{(m)}(0), \\tag{114}\\] and substitute into Eq. (113). The constant \\((t^{0})\\) term yields \\[\\rho_{S}^{(1)}(0)=-i[{\\bf H}_{S},\\rho_{S}(0)]. \\tag{115}\\] The terms involving \\(t^{1}\\) yield, after Taylor-expanding \\(\\exp({\\cal L}t^{\\prime})\\): \\[2\\rho_{S}^{(2)}(0)=-i[{\\bf H}_{S},\\rho_{S}^{(1)}(0)]+k(0){\\cal L}\\rho_{S}(0). \\tag{116}\\]Thus the solution of Eq. (100) up to first and second order in time is: \\[\\rho_{S}(t) = \\rho_{S}(0)-it[{\\bf H}_{S},\\rho_{S}(0)]+O(t^{2}), \\tag{101}\\] \\[\\rho_{S}(t) = \\rho_{S}(0)-it[{\\bf H}_{S},\\rho_{S}(0)]\\] \\[-\\frac{t^{2}}{2}\\{-[{\\bf H}_{S},[{\\bf H}_{S},\\rho_{S}(0)]]+k(0){ \\cal L}\\rho_{S}(0)\\}+O(t^{3}).\\] #### a.2.1 Arbitrary Initial State Consider once again the matrix representations as in Eq. (100). 
Substituting these expressions into the first order equation (101), the \\(\\rho_{1}(t)\\) block yields \\[\\rho_{\\rm NS}(t) = \\rho_{\\rm NS}(0)-it{\\rm Tr}_{\\rm in}\\{[{\\bf H}_{1},\\rho_{1}(0)]\\}\\] \\[-it{\\rm Tr}_{\\rm in}\\{{\\bf H}_{2}\\rho_{2}^{\\dagger}(0)-\\rho_{2}(0 ){\\bf H}_{2}^{\\dagger}\\}\\Longrightarrow\\] \\[{\\bf H}_{2} = {\\bf 0},\\ \\ {\\bf H}_{1}={\\bf H}_{\\rm NS}\\otimes{\\bf I}_{\\rm in}+{ \\bf I}_{\\rm NS}\\otimes{\\bf H}_{\\rm in}.\\] Continuing to second order, Eq. (101), the NS block is found to be \\[\\rho_{\\rm NS}(t)=\\rho_{\\rm NS}(0)-it[{\\bf H}_{\\rm NS},\\rho_{\\rm NS }(0)]\\] \\[-\\frac{t^{2}}{2}[{\\bf H}_{\\rm NS},[{\\bf H}_{\\rm NS},\\rho_{\\rm NS }(0)]]+{\\rm Tr}_{\\rm in}\\{2k(0)\\sum_{\\alpha}{\\bf P}_{\\alpha}\\rho_{1}{\\bf P}_{ \\alpha}^{\\dagger}\\] \\[+{\\bf A}_{\\alpha}\\rho_{2}^{\\dagger}{\\bf P}_{\\alpha}^{\\dagger}+{ \\bf P}_{\\alpha}\\rho_{2}{\\bf A}_{\\alpha}^{\\dagger}+{\\bf A}_{\\alpha}\\rho_{3}{ \\bf A}_{\\alpha}^{\\dagger}\\] \\[-k(0)\\sum_{\\alpha}({\\bf P}_{\\alpha}^{\\dagger}{\\bf P}_{\\alpha}+{ \\bf D}_{\\alpha}^{\\dagger}{\\bf D}_{\\alpha})\\rho_{1}+({\\bf P}_{\\alpha}^{\\dagger }{\\bf A}_{\\alpha}+{\\bf D}_{\\alpha}^{\\dagger}{\\bf B}_{\\alpha})\\rho_{2}^{\\dagger}\\] \\[-k(0)\\sum_{\\alpha}\\rho_{1}({\\bf P}_{\\alpha}^{\\dagger}{\\bf P}_{ \\alpha}+{\\bf D}_{\\alpha}^{\\dagger}{\\bf D}_{\\alpha})+\\rho_{2}({\\bf A}_{\\alpha} ^{\\dagger}{\\bf P}_{\\alpha}+{\\bf B}_{\\alpha}^{\\dagger}{\\bf D}_{\\alpha})\\}.\\] The first three terms correspond to unitary evolution, but the remaining terms are essentially identical to the case of Markovian dynamics and must be made to vanish, just as in Eq. (102). The same arguments used there apply and consequently \\[{\\bf F}_{\\alpha}=\\left(\\begin{array}{cc}{\\bf I}_{\\rm NS}\\otimes{\\bf C}_{ \\rm in}^{\\alpha}&{\\bf 0}\\\\ {\\bf 0}&{\\bf B}_{\\alpha}\\end{array}\\right). \\tag{102}\\] The conditions (100), (102) are necessary and sufficient for unitary evolution of the NS block under our non-Markovian master equation. The DFS case is obtained with \\(\\dim({\\cal H}_{\\rm in})=1\\). This concludes the proof of Theorems 7 and 8. #### a.2.2 Perfect Initialization Assume \\[\\rho_{S}(0)=\\left(\\begin{array}{cc}\\rho(0)&{\\bf 0}\\\\ {\\bf 0}&{\\bf 0}\\end{array}\\right); \\tag{103}\\] then from the first order equation (101), the NS block is found to satisfy \\[\\rho_{\\rm NS}(t) = \\rho_{\\rm NS}(0)-it{\\rm Tr}_{\\rm in}\\{[{\\bf H}_{1},\\rho(0)]\\}\\Longrightarrow\\] \\[{\\bf H}_{1} = {\\bf H}_{\\rm NS}\\otimes{\\bf I}_{\\rm in}+{\\bf I}_{\\rm NS}\\otimes{ \\bf H}_{\\rm in}. \\tag{104}\\] To second order in time [Eq. (101)]: \\[\\rho_{\\rm NS}(t)=\\rho_{\\rm NS}(0)-it[{\\bf H}_{\\rm NS},\\rho_{\\rm NS }(0)]\\] \\[-\\frac{t^{2}}{2}[{\\bf H}_{\\rm NS},[{\\bf H}_{\\rm NS},\\rho_{\\rm NS }(0)]]\\] \\[+\\frac{t^{2}}{2}{\\rm Tr}_{\\rm in}\\{-{\\bf H}_{2}{\\bf H}_{2}^{ \\dagger}\\rho(0)-\\rho(0){\\bf H}_{2}{\\bf H}_{2}^{\\dagger}\\] \\[+2k(0)\\sum_{\\alpha}{\\bf P}_{\\alpha}\\rho{\\bf P}_{\\alpha}^{\\dagger} -({\\bf P}_{\\alpha}^{\\dagger}{\\bf P}_{\\alpha}+{\\bf D}_{\\alpha}^{\\dagger}{\\bf D}_ {\\alpha})\\rho(0)\\] \\[-\\rho(0)({\\bf P}_{\\alpha}^{\\dagger}{\\bf P}_{\\alpha}+{\\bf D}_{ \\alpha}^{\\dagger}{\\bf D}_{\\alpha})\\},\\] which is again similar to the Markovian case. 
Similar logic therefore yields \\({\\bf H}_{2}={\\bf D}_{\\alpha}={\\bf 0}\\), and hence \\[{\\bf F}_{\\alpha}=\\left(\\begin{array}{cc}{\\bf I}_{\\rm NS}\\otimes{\\bf C}_{ \\alpha}&{\\bf A}_{\\alpha}\\\\ {\\bf 0}&{\\bf B}_{\\alpha}\\end{array}\\right). \\tag{105}\\] Here we should notice that the density matrix \\(\\rho_{S}(0)\\) has an off-diagonal element \\(\\rho(0)\\sum_{\\alpha}({\\bf P}_{\\alpha}^{\\dagger}{\\bf A}_{\\alpha}+{\\bf D}_{ \\alpha}^{\\dagger}{\\bf B}_{\\alpha})=\\rho(0)\\sum_{\\alpha}{\\bf P}_{\\alpha}^{ \\dagger}{\\bf A}_{\\alpha}\\). This term must vanish, for otherwise \\(\\rho_{S}(t)\\) has non-zero off-diagonal elements. Summarizing, we have \\[{\\bf F}_{\\alpha} = \\left(\\begin{array}{cc}{\\bf I}_{\\rm NS}\\otimes{\\bf C}_{\\alpha}& {\\bf A}_{\\alpha}\\\\ {\\bf 0}&{\\bf B}_{\\alpha}\\end{array}\\right),\\quad\\sum_{\\alpha}({\\bf I}_{\\rm NS} \\otimes{\\bf C}_{\\alpha}^{\\dagger}){\\bf A}_{\\alpha}=0,\\] \\[{\\bf H} = \\left(\\begin{array}{cc}{\\bf H}_{\\rm NS}\\otimes{\\bf I}_{\\rm in}+{ \\bf I}_{\\rm NS}\\otimes{\\bf H}_{\\rm in}&{\\bf 0}\\\\ {\\bf 0}&{\\bf H}_{\\rm out}\\end{array}\\right). \\tag{106}\\] The DFS case is obtained with \\(\\dim({\\cal H}_{\\rm in})=1\\). This concludes the proof of Corollaries 3 and 4. ## References * (1) R. Landauer, Proc. Roy. Soc. London Ser. A **353**, 367 (1995). * (2) W.G. Unruh, Phys. Rev. A **51**, 992 (1995). * (3) S. Haroche and J.M. Raimond, Physics Today **49**, 51 (1996). * (4) L.-M Duan and G.-C. Guo, Phys. Rev. A **57**, 737 (1998). * (5) P. Zanardi and M. Rasetti, Phys. Rev. Lett. **79**, 3306 (1997). * (6) D.A. Lidar, I.L. Chuang and K.B. Whaley, Phys. Rev. Lett. **81**, 2594 (1998). * (7) D.A. Lidar, D. Bacon, J. Kempe, and K.B. Whaley, Phys. Rev. A **63**, 022306 (2001). * (8) E. Knill, R. Laflamme and L. Viola, Phys. Rev. Lett. **84**, 2525 (2000). * (9) S. De Filippo, Phys. Rev. A **62**, 052307 (2000). * (10) J. Kempe, D. Bacon, D.A. Lidar, and K.B. Whaley, Phys. Rev. A **63**, 042307 (2001). * (11) C.-P. Yang and J. Gea-Banacloche, Phys. Rev. A **63**, 022311 (2001). * (12) P.G. Kwiat, A.J. Berglund, J.B. Altepeter, and A.G. White, Science **290**, 498 (2000). * (13) D. Kielpinski, V. Meyer, M.A. Rowe, C.A. Sackett, W.M. Itano, C. Monroe, and D.J. Wineland, Science **291**, 1013 (2001). * (14) E.M. Fortunato, L. Viola, J. Hodges, G. Teklemariam, and D.G. Cory, New J. Phys. **4**, 5 (2002). * (15) L. Viola, E.M. Fortunato, M.A. Pravia, E. Knill, R. Laflamme, D.G. Cory, Science **293**, 2059 (2001). * (16) M. Mohseni, J.S. Lundeen, K.J. Resch, A.M. Steinberg, Phys. Rev. Lett. **91**, 187903 (2003). * (17) J. Ollerenshaw, D.A. Lidar, and L.E. Kay, Phys. Rev. Lett. **91**, 217904 (2003). * (18) D.A. Lidar, K.B. Whaley, in _Irreversible Quantum Dynamics_, Vol. 622 of _Lecture Notes in Physics_, edited by F. Benatti and R. Floreanini (Springer, Berlin, 2003), p. 83, eprint quant-ph/0301032. * (19) J. Preskill, in _Introduction to Quantum Computation and Information_, edited by H.K. Lo, S. Popescu and T.P. Spiller (World Scientific, Singapore, 1999). * (20) D.W. Kribs, R. Laflamme, D. Poulin, and M. Lesosky, (2005), eprint quant-ph/0504189. * (21) P.W. Shor, SIAM J. on Comp. **26**, 1484 (1997). * (22) L.K. Grover, _Proceedings of the 28th Annual ACM Symposium on the Theory of Computing_ (ACM, New York, NY, 1996), p. 212. * (23) D. Deutsch and R. Jozsa, Proc. Roy. Soc. London Ser. A **439**, 553 (1992). * (24) D. Biron, O. Biham, E. Biham, M. Grassl and D.A. 
Lidar, in _Quantum Computing & Quantum Communications; First NASA International Conference; selected papers, QCQC?98_, Vol. 1509 of _Lecture Notes in Computer Science_, edited by C. P. Williams (Springer, Palm Springs, 1998), pp.140-147, eprint quant-ph/9801066. * (25) E. Biham, O. Biham, D. Biron, M. Grassl and D.A. Lidar, Phys. Rev. A **60**, 2742 (1999). * (26) E. Biham, O. Biham, D. Biron, M. Grassl, D.A. Lidar, and D. Shapira, Phys. Rev. A **63**, 012310 (2001). * (27) E. Biham, D. Kenigsberg, Phys. Rev. A **66**, 062301 (2002). * (28) S. Parker and M.B. Plenio, Phys. Rev. Lett. **85**, 3049 (2000). * (29) E. Knill and R. Laflamme, Phys. Rev. Lett. **81**, 5672 (1998). * (30) D.P. Chi, J. Kim, and S. Lee, J. Phys. A **34**, 5251 (2001). * (31) J. Kim, S. Lee and D.P. Chi, J. Phys. A **35**, 6911 (2002). * (32) D.P. Chi, J.S. Kim, and S. Lee, eprint quant-ph/0504173. * (33) A. Shabani and D.A. Lidar, Phys. Rev. A **71**, 020101(R) (2005). * (34) P. Zanardi and M. Rasetti, Mod. Phys. Lett. B **11**, 1085 (1997). * (35) D.A. Lidar, D. Bacon and K.B. Whaley, Phys. Rev. Lett. **82**, 4556 (1999). * (36) G.M. Palma, K.-A. Suominen and A.K. Ekert, Proc. Roy. Soc. London Ser. A **452**, 567 (1996). * (37) L.-M Duan and G.-C. Guo, Phys. Rev. Lett. **79**, 1953 (1997). * (38) P. Zanardi and F. Rossi, Phys. Rev. Lett. **81**, 4752 (1998). * (39) P. Zanardi and D.A. Lidar, Phys. Rev. A **70**, 012315 (2004). * (40) L. Viola, E. Knill, Phys. Rev. A **68**, 032311 (2003). * (41) S. Nakajima, Prog. Theor. Phys. **20**, 948 (1958). * (42) R. Zwanzig, J. Chem. Phys. **33**, 1338 (1960). * (43) H.-P. Breuer and F. Petruccione, _The Theory of Open Quantum Systems_ (Oxford University Press, Oxford, 2002). * (44) K. Kraus, _States, Effects and Operations_, _Fundamental Notions of Quantum Theory_ (Academic, Berlin, 1983). * (45) V. Gorini, A. Kossakowski and E.C.G Sudarshan, J. Math. Phys. **17**, 821 (1976). * (46) G. Lindblad, Commun. Math. Phys. **48**, 119 (1976). * (47) R. Alicki and K. Lendi, _Quantum Dynamical Semigroups and Applications_, No. 286 in _Lecture Notes in Physics_ (Springer-Verlag, Berlin, 1987). * (48) E.B. Davies, Rep. Math. Phys. **11**, 169 (1977). * (49) D.A. Lidar, A. Shabani, and R. Alicki, Chem. Phys., in press (2005), eprint quant-ph/0411119. * (50) D.A. Lidar, Z. Bihary, and K.B. Whaley, Chem. Phys. **268**, 35 (2001). * (51) P. Zanardi, Phys. Rev. A **57**, 3276 (1998). * (52) N.P. Landsman, eprint math-ph/9807030. * (53) D.W. Kribs, Proc. Edinburgh Math. Soc. **46**, 421 (2003). * (54) J. Dalibard, Y. Castin, and K. Molmer, Phys. Rev. Lett. **68**, 580 (1992). * (55) N. Gisin, I.C. Percival, J. Phys. A **25**, 5677 (1992). * (56) M. Plenio and P. Knight, Rev. Mod. Phys. **70**, 101 (1998). * (57) L.-A. Wu, M.S. Byrd, D.A. Lidar, Phys. Rev. Lett. **89**, 127901 (2002). * (58) M.S. Byrd, D.A. Lidar, L.-A. Wu, and P. Zanardi, Phys. Rev. A **71**, 052301 (2005). * (59) D. J. Wineland, C. Monroe, W. M. Itano, D. Leibfried, B. E. King, and D. M. Meekhof, J. of Res. of the National Inst. of Standards and Technology **103**, 259 (1998), [http://nvl.nist.gov/pub/nistpubs/ires/103/3/cnt103-3.htm](http://nvl.nist.gov/pub/nistpubs/ires/103/3/cnt103-3.htm). * (60) S. Daffer, K. Wodkiewicz, J.D. Cresser, J.K. McIver, Phys. Rev. A **70**, 010304 (2004).
We introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization. We derive a new set of conditions for the existence of DFSs within this generalized framework. By relaxing the initialization requirement we show that a DFS can tolerate arbitrarily large preparation errors. This has potentially significant implications for experiments involving DFSs, in particular for the experimental implementation, over DFSs, of the large class of quantum algorithms which can function with arbitrary input states. pacs: 03.67.Lx,03.65.Yz,03.65.Fd
# Geometry of spin-field coupling on the worldline

Holger Gies and Jens Hammerling

_Institut für theoretische Physik, Universität Heidelberg, Philosophenweg 16, D-69120 Heidelberg, Germany_

## 1 Introduction

The mapping of quantum field theoretic problems onto the language of quantum mechanics of point particles in the form of the worldline formalism [1] has become a powerful computational tool in recent years. The worldline approach, which can also be viewed as the field theoretic limit of string theory [2, 3, 4, 5, 6, 7], establishes a direct connection between a \"second-quantized\" and a \"first-quantized\" formalism. Particularly for correlators in background fields, computations simplify drastically with worldline techniques [8, 9]. The relation between field theory and quantum particle mechanics can best be illustrated by the worldline representation of a scalar field's propagator in Euclidean spacetime, \\[G(x_{2},x_{1})=\\int_{0}^{\\infty}dT\\,e^{-m^{2}T}\\,{\\cal N}\\int_{x(0)=x_{1}}^{x(T)=x_{2}}{\\cal D}x\\,e^{-\\frac{1}{4}\\int_{0}^{T}d\\tau\\,\\dot{x}^{2}(\\tau)}, \\tag{1}\\] where the integration parameter \\(T\\) is called propertime, and the path integral runs over all paths with fixed end points distributed by a Gaussian velocity weight. The resulting ensemble of paths can be viewed as the set of possible trajectories of the quantum field. This associates virtual fluctuations of a field with particle worldlines in coordinate space, which constitutes a highly intuitive picture for the nature of quantum fluctuations. Incidentally, the path integral with Gaussian velocity weight can also be represented by a sum over trajectories of a random walker [10]. The standard route to worldline representations of propagators for higher-spin fields proceeds with the aid of Grassmann-valued path integrals that encode the spin degrees of freedom as well as the corresponding algebra [11]. Though technically elegant and computationally powerful, this approach goes along with a loss of intuition: trajectories in Grassmann space are difficult to visualize. An alternative approach has been suggested in [12, 13] for \\(D=2,3\\) dimensions, where the information about fermionic spin can be encoded in terms of the \"Polyakov spin factor\". This spin factor acts as an insertion in the path integrand and depends solely on the worldline itself; for instance, in \\(D=2\\), it can be represented by the trace of a path-ordered exponential, \\[\\Phi_{\\rm Pol}[x]={\\rm tr}_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\omega_{\\rm Pol}},\\quad\\omega_{\\rm Pol,\\mu\\nu}=\\frac{1}{4}(\\dot{x}_{\\mu}\\ddot{x}_{\\nu}-\\ddot{x}_{\\mu}\\dot{x}_{\\nu}),\\quad\\dot{x}^{2}=1, \\tag{2}\\] where \\(\\sigma_{\\mu\\nu}=\\frac{{\\rm i}}{2}[\\gamma_{\\mu},\\gamma_{\\nu}]\\). Note that this representation holds for propertime-parameterized worldlines, \\(\\dot{x}^{2}=1\\). The latter property arises naturally in the so-called \"first-order\" formalism for fermions in which the Dirac operator acts linearly on spinor states. The Polyakov spin factor is not only a purely geometric quantity; it also has a topological meaning for closed worldlines. In \\(D=2\\), it equals \\((-1)^{n}\\), with \\(n\\) counting the number of twists of a closed loop. 
Generalizations of the Polyakov spin factor to higher dimensions reveal interesting relations to geometric quantities [13, 14, 15], such as torsion of the worldline in \\(D=3\\), Berry phases or the notion of a Wess-Zumino term for a bosonic worldline path integral. However, worldline calculations with fermions are most conveniently performed in the \"second-order\" formalism in which Dirac-algebra valued expressions are rewritten such that the Dirac operator always acts quadratically on spinor states [8, 9].1 The main advantage is that (at least for the symmetric part of the spectrum) spinorial properties always occur in the form of explicit spin-field couplings, such as the Pauli term \\(\\sim\\sigma_{\\mu\ u}F_{\\mu\ u}\\) in QED. The natural question arises as to whether the second-order formalism can also be supplemented with a spin-factor calculus, whether such a spin factor also has a topological meaning and whether it opens the door to new calculational strategies. Guided by the idea that gauge-field information can solely be covered by a description in terms of holonomies (Wilson loops), the existence of a spin factor can be anticipated. Footnote 1: For a detailed calculation in the first-order formalism, see, e.g., [16]. In the present work, we derive such a spin-factor representation for the second-order formalism, employing the loop-space approach to gauge theory [18, 19, 20] (this approach has recently witnessed a revival as an alternative strategy for quantizing gravity [21]). Concentrating on QED, we are able to rewrite the Pauli term as a geometric quantity, i.e., an insertion term that depends solely on the worldline of the fluctuating particle itself. We develop a spin-factor calculus for practical computations; as a concrete example, we rederive the famous Heisenberg-Euler effective action of QED [17]. In fact, we have not been able to identify a topological content similar to that of the first-order formalism for our spin factor. But a new geometric conclusion emerges from our formalism: it is the continuous but non-differentiable nature of the random worldlines that gives rise to the coupling between spin and external fields. By contrast, smooth worldlines, i.e., smooth trajectories of a virtual fluctuation, would not support any coupling between spinorial degrees of freedom and an external field. Particularly the \"zigzag\" motion of quantum mechanical worldlines mediates spin; smooth worldlines are indeed a set of measure zero for a quantum particle. The search for a spin-factor representation in the second-order formalism was initiated and advanced in a series of works [22, 23]. Therein, it was argued that the resulting spin factor has the same form as the Polyakov spin factor in \\(D=2\\), cf. Eq. (2). For concrete computations, an ad hoc regularization procedure was proposed to deal with possibly arising singularities [23] and was shown to work in a variety of nontrivial examples. As our results show unambiguously, the spin factor in the second-order formalism is not of the form of the classic Polyakov spin factor. It is particularly the singularity structure of our new spin factor in combination with that of the worldlines that dominates in the second-order formalism and gives rise to the new geometric interpretation. 
Apart from intrinsic reasons for a spin-factor formalism, our work is also motivated by the recent development of _worldline numerics_[24, 25], which combines the string-inspired worldline formalism with Monte-Carlo techniques; the result is a powerful and efficient algorithm for computing quantum amplitudes in general background fields that has found a variety of applications [26, 27, 28]. Since Monte-Carlo methods for computing path integrals rely on the positivity of the action, the representation of spin by Grassmann-valued integrals is of no use for worldline numerics. Even though fermionic worldline algorithms can be based on the conventional Pauli-term representations [26], the chiral limit becomes computationally demanding. Therefore, we expect that a spin-factor representation offers a new route to treating massless fermions with worldline numerics. In this work, we develop the spin-factor formalism by considering the fermionic determinant, which is part of any perturbatively renormalizable gauge-field theory with charged fermions. For simplicity, it suffices to deal with abelian gauge fields, which keeps the presentation more transparent. In this case, the fermionic determinant corresponds to the one-loop effective action for photons, i.e., the Heisenberg-Euler effective action. In Sect. 2, we derive the spin-factor representation within the second-order formalism for spinor QED. We elucidate the single steps in some detail, paying particular attention to subtleties induced by the non-analyticity of generic worldlines. In Sect. 3, we first develop a spin-factor calculus for performing efficient computations with the new spin factor. We apply this calculus to a rederivation of the Heisenberg-Euler action for constant fields. Furthermore, we combine our spin factor with a representation of the Dirac algebra in terms of Grassmann-valued path integrals. Finally, a nonperturbative application is given by deriving a worldline representation for the effective action of Heisenberg-Euler type to leading nontrivial order in \\(N_{\\rm f}\\) (quenched approximation). We summarize our conclusions in Sect. 4. ## 2 Spin-factor representation in QED ### QED effective action on the worldline Let us begin with the Euclidean one-loop effective action of QED, corresponding to the fermionic determinant [29], \\[\\Gamma^{1}_{\\rm eff}=-\\ln\\det\\left(-{\\rm i}D\\hskip-5.690551pt/-m\\right)=- \\frac{1}{2}\\ \\ln\\det(D\\hskip-5.690551pt/^{2}+m^{2}), \\tag{3}\\] where we have assumed the absence of a spectral asymmetry of the Dirac operator in the second step;2 the two representations of the determinant distinguish between first-order and second-order formalism. Using Schwinger's propertime method [31] together with a path-integral representation of the propertime transition amplitude, the second-order determinant transforms into the worldline representation, Footnote 2: In QED, this holds for parity-invariant formulations which exist in any dimension. In general, our formalism holds for the symmetric part of the Dirac spectrum; worldline representations for the asymmetric part have been discussed in [30]. \\[\\Gamma^{1}_{\\rm eff}=\\frac{1}{2}\\frac{1}{(4\\pi)^{D/2}}\\int_{0}^{\\infty}\\frac{ dT}{T^{1+D/2}}\\ e^{-m^{2}T}\\left\\langle W_{\\rm spin}[A]\\right\\rangle, \\tag{4}\\] where the brackets denote the expectation value with respect to a path integral over closed worldlines, \\[\\langle\\dots\\rangle=\\int_{x(0)=x(T)}{\\cal D}x\\ \\dots\\ e^{-\\frac{1}{4}\\int_{0}^{T}d \\tau\\,i^{2}(\\tau)}. 
\\tag{5}\\] We emphasize that the path integral is normalized such that \\(\\langle 1\\rangle=1\\). In Eq. (4), we have introduced the \"spinorial\" Wilson loop, \\[W_{\\rm spin}[A]=\\exp\\biggl{[}-i\\oint dx_{\\mu}A_{\\mu}(x)\\biggr{]}\\ {\\rm tr}_{\\gamma}{\\cal P}\\,\\exp\\biggl{[}\\frac{1}{2}\\int_{0}^{T}d\\tau\\ \\sigma_{\\mu\ u}F_{\\mu\ u}\\biggr{]}, \\tag{6}\\] where \\({\\cal P}\\) denotes path ordering with respect to the propertime. The first term is the standard Wilson loop, which can be viewed as the representation of an abstract loop operator. The last term is the spin-field coupling with the Pauli term, which is at the center of interest in the present work. Contrary to the standard Wilson loop, this last term is not worldline reparametrization invariant in the present formulation. Even though this is not essential for our investigation, a reparametrization invariant formulation can be constructed with the aid of an _einbein_ formalism [32]; also our results given below can straightforwardly be generalized to such an invariant formalism. ### Loop derivative The spin-field coupling can be rewritten with the aid of the coordinate-space representation of the loop derivative (also called the area derivative) [18, 19, 20], \\[\\frac{\\delta}{\\delta s_{\\mu\ u}(\\tau)}=\\lim_{\\epsilon\\to 0}\\int_{- \\epsilon}^{\\epsilon}d\\rho\\rho\\ \\frac{\\delta^{2}}{\\delta x_{\\mu}(\\tau+\\frac{\\rho}{2})\\delta x_{\ u}(\\tau-\\frac {\\rho}{2})}, \\tag{7}\\] which is analogous to the curvature tensor in the loop representation of gauge theory. Let us start with an identity that is well known in the loop space formulation of gauge theories [20], involving analytic functions \\(A_{\\mu}(x)\\) and \\(F_{\\mu\ u}(x)\\), \\[e^{-i\\oint dxA(x)}\\ {\\rm tr}_{\\gamma}{\\cal P}\\ e^{\\frac{1}{2}\\int_{0}^{T}d \\tau\\sigma F}={\\rm tr}_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma \\frac{\\delta}{\\delta s(\\tau)}}\\ e^{-i\\oint dxA(x)}. \\tag{8}\\] It is important to stress that this relation is defined on the set of holonomy-equivalence classes of loops in coordinate space, such that it holds also for continuous but nondifferentiable loops; Eq. (8) can furthermore be represented with discretized worldlines with the gauge potentials sitting on the links. More comments are in order: a crucial ingredient of the loop derivative is given by the \\(\\epsilon\\) limit. A nonzero contribution arises only if the worldline derivatives produce a specific singularity structure \\(\\sim\\dot{\\delta}(\\rho)\\), such that \\(\\int d\\rho\\,\\rho\\,\\dot{\\delta}(\\rho)=-1\\). Weaker singularities or smooth \\(\\rho\\) dependencies vanish in the limit \\(\\epsilon\\to 0\\). In Eq. (8), the required singularity structure is provided by the worldline derivatives acting on the gauge field and the line-integral measure. The fact that the loop derivative can be exponentiated rests on a property of the Wilson loop, namely, \\[\\left[\\frac{\\delta}{\\delta s},\\frac{\\delta}{\\delta s}\\right]\\,e^{-i\\oint dxA( x)}=0, \\tag{9}\\] which holds only for the class of so-called Stokes-type functionals, as introduced in [19]. Finally, the proof of Eq. (8) (as well as that of Eq. (9)) requires a few smoothness assumptions for worldline-dependent expressions. Whether or not they are satisfied is a priori far from obvious, since we need these identities within the worldline integral, but the worldlines are generically continuous but non-differentiable. 
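To make the content of Eq. (8) more explicit, expanding both sides to first order in \\(\\sigma\\) shows that the loop derivative acts on the abelian Wilson loop according to the Mandelstam-type formula of the loop-space approach [18, 19, 20],
\\[\\frac{\\delta}{\\delta s_{\\mu\\nu}(\\tau)}\\;e^{-i\\oint dxA(x)}=-iF_{\\mu\\nu}(x(\\tau))\\;e^{-i\\oint dxA(x)},\\]
so that \\(\\frac{i}{2}\\sigma_{\\mu\\nu}\\frac{\\delta}{\\delta s_{\\mu\\nu}(\\tau)}\\) inserts exactly the Pauli exponent \\(\\frac{1}{2}\\sigma_{\\mu\\nu}F_{\\mu\\nu}(x(\\tau))\\); together with the Stokes-type property (9), the exponentiated form (8) follows, provided the smoothness assumptions mentioned above hold. (This spelled-out first-order form is added here only as a guide; conventions are those of Eqs. (6)-(8).) Whether such manipulations survive for the generic, continuous but non-differentiable worldlines is precisely the question raised above.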
This question can most suitably be analyzed with the aid of the worldline Green's function, which reads [8, 9]: \\[\\langle x_{\\mu}(\\tau_{2})x_{\ u}(\\tau_{1})\\rangle\\equiv-\\delta_{ \\mu\ u}\\,G(\\tau_{2},\\tau_{1}),\\quad\\mbox{with}\\ G(\\tau_{2},\\tau_{1})=|\\tau_{2 }-\\tau_{1}|-\\frac{(\\tau_{2}-\\tau_{1})^{2}}{T}. \\tag{10}\\] The nonanalyticity of the worldlines becomes visible in the first term of the Green's function, involving the modulus. By Wick contraction, the worldline integral over general functionals of \\(x(\\tau)\\) can be reduced to (a series of) monomials of the Green's function and its derivatives. The following derivative is of particular importance: \\[\\langle\\ddot{x}_{\\mu}(\\tau_{2})\\dot{x}_{\ u}(\\tau_{1})\\rangle=2\\dot{\\delta}( \\tau_{2}-\\tau_{1})\\,\\delta_{\\mu\ u}, \\tag{11}\\] since the singularity structure \\(\\sim\\dot{\\delta}\\) suitable for the loop derivative occurs. Therefore, the proof that Eq. (8) also holds under the worldline integral can be completed by theobservation that all other terms occurring during the calculation do not involve Wick contractions of the type (11). The same statement applies to the proof of Eq. (9). Let us proceed with the spin-factor derivation by performing an infinite series of partial integrations that shifts the loop derivatives from the Wilson loop to the worldline kinetic term, yielding \\[\\langle W_{\\rm spin}[A]\\rangle=\\int{\\cal D}x(\\tau)\\left[{\\rm tr}_{\\gamma}{\\cal P }\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\frac{\\delta}{\\delta s(\\tau)}}\\ e^{-\\int_{0}^{T}d\\tau\\frac{ \\dot{a}^{2}(\\tau)}{4}}\\right]\\ e^{-i\\oint dx(x)}. \\tag{12}\\] No surface terms appear, since the worldlines, if stretched to infinity, have infinite kinetic action. Now the evaluation of the derivatives has to be performed with great care. We begin with the leading order of the exponential series, \\[\\left(\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\frac{\\delta}{\\delta s(\\tau)}\\right) \\biggl{(}e^{-\\int_{0}^{T}d\\tau\\frac{\\dot{a}^{2}(\\tau)}{4}}\\biggr{)}=\\biggl{(} \\frac{i}{2}\\int_{0}^{T}d\\tau\\ \\sigma\\omega(\\tau)\\biggr{)}\\biggl{(}e^{-\\int_{0}^{T}d\\tau\\frac{ \\dot{a}^{2}(\\tau)}{4}}\\biggr{)}, \\tag{13}\\] where we have defined \\[\\omega_{\\mu\ u}(\\tau):=\\frac{1}{4}\\lim_{\\epsilon\\to 0}\\int_{-\\epsilon}^{ \\ \\epsilon}d\\rho\\rho\\ \\ddot{x}_{\\mu}(\\tau+\\frac{\\rho}{2})\\ddot{x}_{\ u}(\\tau-\\frac{\\rho}{2}). \\tag{14}\\] It is this \\(\\omega\\) tensor that carries the information previously encoded in the field strength tensor. The \\(\\omega\\) tensor is significantly different from that of Polyakov's spin factor \\(\\omega_{{\\rm Pol},\\mu\ u}\\sim(\\ddot{x}_{\\mu}\\dot{x}_{\ u}-\\ddot{x}_{\ u}\\dot{x }_{\\mu})\\), arising in the first-order formalism (cf. Eq. (2)). For instance, \\(\\omega_{\\mu\ u}(\\tau)=0\\) for any smooth loop by virtue of the \\(\\epsilon\\) limit, whereas \\(\\omega_{{\\rm Pol},\\mu\ u}\\) is generally nonzero in this case. It is instructive to also study the second order in the loop derivative explicitly:3 Footnote 3: We suppress the path ordering symbol here; it can easily be reinstated at the end of the calculation. \\[\\left(\\frac{i}{2}\\int d\\tau\\sigma\\frac{\\delta}{\\delta s(\\tau)} \\right)^{2}e^{-\\frac{1}{4}\\int_{0}^{T}d\\tau\\dot{x}^{2}}\\] \\[=\\left\\{\\left(\\frac{i}{2}\\int d\\tau\\sigma\\omega\\right)^{2}\\right. 
\\tag{15}\\] \\[\\qquad\\left.-\\frac{1}{4}\\int d\\tau_{2}d\\tau_{1}\\sigma_{\\lambda \\kappa}\\sigma_{\\mu\ u}\\biggl{[}\\frac{\\delta\\omega_{\\mu\ u}(\\tau_{1})}{\\delta s _{\\lambda\\kappa}(\\tau_{2})}+\\lim_{\\epsilon\\to 0}\\ \\int_{-\\epsilon}^{ \\ \\epsilon}d\\eta\\eta\\frac{\\delta\\omega_{\\mu\ u}(\\tau_{1})}{\\delta x_{\\kappa}( \\tau_{2}-\\frac{\\eta}{2})}\\ddot{x}_{\\lambda}(\\tau_{2}+\\frac{\\eta}{2})\\biggr{]} \\right\\}e^{-\\frac{1}{4}\\int_{0}^{T}d\\tau\\dot{x}^{2}}.\\] Apart from the desired first term \\(\\sim\\omega^{2}\\), we observe the appearance of derivatives of \\(\\omega\\). The latter correspond to a nonvanishing right-hand side of the commutator \\([\\delta/\\delta s,\\delta/\\delta s]\\) acting on the kinetic action. This is in contrast to Eq. (9), and reveals that the kinetic action does not belong to the class of Stokes-type functionals. Proceeding to higher orders in the loop derivative, the result can be represented as \\[{\\rm tr}_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\frac{\\delta}{ \\delta s(\\tau)}}\\ e^{-\\int_{0}^{T}d\\tau\\frac{\\dot{a}^{2}(\\tau)}{4}}=\\left[{\\rm tr }_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\omega}+D[\\omega] \\right]e^{-\\int_{0}^{T}d\\tau\\frac{\\dot{a}^{2}(\\tau)}{4}}, \\tag{16}\\]where \\(D[\\omega]\\) is a functional of \\(\\omega\\) that collects all terms with at least one functional derivative of \\(\\omega\\). This functional can formally be defined by \\[D[\\omega]:=e^{\\int_{0}^{T}d\\tau\\frac{\\dot{x}^{2}(\\tau)}{4}}\\,{\\rm tr }_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\frac{\\delta}{s_{ \\sigma}(\\tau)}}\\ e^{-\\int_{0}^{T}d\\tau\\frac{\\dot{x}^{2}(\\tau)}{4}}-{\\rm tr}_{ \\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\omega}, \\tag{17}\\] with the last term simply subtracting the no-derivative terms. An explicit representation of \\(D[\\omega]\\) can be computed order by order in a series expansion in \\(\\omega\\); the first term, for instance, is given by the second term in the braces in Eq. (15). We would like to stress that \\(D[\\omega]\\) has been missed in the literature so far, e.g., see [22]. However, this functional is absolutely crucial for rendering the spin-factor representation well-defined, as will be discussed in the next section. ### Spin factor The representation of the spin information derived in Eq. (16) seems highly problematic. Let us recall from the definition of \\(\\omega\\) in Eq. (14) that \\(\\omega\\sim\\ddot{x}\\ddot{x}\\). Upon insertion into the worldline integrand, Wick contractions of the form \\(\\langle\\ddot{x}\\ddot{x}\\rangle\\) carrying a strong singularity structure \\(\\sim\\ddot{\\delta}\\) will necessarily appear, cf. Eq. (11). Such singularities can survive the \\(\\epsilon\\) limits and potentially render the expressions ill-defined. In fact, we will now prove that all singularities of the type \\(\\sim\\ddot{\\delta}\\) cancel exactly against the functional \\(D[\\omega]\\) occurring in Eq. (16). This can straightforwardly be derived from the zero-field limit of Eq. 
(12) for which the Wilson-loop expectation value is normalized to 1, \\[1 = \\langle W_{\\rm spin}[A=0]\\rangle=\\int{\\cal D}x(\\tau)\\,{\\rm tr}_{ \\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\frac{\\delta}{s_{\\sigma }(\\tau)}}\\ e^{-\\int_{0}^{T}d\\tau\\frac{\\dot{x}^{2}(\\tau)}{4}} \\tag{18}\\] \\[= \\int{\\cal D}x(\\tau)\\ \\left[{\\rm tr}_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2} \\int_{0}^{T}d\\tau\\sigma\\omega}+D[\\omega]\\right]\\ e^{-\\int_{0}^{T}d\\tau\\frac{ \\dot{x}^{2}(\\tau)}{4}},\\] where we have used Eqs. (16) and (17) in the last step. Even without reference to the zero-field limit, we could have straighforwardly proven this identity by noting that \\[\\int{\\cal D}x(\\tau)\\left(\\frac{\\delta}{\\delta x_{\\mu}(\\tau)}\\right)^{n}e^{- \\int_{0}^{T}d\\tau\\frac{\\dot{x}^{2}(\\tau)}{4}}=0,\\quad n\\geq 1 \\tag{19}\\] vanishes as a total derivative; we recall that the pure Gaussian velocity integral is normalized to 1. In the language of Wick contractions, we make the important observation from Eq. (18) that \\(\\langle D[\\omega]\\rangle\\) corresponds to the self-contractions of the \\(\\omega\\) exponential:4 Footnote 4: To a given order, this can algebraically be confirmed by direct computation; for an explicit second-order calculation, see Appendix A and [33]. \\[\\langle D[\\omega]\\rangle = 1-\\left\\langle{\\rm tr}_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{ T}d\\tau\\sigma\\omega}\\right\\rangle.\\] Representing the worldline operators \\(x(\\tau)\\) in Fourier space by Fock-space creation and annihilation operators of Fourier modes (cf. Eq. (27) below), the removal of self-contractionsof any expression can be implemented by normal ordering of the Fock-space operators; thus, we arrive at \\[1 = \\left\\langle{\\rm tr}_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d \\tau\\sigma\\omega}+D[\\omega]\\right\\rangle\\equiv\\left\\langle{\\rm tr}_{\\gamma}{ \\cal P}\\ :e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\omega}:\\right\\rangle, \\tag{20}\\] where the colons denote the normal-ordering prescription. This concludes our search for a spin-factor representation in the fermionic second-order formalism of the worldline approach. Upon insertion into Eq. (4), we obtain a representation of the one-loop contribution to the effective action for spinor QED, involving the purely geometrical spin factor, \\[\\Gamma^{1}_{\\rm eff}[A] = \\frac{1}{2}\\frac{1}{(4\\pi)^{D/2}}\\int_{0}^{\\infty}\\frac{dT}{T^{(1 +D/2)}}\\ e^{-m^{2}T}\\int{\\cal D}x(\\tau)\\ e^{-\\int_{0}^{T}d\\tau\\frac{\\dot{x}^{2 }(\\tau)}{4}}\\ e^{-i\\oint dxA(x)}\\ \\Phi[x], \\tag{21}\\] \\[{\\rm with}\\ \\Phi[x]:={\\rm tr}_{\\gamma}{\\cal P}:e^{\\frac{i}{2}\\int_{ 0}^{T}d\\tau\\,\\sigma\\omega(\\tau)}:,\\] \\[{\\rm and}\\ \\omega_{\\mu\ u}(\\tau)=\\frac{1}{4}\\lim_{\\epsilon\\to 0} \\int_{-\\epsilon}^{\\ \\ \\epsilon}d\\rho\\rho\\ \\ddot{x}_{\\mu}(\\tau+\\frac{\\rho}{2})\\ddot{x}_{\ u}(\\tau- \\frac{\\rho}{2}). \\tag{22}\\] An obvious advantage of this representation consists in the fact that the dependence on the external gauge field occurs solely in the form of a Wilson loop (holonomy). An explicit spin-field coupling no longer appears, but spin information is extracted from the geometric properties of the worldlines themselves. Let us emphasize once more that a non-zero spin contribution is generated only by specific singularity structures, arising from the continuous but non-differentiable nature of generic worldlines. 
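To give an impression of how the purely bosonic building blocks of the representation (21) can be handled by the worldline numerics mentioned in the introduction, the following stand-alone script (an illustrative sketch under simplifying assumptions, not the authors' algorithm) samples closed loops with the Gaussian velocity weight and measures the plain Wilson loop for a constant magnetic field \\(B\\); the result can be compared with the constant-field value \\(BT/\\sinh(BT)\\) that appears as the scalar part in the Heisenberg-Euler calculation of Sect. 3. The normal-ordered spin-factor insertion itself is not sampled here; its numerical treatment is exactly the open issue discussed in the conclusions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, B, nloops = 256, 1.0, 2.0, 20000   # points per loop, propertime, field, ensemble size

# Closed loops distributed with weight exp(-(1/4) \int_0^T dtau xdot^2):
# i.i.d. Gaussian increments of variance 2T/N per component, projected to sum to zero.
dx = rng.normal(0.0, np.sqrt(2.0*T/N), size=(nloops, N, 2))
dx -= dx.mean(axis=1, keepdims=True)     # enforce closure of each loop
x = np.cumsum(dx, axis=1)                # loop points in the 1-2 plane

# Constant B in symmetric gauge: -i * \oint dx.A = -i * B * (signed area of the loop)
x1, x2 = x[..., 0], x[..., 1]
area = 0.5*np.sum(x1*np.roll(x2, -1, axis=1) - np.roll(x1, -1, axis=1)*x2, axis=1)
W = np.cos(B*area)                       # real part of the Wilson loop

print(W.mean(), B*T/np.sinh(B*T))        # MC estimate vs. 2/sinh(2) ~ 0.551
```

For the parameters chosen, the two numbers agree at the per-cent level. The spinorial factor would require, in addition, a discretized version of \\(\\omega_{\\mu\\nu}\\) together with the removal of its self-contractions.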
## 3 Application of the spin factor ### Spin-factor calculus Next we explore the applicability of the new spin factor in concrete QED examples. At a first glance, the representation of the effective action (21) seems to be disadvantageous; in particular, concrete computations may be plagued by technical difficulties associated with normal ordering. Moreover, even perturbative amplitudes to finite order in \\(A_{\\mu}\\) seemingly receive contributions from terms with arbitrarily high products of worldline monomials: expanding the spin-factor and Wilson-loop exponentials, we find, for instance, terms of the form, \\(\\langle\\omega^{n}\\dot{x}A(x)\\rangle\\sim\\langle(\\ddot{x}\\ddot{x})^{n}\\dot{x}A( x)\\rangle\\), \\(n\\) arbitrary. Nevertheless, it can be shown that many of these apparent high-order contributions cancel each other and that practical calculations actually boil down to roughly the same amount of technical work as in the standard formalism. In view of the variety of possible worldline monomials arising from the expansion of the Wilson loop, the spin factor and the corresponding self-contractions (hidden behind the normal ordering), we do not attempt to give a full account of all possible structures and cancellation mechanisms. Instead, we will pick out all those terms that, upon Wick contraction, lead us back to the full result for the effective action in standard representation. As a result, all possible other terms ultimately have to cancel each other. Let us start with a new operational symbol \\(\\{ \\}_{\\omega}^{\\oint A}\\) that characterizes a subclass of Wick contractions: the \\(\\{ \\}_{\\omega}^{\\oint A}\\) bracket denotes the restriction that, among the manifold contractions arising from Wick's theorem, only those terms have to be accounted for which are _complete_ contractions of one \\(\\sigma\\omega\\) with _one and the same_\\(\\oint dxA(x)\\) factor. This already excludes many Wick contractions, in particular, those where the two \\(\\ddot{x}\\)'s out of one \\(\\omega_{\\mu\ u}\\) are either self-contracted or contracted with two different objects (be it gauge fields or other \\(\\omega\\)'s). It turns out that all these terms of the latter type cancel each other or vanish by the \\(\\epsilon\\) limit. Using the Schwinger-Fock gauge, \\[A_{\\alpha}(x(\\tau)) = \\frac{1}{2}x_{\\lambda}(\\tau)F_{\\lambda\\alpha}(0)+\\frac{1}{3}x_{ \\lambda}(\\tau)x_{\\sigma}(\\tau)\\ \\partial_{\\sigma}F_{\\lambda\\alpha}(0)+ \\tag{23}\\] \\[= \\sum_{n=0}^{\\infty}\\frac{x^{\\lambda}x^{\ u_{1}}\\cdot\\cdot x^{\ u _{n}}}{n!(n+2)}\\ \\partial_{\ u_{1}}\\cdot\\cdot\\cdot\\partial_{\ u_{n}}F_{\\lambda\\alpha}\\,\\] the subclass of \\(\\{ \\}_{\\omega}^{\\oint A}\\) contractions of the Wilson-loop exponential with an \\(\\omega\\) term can straightforwardly be computed order by order in the series (23). The resulting series is identical to the Taylor series of the field strength tensor, which can be summed up to yield \\[\\left\\{\\frac{i}{2}\\int d\\tau\\sigma\\omega\\ (-i)\\int d\\tau\\ \\dot{x}_{\ u}(\\tau)A_{ \ u}(x(\\tau))\\right\\}_{\\omega}^{\\oint A}=\\frac{1}{2}\\int d\\tau\\sigma_{\\mu\ u} F^{\\mu\ u}(x(\\tau)). \\tag{24}\\] Since the operation of Wick contractions of bosonic fields satisfies the elementary rules of a derivation, the same holds for the \\(\\{ \\}_{\\omega}^{\\oint A}\\) symbol. 
With this observation (or with straightforward combinatorics), it follows that \\[\\left\\{{\\rm tr}_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\omega (\\tau)}\\ e^{-i\\int_{0}^{T}d\\tau\\dot{x}A(x)}\\right\\}_{\\omega}^{\\oint A}=e^{-i \\int_{0}^{T}d\\tau\\dot{x}A(x)}\\ {\\rm tr}_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\ \\sigma F}. \\tag{25}\\] This tells us immediately that it is the subclass of Wick contractions described by the \\(\\{ \\}_{\\omega}^{\\oint A}\\) symbol which already gives us back the full result for the Pauli term. The resulting recipe is: the spin factor can only contribute if a factor \\(\\sim\\int_{0}^{T}\\sigma\\omega(\\tau)\\) is completely Wick contracted with a factor \\(\\sim\\oint dxA(x)\\) from the Wilson loop. Since the \\(\\omega\\)-independent Wick contractions still have to be performed, the expectation value of the \"spinorial\" Wilson loop can finally be written as \\[\\langle W_{\\rm spin}\\rangle=\\left\\langle\\left\\{{\\rm tr}_{\\gamma}{\\cal P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\omega( \\tau)}\\ e^{-i\\int_{0}^{T}d\\tau\\dot{x}A(x)}\\right\\}_{\\omega}^{\\oint A}\\right\\rangle. \\tag{26}\\] Note that this recipe also dispenses with a consideration of normal ordering or a detailed analysis of the self-contraction terms, since these do not contribute to the \\(\\{ \\}_{\\omega}^{\\oint A}\\) bracket by construction. Beyond its definition via partial Wick contractions, the \\(\\{ \\}_{\\omega}^{\\oint A}\\) symbol can more abstractly be used as a projector that removes all terms generated by self-contractions of \\(\\omega\\) or mixed contractions as specified above. As such, the \\(\\{ \\}_{\\omega}^{\\oint A}\\) symbol is a linear operator that can formally be interchanged with the (regularized) worldline integral. This viewpoint will be exploited below. The spin-factor calculus developed here has a physical interpretation: the spin factor is only operating at those space-time points where the fluctuating particle interacts with the external field. The spin of the fluctuation does not generate self-interactions of the fluctuation with its own worldline, nor does spin interact nonlocally with the external field at two different spacetime points simultaneously. In the following section we demonstrate the applicability of the spin-factor calculus by rederiving the classic Heisenberg-Euler effective action with this new formalism. ### Heisenberg-Euler action As a concrete example, let us compute the one-loop effective action for a constant background field, i.e., the Heisenberg-Euler effective action for soft photons. We describe the background field, which is constant in space and time but otherwise arbitrary, by the gauge potential \\(A_{\\mu}=-(1/2)F_{\\mu\ u}x_{\ u}\\). As a first simplification, we note that path ordering is irrelevant for a constant field. Furthermore, we observe that the path integral becomes Gaussian, since both Wilson-loop as well as spin-factor exponents depend quadratically on \\(x\\). The propertime derivatives become diagonal in Fourier space where the worldlines are represented as \\[x_{\\mu}(\\tau)=\\sum_{n=-\\infty}^{\\infty}\\frac{1}{\\sqrt{T}}\\;a_{n\\mu}\\;e^{\\frac {2\\pi in\\tau}{T}}. \\tag{27}\\] The fact that \\(x_{\\mu}\\in\\mathbb{R}^{D}\\) translates into the reality condition \\(a_{-n\\mu}^{*}=a_{n\\mu}\\). 
In terms of the \\(a_{n\\mu}\\) variables, the worldline integral becomes \\[\\langle W_{\\rm spin}\\rangle=\\int{\\cal D}a\\;{\\rm tr}_{\\gamma}\\left\\{e^{-\\frac{ 1}{2}\\sum_{n}a_{\\mu n}^{*}\\left(\\frac{1}{2}\\left(\\frac{2\\pi}{T}\\right)^{2}n^{2 }\\delta_{\\mu\ u}-\\left(\\frac{2\\pi n}{T}\\right)F_{\\mu\ u}+\\frac{1}{2}\\left( \\frac{2\\pi n}{T}\\right)^{2}\\sigma_{\\mu\ u}\\ g_{n}(\\epsilon)\\right)a_{n\ u}} \\right\\}_{\\omega}^{\\oint A}, \\tag{28}\\] with \\[g_{n}(\\epsilon)=\\left(\\frac{2\\pi n}{T}\\epsilon\\right)\\cos\\left(\\frac{2\\pi n}{ T}\\epsilon\\right)-\\sin\\left(\\frac{2\\pi n}{T}\\epsilon\\right), \\tag{29}\\] arising from the Fourier transform of the spin factor. Here and in the following, the limit \\(\\epsilon\\to 0\\) is implicitly understood. In Eq. (28), we can separate off the Fourier zero mode \\(n=0\\), i.e., the worldline center of mass, corresponding to the spacetime integration of the effective action. We obtain \\[\\langle W_{\\rm spin}\\rangle=\\int d^{D}x\\int{\\cal D}a\\;{\\rm tr}_{\\gamma}\\left\\{ e^{-\\frac{1}{2}\\sum_{n}^{\\prime}a_{\\mu n}^{*}\\ M_{\\mu\ u}\\ a_{\ u n}}\\right\\}_{\\omega}^{\\oint A}=\\int d^{D}x\\,{ \\rm tr}_{\\gamma}\\left\\{{\\rm det}^{\\prime}{}^{-\\frac{1}{2}}\\left[\\frac{M}{M_{0} }\\right]\\right\\}_{\\omega}^{\\oint A},\\] where \\(M\\) denotes the quadratic fluctuation operator in the exponent of Eq. (28). The operator \\(M_{0}\\) abbreviates \\(M\\) in the limit \\(F_{\\mu\ u}\\to 0\\) and the formal limit5\\(g_{n}(\\epsilon)\\to 0\\); the appearance of \\(M_{0}\\) implements the correct normalization of the path integral. The prime indicates the absence of the \\(n=0\\) zero mode. Exponentiating the determinant results in \\[\\left\\{\\det^{\\prime-\\frac{1}{2}}\\!\\frac{M}{M_{0}}\\right\\}_{\\omega}^{ \\oint\\,A} = \\exp\\left[-\\frac{1}{2}\\left\\{\\sum_{n}{}^{\\prime}\\,\\,\\mbox{tr}_{ \\rm L}\\ln\\left(\\mathbb{1}\\,-2\\left(\\frac{T}{2\\pi n}\\right)F+\\sigma g_{n}( \\epsilon)\\right)\\right\\}_{\\omega}^{\\oint\\,A}\\right] \\tag{30}\\] \\[= \\exp\\left[\\sum_{n=1}^{\\infty}\\,\\,\\mbox{tr}_{\\rm L}\\sum_{m=1}^{ \\infty}\\frac{1}{2m}\\left\\{\\left(2\\left(\\frac{T}{2\\pi n}\\right)F-\\sigma g_{n}( \\epsilon)\\right)^{2m}\\right\\}_{\\omega}^{\\oint\\,A}\\right],\\] where we have expanded the logarithm in the last step. Now we use the binomial sum for the term in the \\(\\{ \\}_{\\omega}^{\\oint\\,A}\\) symbol, \\[\\{\\%\\}_{\\omega}^{\\oint\\,A}:=\\left\\{\\left(2\\left(\\frac{T}{2\\pi n}\\right)F- \\sigma g_{n}(\\epsilon)\\right)^{2m}\\right\\}_{\\omega}^{\\oint\\,A}=\\sum_{k=0}^{m} \\left(\\begin{array}{c}2m\\\\ k\\end{array}\\right)\\left(2\\left(\\frac{T}{2\\pi n}\\right)F\\right)^{2m-k}\\left( \\sigma g_{n}(\\epsilon)\\right)^{k}.\\] Here, the \\(\\{ \\}_{\\omega}^{\\oint\\,A}\\) symbol has by definition removed all those terms for which at least one \\(\\sigma g_{n}(\\epsilon)\\) term cannot be paired with an \\(F\\) term. This reduces the upper limit of the sum from \\(2m\\) to \\(m\\). Furthermore, we have used that in the constant field case \\([F,\\sigma]=0\\); therefore, \\(F\\) and \\(\\sigma\\) can be arranged in arbitrary order. 
We decompose this sum further by separating off the \\(k=0\\) and \\(k=m\\) terms, \\[\\{\\%\\}_{\\omega}^{\\oint\\,A}=\\underbrace{\\left(\\frac{TF}{\\pi n}\\right)^{2m}}_{ \\rm(I)}+\\underbrace{\\left(\\begin{array}{c}2m\\\\ m\\end{array}\\right)\\left(\\frac{T}{\\pi n}F\\sigma g_{n}(\\epsilon)\\right)^{m}}_{ \\rm(II)}+\\underbrace{\\sum_{k=1}^{m-1}\\left(\\begin{array}{c}2m\\\\ k\\end{array}\\right)\\left(\\frac{TF}{\\pi n}\\right)^{2m-k}\\!\\!\\left(\\sigma g_{n}( \\epsilon)\\right)^{k}.}_{\\rm(III)}. \\tag{31}\\] The first term (I) carries no spin information; this scalar part obviously corresponds to the contribution that we would equally encounter in scalar QED. The second term (II) represents a perfect pairing of spin factor and field-strength contribution; it will turn out to contain the entire spinorial information. The remaining sum (III) has always at least one unpaired \\(F\\) term, even for \\(k=m-1\\). As will be demonstrated below, this sum vanishes completely in the \\(\\epsilon\\) limit, owing to its too-weak singularity structure. Let us now compute the various pieces of Eq. (31) separately. Let us first consider the scalar part, substituting the first term (I) of Eq. (31) into Eq. (30); we take over the result of this standard calculation from [8, 9], \\[\\mbox{(I)}:\\qquad\\exp\\left[\\sum_{n=1}^{\\infty}\\,\\,\\mbox{tr}_{L}\\sum_{m=1}^{ \\infty}\\frac{1}{2m}\\left(\\frac{TF}{\\pi n}\\right)^{2m}\\right]=\\det^{-\\frac{1}{ 2}}\\left(\\frac{\\sin(FT)}{FT}\\right),\\] where the remaining determinant refers to the Lorentz structure. For instance, for the constant \\(B\\) field case, this reduces to \\(BT/\\sinh BT\\). Next, we consider the spinor contributions in some detail; this part of the spin-factor-based Heisenberg-Euler calculation is genuinely new. The spinor part induced by substitution of the second term (II) of Eq. (31) into Eq. (30) can be written as \\[\\mbox{(II)}:\\qquad\\exp\\left[\\sum_{m=1}^{\\infty}\\frac{(2m-1)!}{(m!)^{2}}\\left( \\frac{T}{\\pi}\\right)^{m}\\mbox{tr}_{L}\\left(F\\sigma\\right)^{m}\\sum_{n=1}^{ \\infty}\\frac{g_{n}(\\epsilon)}{n^{m}}\\right]. \\tag{32}\\] Let us discuss the Fourier sum for different values of \\(m\\), using the definition of \\(g_{n}(\\epsilon)\\) in Eq. (29), \\[S_{m}:=\\lim_{\\epsilon\\to 0}\\sum_{n=1}^{\\infty}\\frac{g_{n}(\\epsilon)}{n^{m}}= \\lim_{\\epsilon\\to 0}\\sum_{n=1}^{\\infty}\\frac{\\left(\\left(\\frac{2\\pi n \\epsilon}{T}\\right)\\cos\\!\\frac{2\\pi n}{T}\\epsilon-\\sin\\!\\frac{2\\pi n}{T} \\epsilon\\right)}{n^{m}}. \\tag{33}\\] For \\(m=1\\), we have \\[S_{1} = \\lim_{\\epsilon\\to 0}\\left[\\sum_{n=1}^{\\infty}\\epsilon\\;\\frac{d}{d \\epsilon}\\frac{\\sin\\frac{2\\pi n}{T}\\epsilon}{n}-\\sum_{n=1}^{\\infty}\\frac{\\sin \\frac{2\\pi n}{T}\\epsilon}{n}\\right] \\tag{34}\\] \\[= \\lim_{\\epsilon\\to 0}\\left[\\epsilon\\;\\frac{d}{d\\epsilon}\\left( \\frac{\\pi-\\frac{2\\pi}{T}\\epsilon}{2}\\right)-\\left(\\frac{\\pi-\\frac{2\\pi}{T} \\epsilon}{2}\\right)\\right]\\] \\[= -\\frac{\\pi}{2}.\\] Let us stress that this nonzero contribution survives the \\(\\epsilon\\) limit, since the Fourier sum results in a nonanalytic function (resembling a saw-tooth profile). This agrees with our general observation that the spin information is encoded in the nonanalytic behavior of the worldline trajectory in spacetime. In fact, the \\(m=1\\) contribution is the only nonvanishing term; all \\(S_{m}\\) for \\(m>1\\) as well as all contributions arising from term (III) in Eq. 
(31) are zero in the limit \\(\\epsilon\\to 0\\), as is shown in Appendix B. The whole spinor contribution is that of Eq. (32), boiling down to \\(\\exp\\left(-(T/2)\\,\\mbox{tr}_{\\rm L}[F\\sigma]\\right)\\). The spinorial Wilson-loop expectation value thus becomes \\[\\langle W_{\\rm spin}\\rangle=\\int d^{D}x\\,\\mbox{det}^{-\\frac{1}{2}}\\left( \\frac{\\sin(FT)}{FT}\\right)\\,\\mbox{tr}_{\\gamma}\\,e^{-\\frac{T}{2}\\,\\mbox{tr}_{ \\rm L}[F\\sigma]}=4\\int d^{D}x\\,\\mbox{det}^{-\\frac{1}{2}}\\left(\\frac{\\tan(FT)} {FT}\\right), \\tag{35}\\] where the Dirac trace has been taken in the last step. For instance, for a constant \\(B\\) field, the last line reads \\(4\\int d^{D}x\\,BT/\\tanh BT\\). Inserting this final result into Eq. (4), we arrive at the (unrenormalized) Heisenberg-Euler action [17, 31], \\[\\Gamma^{1}_{\\rm eff}[A]=\\frac{2}{(4\\pi)^{D/2}}\\int_{0}^{\\infty}\\frac{dT}{T^{(1 +D/2)}}\\;e^{-m^{2}T}\\,\\mbox{det}\\,^{-1/2}\\left(\\frac{\\tan FT}{FT}\\right). \\tag{36}\\] We would like to stress that the present derivation of this well-known result is independent of other standard calculational techniques, as far as the spinor part is concerned. The spinor contribution arises from the subtle interplay between the purely geometric spin factor and the Wilson loop. Non-zero contributions arise only from terms with a particular singularity structure. Since these singularities cannot arise from smooth worldlines, we conclude that the random zigzag course of the worldlines is an essential ingredient for the coupling between spin and fields. ### Spin factor with Grassmann variables In the standard approaches to describing fermionic degrees of freedom, spin information is encoded in additional Grassmann-valued path integrals. One motivation for the spin-factor representation has been to find a purely bosonic description devoid of both an explicit spin-field coupling and additional Grassmann variables. But since the latter two criteria are independent of each other, we can combine our spin-factor representation with Grassmann variables, in order to make use of the elegant formulation of the Dirac algebra and the path ordering by means of anti-commuting worldline variables. For instance, the standard worldline formulation for the one-loop effective action of QED in terms of a Grassmannian path integral is given by [8, 9] \\[\\Gamma^{1}_{\\rm eff}[A]=\\frac{1}{2}\\int_{0}^{\\infty}\\frac{dT}{T}\\ e^{-m^{2}T} \\int_{\\rm p.}{\\cal D}x\\int_{\\rm a.p.}{\\cal D}\\psi\\ e^{-\\int_{0}^{T}d\\tau L_{\\rm spin }}, \\tag{37}\\] with \\[L_{\\rm spin}=\\frac{1}{4}\\dot{x}^{2}+\\frac{1}{2}\\psi_{\\mu}\\dot{\\psi}^{\\mu}+i\\ \\dot{x}^{\\mu}A_{\\mu}-i\\ \\dot{\\psi}^{\\mu}F_{\\mu\ u}\\psi^{\\mu}. \\tag{38}\\] The path integrals satisfy either periodic (p.) or anti-periodic (a.p.) boundary conditions, depending on their statistics. Starting from this representation, our line of reasoning can immediately be applied, resulting in the following new expression for the QED action: \\[\\Gamma^{1}_{\\rm eff}[A]=-\\frac{1}{2}\\int_{0}^{\\infty}\\frac{dT}{T}\\ e^{-m^{2}T} \\int_{\\rm p.}{\\cal D}x\\int_{\\rm a.p.}{\\cal D}\\psi\\ e^{-\\int d\\tau\\frac{\\dot{x} ^{2}}{4}}e^{-i\\oint dxA}e^{-\\int d\\tau\\frac{\\dot{\\psi}\\dot{\\psi}}{2}}:e^{-\\int d \\tau\\ \\dot{\\psi}\\omega\\psi}:. \\tag{39}\\] Normal ordering takes care of the removal of \\(\\omega\\) self-contractions of the spin factor, whereas the path ordering is automatically guaranteed by the Grassmann integral. 
An interesting question of this representation concerns the fate of supersymmetry. Whereas the standard representation has a worldline supersymmetry, the supersymmetry is not manifest in the present formulation (the Wilson-loop exponent and the Pauli term are supersymmetric partners in Eq. (38)). ### Nonperturbative worldline dynamics The derivation of nonperturbative worldline expressions is an application where our spin-factor representation becomes highly advantageous. So far, we have considered perturbative diagrams involving one charged fermion loop, but no photon fluctuations. Promoting the fermions to a Dirac spinor with \\(N_{\\rm f}\\) flavor components, the functional integral over photon fluctuations becomes Gaussian in leading nontrivial order in a small-\\(N_{\\rm f}\\) expansion. In scalar QED, this gauge-field integral can be done straightforwardly, since the worldline-gauge-field coupling occurs simply in the form of the Wilson loop, which is a bosonic-current interaction. In the literature, the leading-order \\(N_{\\rm f}\\) expansion has already been used in early works on worldline techniques for scalar QED [1, 34]. For instance, thenonperturbative effective action of Heisenberg-Euler type in this approximation reads for scalar QED, \\[\\Gamma^{\\rm Scalar\\ QED}_{\\rm QA}[A_{\\mu}]=\\int_{x}\\frac{1}{4e^{2}}F_{\\mu\ u}F_{ \\mu\ u}-\\frac{N_{\\rm f}}{(4\\pi)^{D/2}}\\int_{0}^{\\infty}\\frac{dT}{T^{1+D/2}} \\Big{\\langle}e^{-i\\oint dx\\cdot A}\\,e^{-\\frac{e^{2}}{2}\\int_{0}^{T}d\\tau_{1}d \\tau_{2}\\dot{x}_{1_{\\mu}}\\Delta_{\\mu\ u}(x_{1},x_{2})\\dot{x}_{2_{\ u}}}\\Big{\\rangle}, \\tag{40}\\] where \\(\\langle\\dots\\rangle\\) again represents the worldline average as defined in Eq. (5). For a detailed derivation of Eq. (40), see [35]. The subscript \"QA\" refers to the leading-order \\(N_{\\rm f}\\) expansion as the \"quenched approximation\", since diagrams with further charged loops are neglected. In Eq. (40), we have abbreviated \\(x_{1,2}\\equiv x(\\tau_{1,2})\\) and employed the photon propagator, \\[\\Delta_{\\mu\ u}(x_{1},x_{2})=\\frac{\\Gamma(\\frac{D-2}{2})}{4\\pi^{D/2}}\\left[ \\frac{1+\\alpha}{2}\\frac{1}{|x_{1}-x_{2}|^{D-2}}+(\\tfrac{D}{2}-1)(1-\\alpha) \\frac{(x_{1}-x_{2})_{\\mu}(x_{1}-x_{2})_{\ u}}{|x_{1}-x_{2}|^{D}}\\right], \\tag{41}\\] in \\(D\\) dimensions with gauge parameter \\(\\alpha\\). The additional insertion term involving the photon propagator in the worldline average corresponds to all possible internal photon lines in the charged loop and carries the nonperturbative contribution. It can be shown that the quenched approximation is reliable for weak external fields, but for arbitrary values of the coupling.6 Footnote 6: In nonabelian gauge theories with \\(N_{\\rm c}\\) colors, the quenched approximation has also been shown to hold to leading order in a large-\\(N_{\\rm c}\\) expansion [12]. Applying the strategy of the quenched approximation to spinor QED, a further technical complication arises from the Pauli term. Even though the photon integral remains Gaussian, the worldline current becomes Dirac-algebra valued which has to be treated with greater care [37], see, e.g., [36] for Grassmann-valued representations. At this point, our spin-factor approach becomes elegant, since the worldline-gauge-field coupling is reduced to the Wilson loop. The derivation of the corresponding nonperturbative representations in spinor QED becomes identical to scalar QED. 
We can immediately write down the effective action to leading order in \\(N_{\\rm f}\\): \\[\\Gamma^{\\rm Spinor\\ QED}_{\\rm QA}[A_{\\mu}] = \\int_{x}\\frac{1}{4e^{2}}F_{\\mu\\nu}F_{\\mu\\nu}+\\frac{N_{\\rm f}}{2}\\frac{1}{(4\\pi)^{D/2}}\\int_{0}^{\\infty}\\frac{dT}{T^{1+D/2}}\\] \\[\\qquad\\times\\Big{\\langle}e^{-i\\oint dx\\cdot A}\\,e^{-\\frac{e^{2}}{2}\\int_{0}^{T}d\\tau_{1}d\\tau_{2}\\,\\dot{x}_{1\\mu}\\Delta_{\\mu\\nu}\\dot{x}_{2\\nu}}\\,{\\rm tr}_{\\gamma}{\\cal P}\\ :e^{\\frac{i}{2}\\int_{0}^{T}d\\tau\\sigma\\omega}:\\Big{\\rangle}. \\tag{42}\\] This representation can now serve as the basis for nonperturbative studies of strong-coupling QED [38] in the quenched approximation along the lines proposed in [35]. Further interesting versions of this nonperturbative formula may be obtained by trading the spin factor backwards for loop derivatives, acting now on the Wilson loop as well as the photon insertion; this will be the subject of future work.

## 4 Conclusions

In this work, we have used the worldline approach to quantum field theory for a study of couplings between spinors and external gauge fields. Guided by the idea that gauge-field information can solely be covered by holonomies (Wilson loops), we have investigated a reformulation of the familiar Pauli term in spinorial QED. In this instance, we have shown that the Pauli term can be re-expressed in terms of a spin factor which is a purely geometric quantity in the sense that it depends only on the worldline trajectory. Our final representation of the fermionic fluctuation determinant, i.e., the one-loop effective action for QED, has the following form: \\[\\Gamma^{1}_{\\rm eff}[A]=\\frac{1}{2}\\frac{1}{(4\\pi)^{\\frac{D}{2}}}\\int_{0}^{\\infty}\\frac{dT}{T^{1+\\frac{D}{2}}}\\ e^{-m^{2}T}\\int{\\cal D}x(\\tau)\\ e^{-\\int d\\tau\\frac{\\dot{x}^{2}(\\tau)}{4}}\\,e^{-i\\oint dxA}\\,{\\rm tr}_{\\gamma}{\\cal P}:e^{\\frac{i}{2}\\int d\\tau\\sigma\\omega}:. \\tag{43}\\] The last factor represents the spin factor in the fermionic second-order formalism with \\(\\omega=\\omega[x]\\) defined in Eq. (22). Loosely speaking, the exponent \\(\\sigma_{\\mu\\nu}\\omega_{\\mu\\nu}[x]\\) replaces the spin-field coupling \\(\\sim\\sigma_{\\mu\\nu}F_{\\mu\\nu}\\) of the standard representation of the fermionic effective action. The spin factor deviates in a number of respects from the Polyakov spin factor occurring in the first-order formalism. These differences, which have been missed so far in the literature [22, 23], are rooted in the fact that the worldlines in the two formalisms obey different velocity distributions: in the first-order formalism, the worldlines are propertime-parameterized, \\(|\\dot{x}|=1\\), whereas their velocity is Gaussian-distributed in the second-order formalism. A consequence for the spin factors is, for instance, that smooth differentiable worldlines give zero contribution to our spin-factor exponent; \\(\\omega\\) has nonzero support only for worldlines of "zigzag" shape, inducing a particular singularity structure. By contrast, the Polyakov spin factor is not sensitive to the analytic properties of the worldlines; on the contrary, it has not only a geometric but also a topological meaning (e.g., counting the twists of a worldline in \\(D=2\\)). We have not been able to identify a topological meaning for our spin factor. Even if there were one, its relevance would be unclear, since the spin factor enters the worldline integrand with a normal-ordering prescription. 
As a consequence, the spin factor in itself does not appear to have a particular meaning; only contractions of the spin factor with other observables such as the Wilson loop in the integrand become meaningful. For practical perturbative calculations, we have developed a spin-factor calculus that reduces the amount of analytical computational steps to roughly the same amount as in the standard approach. The main advantage of our formulation consists in the fact that the dependence on the external gauge field occurs solely in the form of the Wilson loop. Particularly in computer-algebraic realizations of high-order amplitude calculations, this may lead to algorithmic simplifications compared to the standard approach. On the other hand, we have to mention that the isolation of all those terms with the required singularity structure for the \\(\\epsilon\\) limit might lead to algorithmic complications. We have demonstrated all these aspects in the concrete example of the classic Heisenberg-Euler effective action. Our spin-factor formalism becomes truly advantageous for the analysis of nonperturbative worldline dynamics based on the small-\\(N_{\\rm f}\\) expansion, i.e., quenched approximation. Here, the spin factor dispenses with all complications for the photon-fluctuation integrations induced by direct spin-field couplings. We have presented a closed-form worldline expression for the leading-order-\\(N_{\\rm f}\\) nonperturbative effective action of Heisenberg-Euler-type that can serve as a starting point for strong-coupling investigations. We believe that our work paves the way to further studies of spin-factor representations. Our techniques are, for instance, directly applicable to diagrams with open fermionic lines, such as propagators, etc. We expect that our approach will become particularly powerful in the case of nonabelian gauge fields, since the gluonic spin-field coupling can also be traded for a spin factor. Nonabelian gauge-field dependencies will then be described only in terms of holonomies. In this sense, our work can be viewed as a bottom-up approach to a loop-space formulation of gauge theories [18, 19, 20]. Since our work was also motivated by the development of worldline numerics [24, 25], we have to face the problem of a numerical implementation of our formalism. An immediate numerical realization seems inhibited by the normal-ordering prescription. This requires the study of possible alternatives. If the nature of our spin factor turns out to be topological, it might be possible to classify the worldlines in terms of their topological properties. This would facilitate the implementation of an algorithm that performs a Monte-Carlo sampling for each individual topological sector separately. To summarize, we have performed a first detailed analysis of the spin factor in the second-order formalism of QED. We believe that this opens the door to many further studies of the interrelation between spin and external fields in a geometric language. ## Acknowledgment We are grateful to G.V. Dunne, J. Heinonen, K. Klingmuller, K. Langfeld, J. Sanchez-Guillen, M.G. Schmidt, C. Schubert, and R. Vazquez for many useful discussions. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under contract Gi 328/1-3 (Emmy-Noether program). ## Appendix A Singularity cancellations: an explicit example Here, we demonstrate by an explicit calculation to second order that the Wick self-contractions of the spin factor cancel against the \\(D[\\omega]\\) term defined in Eq. (17). 
This cancellation also guarantees the absence of severe singularities. To be precise, we show explicitly that \\[1=\\left\\langle\\mathrm{tr}_{\\gamma}\\mathcal{P}\\ e^{\\frac{i}{2}\\int_{0}^{T}d \\tau\\sigma\\omega}+D[\\omega]\\right\\rangle \\tag{44}\\] holds to second order (the counting of orders can formally be defined by the number of \\(\\sigma_{\\mu\ u}\\) matrices involved). First, we observe that the zeroth order on the RHS trivially reproduces the LHS. The first order vanishes by virtue of the Dirac trace. The second-order calculation requires to show that (cf. Eq. (15)) \\[\\left\\langle\\mathcal{P}\\left(\\int d\\tau\\sigma\\omega\\right)^{2}+\\mathcal{P} \\!\\!\\int d\\tau_{2}d\\tau_{1}\\sigma_{\\lambda\\kappa}\\sigma_{\\mu\ u}\\!\\left[ \\frac{\\delta\\omega_{\\mu\ u}(\\tau_{1})}{\\delta s_{\\lambda\\kappa}(\\tau_{2})}+ \\lim_{\\epsilon\\to 0}\\ \\int\\limits_{-\\epsilon}^{\\epsilon}d\\eta\\eta\\frac{\\delta \\omega_{\\mu\ u}(\\tau_{1})}{\\delta x_{\\kappa}(\\tau_{2}-\\frac{\\eta}{2})}\\ddot{ x}_{\\lambda}(\\tau_{2}+\\frac{\\eta}{2})\\right]\\right\\rangle=0. \\tag{45}\\]Since the cancellation will turn out to hold already for the \\(\\tau_{1},\\tau_{2}\\) integrands, we can suppress the path-ordering symbol in the following. Let us first compute the derivatives of \\(\\omega\\), beginning with \\[\\frac{\\delta\\omega_{\\mu\ u}(\\tau_{1})}{\\delta s_{\\lambda\\kappa}(\\tau_{2})}=\\frac {1}{2}\\ \\lim_{\\epsilon_{1},\\epsilon_{2}\\to 0}\\int\\limits_{-\\epsilon_{1}}^{ \\epsilon_{1}}\\int\\limits_{-\\epsilon_{2}}^{\\epsilon_{2}}d\\eta d\\rho\\ \\rho\\eta\\ \\delta_{\\mu\\lambda}\\delta_{\ u\\kappa}\\ \\ddot{ \\delta}\\left[\\tau_{1}+\\tfrac{e}{2}-(\\tau_{2}+\\tfrac{n}{2})\\right]\\ \\ddot{\\delta}\\left[\\tau_{1}-\\tfrac{e}{2}-(\\tau_{2}- \\tfrac{n}{2})\\right], \\tag{46}\\] where we have already used the antisymmetry properties of \\(\\omega\\). Furthermore, we encounter \\[\\frac{\\delta\\omega_{\\mu\ u}(\\tau_{1})}{\\delta x_{\\kappa}(\\tau_{2}-\\tfrac{n}{2 })}=\\frac{1}{2}\\ \\lim_{\\epsilon\\to 0}\\int\\limits_{-\\epsilon}^{\\epsilon}d \\rho\\rho\\ \\delta_{\\mu\\kappa}\\ddot{x}_{\ u}(\\tau_{1}-\\tfrac{e}{2})\\ \\ddot{\\delta}\\left[\\tau_{1}+\\tfrac{e}{2}-(\\tau_{2}- \\tfrac{n}{2})\\right]. \\tag{47}\\] In order to carry out the Wick contractions, we need the worldline propagator, \\[\\langle x_{\\mu}(\\tau_{1})x_{\ u}(\\tau_{2})\\rangle=-\\delta_{\\mu\ u}\\ |\\tau_{2}-\\tau_{1}|+\\delta_{\\mu\ u}\\ \\frac{(\\tau_{2}-\\tau_{1})^{2}}{T}, \\tag{48}\\] and, in particular, its propertime derivative of the form \\[\\langle\\ddot{x}_{\\mu}(\\tau_{1})\\ddot{x}_{\ u}(\\tau_{2})\\rangle=-2\\ \\ddot{\\delta}(\\tau_{1}-\\tau_{2})\\ \\delta_{\\mu\ u}. \\tag{49}\\] Finally, we have to compute the contraction of the first term of Eq. 
(45), which involves \\[\\frac{1}{4}\\big{\\langle}\\ddot{x}_{\\mu}(\\tau_{1}+\\tfrac{e}{2})\\ \\ddot{x}_{\ u}(\\tau_{1}-\\tfrac{e}{2})\\ \\ddot{x}_{\\lambda}(\\tau_{2}+\\tfrac{n}{2})\\ \\ddot{x}_{\\kappa}(\\tau_{2}-\\tfrac{n}{2})\\big{\\rangle} \\tag{50}\\] \\[=\\delta_{\\mu\ u}\\delta_{\\lambda\\kappa}\\ \\ddot{\\delta}\\left[\\tau_{1}+ \\tfrac{e}{2}-(\\tau_{1}-\\tfrac{e}{2})\\right]\\ \\ddot{\\delta}\\left[\\tau_{2}+\\tfrac{n}{2}-(\\tau_{2}- \\tfrac{n}{2})\\right]\\] \\[\\ \\ \\ \\ +\\delta_{\ u\\lambda}\\delta_{\\mu\\kappa}\\ \\ddot{\\delta}\\left[\\tau_{1}- \\tfrac{e}{2}-(\\tau_{2}-\\tfrac{n}{2})\\right]\\ \\ddot{\\delta}\\left[\\tau_{1}+\\tfrac{e}{2}-(\\tau_{2}- \\tfrac{n}{2})\\right]\\] \\[\\ \\ \\ \\ +\\delta_{\\mu\\lambda}\\delta_{\ u\\kappa}\\ \\ddot{\\delta}\\left[\\tau_{1}+ \\tfrac{e}{2}-(\\tau_{2}+\\tfrac{n}{2})\\right]\\ \\ddot{\\delta}\\left[\\tau_{1}- \\tfrac{e}{2}-(\\tau_{2}-\\tfrac{n}{2})\\right].\\] Now, inserting Eqs. (46) and (47) into the LHS of Eq. (45) and performing all Wick contractions with the aid of Eq. (49) and (50), it is straightforward to observe that Eq. (45) holds as an identity. Some terms vanish because of the contraction of \\(\\delta_{\\mu\ u}\\) with \\(\\sigma_{\\mu\ u}\\), such as the first term on the RHS of Eq. (50); all remaining terms cancel each other exactly under the parameter integrals. This verifies the identity (44) to second order which has been proved to all orders in Subsect. 2.3. ## Appendix B Explicit calculations of spinor parts In the following, we show that possible further spinor parts, occurring during the calculation of the Heisenberg-Euler action, vanish, since they do not support a sufficient nonanalyticity. Let us first consider the cases of \\(m>1\\) of the sum \\(S_{m}\\), defined in Eq. (33) and appearing in the computation of term (II) in Eq. (32). For this, we use an integral representation of the function \\(g_{n}(\\epsilon)\\) which is defined in Eq. (29), \\[S_{m}=\\lim_{\\epsilon\\to 0}\\sum_{n=1}^{\\infty}\\frac{g_{n}(\\epsilon)}{n^{m}}=-i \\lim_{\\epsilon\\to 0}\\frac{2\\pi^{2}}{T^{2}}\\int_{-\\epsilon}^{\\ \\epsilon}d\\rho\\rho\\sum_{n=1}^{\\infty}\\frac{e^{-\\left(\\frac{2i\\pi\\rho}{T} \\right)n}}{n^{m-2}}=-\\frac{\\pi}{T}\\lim_{\\epsilon\\to 0}\\int_{-\\epsilon}^{ \\ \\epsilon}d\\rho\\sum_{n=1}^{\\infty}\\frac{e^{-\\left(\\frac{2i\\pi\\rho}{T} \\right)n}}{n^{m-1}},\\]where we have integrated by parts in the last step. In the \\(\\epsilon\\to 0\\) limit, any non-zero contribution requires the \\(n\\) sum to exhibit a \\(\\delta(\\rho)\\) singularity. As shown in the main text, this is exactly the case for the \\(m=1\\) term. For \\(m\\geq 3\\), the \\(n\\) sum corresponds to a poly-logarithm of degree \\(m-1\\geq 2\\), which is an analytic function for \\(\\rho\\to 0\\). Hence all \\(m\\geq 3\\) terms vanish. The \\(m=2\\) term is more subtle. Here we encounter \\[\\sum_{n=1}^{\\infty}\\frac{e^{-\\left(\\frac{2i\\pi\\rho}{T}\\right)n}}{n^{1}}=\\sum_{ n=1}^{\\infty}\\frac{\\cos\\left(\\frac{2\\pi\\rho}{T}\\right)}{n}+i\\sum_{n=1}^{ \\infty}\\frac{\\sin\\left(\\frac{2\\pi\\rho}{T}\\right)}{n}.\\] The second sum is \\(\\sim\\frac{\\pi-\\rho}{2}\\) and vanishes by the \\(\\epsilon\\) limit. The first sum can be carried out: \\[\\sum_{n=1}^{\\infty}\\frac{\\cos(n\\frac{2\\pi\\rho}{T})}{n}=\\frac{1}{2}\\,\\ln\\left( \\frac{1}{2(1-\\cos\\!\\frac{2\\pi\\rho}{T})}\\right).\\] Therefore the \\(\\rho\\) integral becomes \\[-\\frac{\\pi}{2T}\\int_{-\\epsilon}^{\\;\\;\\epsilon}d\\rho\\,\\ln\\!\\frac{1}{2(1-\\cos\\! 
\\rho)}\\approx\\frac{\\pi}{T}\\int_{-\\epsilon}^{\\;\\;\\epsilon}d\\rho\\,\\ln\\!\\rho \\to 0\\,.\\] Even though there is a nonanalyticity, the singular structure of the integrand is not sufficient, and the integral vanishes in the \\(\\epsilon\\to 0\\) limit. This proves our first statement in the main text that \\(S_{m}\\) contributes to the effective action only in the case of \\(m=1\\). Finally, we discuss the remaining sum (III) of Eq. (31). Similarly to the preceding discussion, a nonzero contribution can only arise if the result of Fourier sum over \\(n\\) is sufficiently singular. Concentrating on the \\(n\\) dependence, the terms of the Fourier sum are of the form \\[\\frac{1}{n^{2m-k}}\\ g_{n}^{k}(\\epsilon)\\sim\\int_{-\\epsilon}^{\\;\\;\\epsilon}d \\rho\\rho\\frac{e^{in\\rho}}{n^{2m-k}},\\quad k=1, ,m-1,\\quad m>1.\\] For all \\(k<m\\), we end up with Fourier sums of the same type as discussed before in this appendix; all go to zero in the \\(\\epsilon\\to 0\\) limit. Hence, the whole part (III) of Eq. (31) makes no contribution to the effective action, as claimed in the main text. ## References * [1] R. P. Feynman, Phys. Rev. **80** (1950) 440; **84**, 108 (1951). * [2] M. B. Halpern and W. Siegel, Phys. Rev. D **16**, 2486 (1977); M. B. Halpern, A. Jevicki and P. Senjanovic, Phys. Rev. D **16**, 2476 (1977). * [3] E. S. Fradkin and A. A. Tseytlin, Phys. Lett. B **163** (1985) 123. * [4] R. R. Metsaev and A. A. Tseytlin, Nucl. Phys. B **298** (1988) 109. * [5] A. M. Polyakov, \"Gauge Fields And Strings,\" Harwood, Chur (1987). * [6] Z. Bern and D. A. Kosower, Nucl. Phys. **B362**, 389 (1991); **B379**, 451 (1992). * [7] M. J. Strassler, Nucl. Phys. B **385** (1992) 145 [arXiv:hep-ph/9205205]. * [8] M. G. Schmidt and C. Schubert, Phys. Lett. **B318**, 438 (1993) [hep-th/9309055]; M. Reuter, M. G. Schmidt and C. Schubert, Annals Phys. **259**, 313 (1997) [arXiv:hep-th/9610191]; R. Shaisultanov, Phys. Lett. B **378**, 354 (1996) [arXiv:hep-th/9512142]. * [9] For a review, see C. Schubert, Phys. Rept. **355**, 73 (2001) [arXiv:hep-th/0101036]. * [10] C. Itzykson and J. M. Drouffe, \"Statistical Field Theory,\" Cambridge, UK: Univ. Pr. (1989). * [11] F. A. Berezin and M. S. Marinov, Annals Phys. **104**, 336 (1977); L. Brink, S. Deser, B. Zumino, P. Di Vecchia and P. S. Howe, Phys. Lett. B **64**, 435 (1976); A. Barducci, R. Casalbuoni and L. Lusanna, Nuovo Cim. A **35**, 377 (1976); Nucl. Phys. B **124**, 93 (1977); Nucl. Phys. B **124**, 521 (1977); A. Barducci, F. Bordi and R. Casalbuoni, Nuovo Cim. B **64**, 287 (1981). * [12] A. Strominger, Phys. Lett. B **101**, 271 (1981). * [13] A.M. Polyakov, Mod. Phys. Lett. A **3**, 335 (1988). * [14] P. Orland, Int. J. Mod. Phys. A **4**, 3615 (1989); P. Orland and D. Rohrlich, Nucl. Phys. B **338**, 647 (1990). * [15] G. P. Korchemsky, Phys. Lett. B **275** (1992) 375; I. A. Korchemskaya and G. P. Korchemsky, J. Phys. A **24** (1991) 4511 [Sov. J. Nucl. Phys. **54** (1991 YAFIA,54,1718-1731.1991) 1053]; M. A. Nowak, M. Rho and I. Zahed, Phys. Lett. B **254**, 94 (1991). * [16] C. D. Fosco, J. Sanchez-Guillen and R. A. Vazquez, Phys. Rev. D **69**, 105022 (2004) [arXiv:hep-th/0310191]. * [17] W. Heisenberg and H. Euler, Z. Phys. **98** (1936) 714; V. Weisskopf, K. Dan. Vidensk. Selsk. Mat. Fy. Medd. **14**, 1 (1936). * [18] A. M. Polyakov, Nucl. Phys. B **164** (1980) 171. * [19] Y. M. Makeenko and A. A. Migdal, Phys. Lett. B **88**, 135 (1979) [Erratum-ibid. B **89**, 437 (1980)]; Nucl. Phys. B **188**, 269 (1981) [Sov. J. Nucl. Phys. 
**32**, 431.1980 YAFIA,32,838 (1980 YAFIA,32,838-854.1980)]; A. A. Migdal, Phys. Rept. **102** (1983) 199. * [20] R. Gambini and J. Pullin, Loops, Knots, Gauge Theories and Quantum Gravity, Cambridge University Press (1996); H. Reinhardt, Lecture notes on Yang-Mills theories, Tubingen U. (1998). * [21] A. Ashtekar, Phys. Rev. Lett. **57** (1986) 2244; Phys. Rev. D **36** (1987) 1587; C. Rovelli, Class. Quant. Grav. **8** (1991) 1613; M. Gaul and C. Rovelli, Lect. Notes Phys. **541** (2000) 277 [arXiv:gr-qc/9910079]; H. Nicolai, K. Peeters and M. Zamaklar, arXiv:hep-th/0501114. * [22] A. I. Karanikas and C. N. Ktorides, Phys. Rev. D **52**, 5883 (1995); JHEP **9911** (1999) 033 [arXiv:hep-th/9905027]; Phys. Lett. B **500**, 75 (2001) [arXiv:hep-th/0008078]. * [23] S. D. Avramis, A. I. Karanikas and C. N. Ktorides, Phys. Rev. D **66** (2002) 045017 [arXiv:hep-th/0205272]. * [24] H. Gies and K. Langfeld, Nucl. Phys. B **613**, 353 (2001) [arXiv:hep-ph/0102185]; Int. J. Mod. Phys. A **17**, 966 (2002) [arXiv:hep-ph/0112198]. * [25] M. G. Schmidt and I. O. Stamatescu, arXiv:hep-lat/0201002. * [26] K. Langfeld, L. Moyaerts and H. Gies, Nucl. Phys. B **646** (2002) 158 [arXiv:hep-th/0205304]. * [27] H. Gies, K. Langfeld and L. Moyaerts, JHEP **0306** (2003) 018 [arXiv:hep-th/0303264]; arXiv:hep-th/0311168. * [28] M. G. Schmidt and I. O. Stamatescu, Mod. Phys. Lett. A **18** (2003) 1499. * [29] W. Dittrich and M. Reuter, Lect. Notes Phys. **220** (1985) 1;W. Dittrich and H. Gies, Springer Tracts Mod. Phys. **166** (2000) 1; G. V. Dunne, arXiv:hep-th/0406216. * [30] E. D'Hoker and D. G. Gagne, Nucl. Phys. B **467**, 297 (1996) [arXiv:hep-th/9512080]; Nucl. Phys. B **467**, 272 (1996) [arXiv:hep-th/9508131]. * [31] J. S. Schwinger, Phys. Rev. **82**, 664 (1951). * [32] L. Brink, P. Di Vecchia and P. S. Howe, Nucl. Phys. B **118**, 76 (1977); J. W. van Holten, Nucl. Phys. B **457**, 375 (1995) [arXiv:hep-th/9508136]. * [33] J. Haemmerling, \"Geometry of spin-field couplings on the worldline,\" diploma thesis, Heidelberg U. (2004). * [34] I. K. Affleck, O. Alvarez and N. S. Manton, Nucl. Phys. B **197**, 509 (1982). * [35] H. Gies, J. Sanchez-Guillen and R. A. Vazquez, arXiv:hep-th/0505275. * [36] C. Alexandrou, R. Rosenfelder and A. W. Schreiber, Phys. Rev. A **59**, 1762 (1999) [arXiv:hep-th/9809101]; Phys. Rev. D **62**, 085009 (2000) [arXiv:hep-th/0003253]. * [37] N. Brambilla and A. Vairo, Phys. Rev. D **56**, 1445 (1997) [arXiv:hep-ph/9703378]. * [38] M. Gockeler, R. Horsley, V. Linke, P. Rakow, G. Schierholz and H. Stuben, Phys. Rev. Lett. **80**, 4119 (1998) [arXiv:hep-th/9712244]; H. Gies and J. Jaeckel, Phys. Rev. Lett. **93**, 110405 (2004) [arXiv:hep-ph/0405183].
We derive a geometric representation of couplings between spin degrees of freedom and gauge fields within the worldline approach to quantum field theory. We combine the string-inspired methods of the worldline formalism with elements of the loop-space approach to gauge theory. In particular, we employ the loop (or area) derivative operator on the space of all holonomies which can immediately be applied to the worldline representation of the effective action. This results in a spin factor that associates the information about spin with \"zigzag\" motion of the fluctuating field. Concentrating on the case of quantum electrodynamics in external fields, we obtain a purely geometric representation of the Pauli term. To one-loop order, we confirm our formalism by rederiving the Heisenberg-Euler effective action. Furthermore, we give closed-form worldline representations for the all-loop order effective action to lowest nontrivial order in a small-\\(N_{\\rm f}\\) expansion. HD-THEP-05-08, [http://arXiv.org/abs/hep-ph/0505072](http://arXiv.org/abs/hep-ph/0505072)
# Pair production in inhomogeneous fields Holger Gies and Klaus Klingmüller _Institut für theoretische Physik, Universität Heidelberg Philosophenweg 16, D-69120 Heidelberg, Germany_ ## 1 Introduction Pair production was first proposed for electron-positron pairs in strong, temporally and spatially constant electric fields [1, 2, 3]. Today it is often referred to as the Schwinger [4] mechanism. As a nonperturbative mechanism, pair production is of great theoretical interest. From a phenomenological point of view, it corresponds to probing the theory in the domain of strong fields. Consequently, we encounter pair production in many topics of contemporary physics, for instance, black hole evaporation [5] and \\(e^{+}e^{-}\\) creation in the vicinity of charged black holes [6, 7] as well as particle production in hadronic collisions [8] and in the early universe [9, 10]. Since QED pair production in strong fields represents the conceptually simplest case, it can serve as a theoretical laboratory for all these cases. A sizeable rate for spontaneous pair production requires extraordinarily strong electric fields, comparable in size to the so-called critical field strength, which corresponds to the electron-mass scale, \\(E_{\\rm cr}=m^{2}/e\\approx 1.3\\cdot 10^{18}\\frac{V}{m}\\). For a long time, it seemed inconceivable to produce macroscopic electric fields of the required strength in the laboratory, but today, with the development of strong lasers, there are several promising experiments in progress [11, 12, 13]; for a discussion of experimental requirements, see [14]. Many different theoretical methods, such as the propertime method [4, 15], WKB techniques [16, 17, 18, 19], the Schrödinger-functional approach [20], functional techniques [21, 22], kinetic equations [23, 24, 25, 26], various instanton techniques [27, 28, 29, 30], Borel summation [31, 32], and propagator constructions [33, 34], have been developed to study pair production in external fields. Also, finite-temperature contributions have been determined which first occur at the two-loop level [35, 36]. Of particular conceptual interest is the production rate in terms of the effective action for a given background, which is also used in this work. Owing to an intimate relation between the effective action and the vacuum-persistence amplitude, it is the imaginary part of the effective action that encodes information about pair production which, in this context, is interpreted as spontaneous vacuum decay. This approach yields the instantaneous production rate, neglecting back-reactions and memory effects. However, this rate can serve as a source term for kinetic equations, which can then take back-reactions and memory effects into account [23, 24, 25, 26]. Even though the existing methods follow a well-defined and technically stringent concept, their application often faces serious technical and conceptual difficulties. Up to now, no reliable and universal method--be it analytic or numeric--is available for the calculation of pair-production rates in inhomogeneous electric fields. In standard approaches, functional traces have to be evaluated with the knowledge of the spectrum of the corresponding differential operator, which is only available for special cases. Moreover, controlling the divergences that possibly occur upon summing up the eigenvalues is a delicate task.
In the present work, we solve these problems by using the recently developed numerical worldline techniques [37, 38, 39, 40, 41] which are based on the string-inspired worldline formalism [42, 43, 44, 45, 46, 47, 48, 49, 50]. The important advantage compared to other approaches lies in the fact that worldline numerics can be formulated independently of any symmetry of the background. The identification of and the summation over the spectrum of quantum fluctuations are done in one single and finite step. For simplicity, we confine ourselves to scalar QED; generalization to spinor QED is, in principle, straightforward and will be discussed below. Beyond the computational advantages of worldline techniques, the worldline picture also helps to understand conceptual aspects in more depth. In particular, the nature and the role of nonlocalities become highly transparent from the worldline viewpoint, since the worldlines themselves represent extended virtual trajectories of the fluctuating particles in coordinate space. In the present context, we are aiming at the quantum effective action which, of course, receives nonlocal contributions in general. However, many standard approximation methods suppress (or shade) nonlocalities by construction, as, e.g., the derivative expansion. Hence, pair production as described by the Schwinger formula is often recognized as a nonperturbative phenomenon, but not so much as a nonlocal phenomenon. Nevertheless, the latter property is crucial, as the following heuristic argument elucidates: in order for a virtual pair to become real, i.e., on-shell, the pair must gain at least the amount of \\(2m\\) of energy; this is only possible by propagating in opposite directions in the electric field. This delocalization of the pair wave function is mandatory for gaining sufficient energy. In constant electric fields, this delocalization remains invisible in the final result. By contrast, in inhomogeneous fields the spacetime dependence of the delocalized wave function matters a great deal and can even dominate the resulting effect, as our results demonstrate. In the worldline picture, the nonlocal effects already become transparent on the level of the formalism, since the extended worldlines exactly describe the delocalization of a virtual pair. At this point, we would like to stress the difference of the present work to earlier applications of worldline numerics. Whereas the algorithms developed so far in [37, 38, 39, 40, 41] have proven their capabilities for computing the real part of the effective action (and action densities), the computation of the imaginary part is by no means a straightforward generalization. The reason for this lies in the truly Minkowskian nature of the problem of pair production: vacuum decay only occurs for real, i.e., Minkowskian, electric fields. This contrasts with the indispensable necessity of a Euclidean formulation for solving the worldline integrals by a statistical Monte Carlo algorithm. In practice, this results in an overlap problem: the finite Euclidean worldline ensemble can have little overlap with those worldlines that contribute dominantly to Minkowski-valued observables. We solve this fundamental problem by resorting to a technique developed in [51] in the different context of nonperturbative Euclidean worldline numerics: we fit a suitable cumulative density function (CDF) of the Euclidean ensemble to a physically motivated ansatz that can be continued analytically to Minkowski space. 
We should emphasize that this continuation represents an extrapolation of certain ensemble properties to Minkowski space which is an a priori uncontrolled procedure resulting in systematic errors. We check this extrapolation carefully against various analytically known results and find negligibly small systematic errors compared to the statistical Monte Carlo errors. Hence, we regard the overlap problem as solved for the present problem. This solution is obtained at the expense of numerical cost; moreover, the algorithm can, in principle, not be made arbitrarily precise, in contrast to former applications of worldline numerics. Nevertheless, for the problem of pair production and as far as the experimentally required accuracy is concerned, we believe that our algorithm is sufficiently powerful. ## 2 Worldline formalism for pair production The vacuum-persistence amplitude can be related to the effective action \\(\\Gamma_{\\rm M}\\) in Minkowski space, \\[\\langle\\Omega|e^{-iHT}|\\Omega\\rangle=e^{i\\Gamma_{\\rm M}}.\\] The corresponding probability for the vacuum to decay spontaneously is \\[P=1-e^{-2{\\rm Im}\\Gamma_{\\rm M}}.\\] In the case of QED with electric background fields, vacuum decay occurs in the form of spontaneous pair production, the production rate per unit time and volume of which is directly proportional to the imaginary part of the effective action density (effective Lagrangian). In scalar QED, the one-loop contribution to the Euclidean effective action \\(\\Gamma_{\\rm E}\\) reads \\[\\Gamma_{\\rm E}^{1}[A]=\\ln\\det\\left(-(\\partial+ieA)^{2}+m^{2}\\right), \\tag{1}\\] where \\(\\Gamma_{\\rm M}\\) and \\(\\Gamma_{\\rm E}\\) differ by a minus sign, \\(\\Gamma_{\\rm M}=-\\Gamma_{\\rm E}\\). In the worldline approach, the logarithm of the determinant in \\(D\\)-dimensional spacetime is represented by a path integral [50], \\[\\Gamma_{\\rm E}^{1}[A]=-\\frac{1}{(4\\pi)^{D/2}}\\int_{0}^{\\infty}\\frac{dT}{T^{1+D/2}}\\ e^{-m^{2}T}\\int_{x(0)=x(T)}{\\cal D}x(\\tau)\\ e^{-\\int_{0}^{T}d\\tau\\left(\\frac{\\dot{x}^{2}}{4}+ie\\dot{x}A(x)\\right)}, \\tag{2}\\] where the integration parameter \\(T\\) is called the propertime. The path integral runs over all closed worldlines, parameterized by the propertime. The worldlines can be viewed as the trajectories of the virtual fluctuations in coordinate space. The path integral is normalized to give 1 in the limit of zero gauge potential. We split the path integral into an integral over all paths with a common center of mass \\(x_{0}\\) and an ordinary integral over all \\(x_{0}\\), \\(x(\\tau)\\to x_{0}+x(\\tau)\\), where \\(\\int_{0}^{T}d\\tau\\ x(\\tau)=0\\). Introducing the _Wilson loop_, \\[W_{x_{0}}[x(\\tau)]:=e^{-ie\\int_{0}^{T}d\\tau\\,\\dot{x}A(x_{0}+x(\\tau))}, \\tag{3}\\] and its _expectation value_, \\[\\langle W_{x_{0}}\\rangle:=\\int_{{x(0)=x(T)\\atop{\\rm CM}}}{\\cal D}x(\\tau)\\ W_{x_{0}}[x(\\tau)]\\,e^{-\\int_{0}^{T}d\\tau\\,\\frac{\\dot{x}^{2}}{4}}, \\tag{4}\\] we can write: \\[\\Gamma_{\\rm E}^{1}[A]=-\\frac{1}{(4\\pi)^{D/2}}\\int d^{D}x_{0}\\int_{0}^{\\infty}\\frac{dT}{T^{1+D/2}}\\ e^{-m^{2}T}\\langle W_{x_{0}}\\rangle+{\\rm c.t.} \\tag{5}\\] Here we have added counterterms (c.t.) which have to be fixed by renormalization of physical parameters. If the electric field is nonzero, \\(\\Gamma^{1}[A]\\) obtains an imaginary part, arising from poles of the Wilson-loop expectation value \\(\\langle W_{x_{0}}\\rangle\\) on the real \\(T\\) axis.
Surrounding the poles by half-circles in the upper half-plane, in agreement with causality, leads to \\[{\\rm Im}\\Gamma_{\\rm E}^{1}[A]=-\\frac{1}{(4\\pi)^{D/2}}\\int d^{D}x_{0}\\ {\\rm Im}\\sum_{T_{\\rm pol}}\\frac{1}{{T_{\\rm pol}}^{1+D/2}}e^{-m^{2}T_{\\rm pol}}(-\\pi i){\\rm Res}\\left(\\langle W_{x_{0}}\\rangle,T_{\\rm pol}\\right), \\tag{6}\\] where the sum goes over all poles with positions \\(T_{\\rm pol}\\). The exponential factor with Gaussian velocity weight in the path integral in Eq. (4) suppresses the contribution of long paths. Therefore, the integral is dominated by paths that tightly wiggle around the common center of mass. This gives rise to the picture of a _loop cloud_ sitting at \\(x_{0}\\) and scanning the background field in the neighborhood of \\(x_{0}\\). Hence, the nonlocal nature of the phenomenon is already apparent in the formalism. Let us mention in passing that the path integral for a constant \\(E\\) background is Gaussian, can thus be done exactly, and results in \\(\\langle W_{x_{0}}\\rangle=eET/\\sin(eET)\\); see below. Summing over the pole positions of the inverse sine results in the famous Schwinger formula (for scalar QED in this case), \\[\\mbox{Im}\\Gamma^{1}_{\\mbox{\\scriptsize M}}[E=\\mbox{const.}]=-\\frac{V}{16\\pi^{3}}\\,(eE)^{2}\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{n^{2}}\\,e^{-\\frac{m^{2}}{eE}\\,\\pi n}, \\tag{7}\\] displaying the nonperturbative dependence on \\(eE\\)[4]; here, \\(V\\) denotes the space-time volume. Each term in the sum corresponds to production of \\(n\\) coherent pairs. ## 3 Worldline numerics ### Worldline discretization The worldline numerical algorithm for the present problem in part closely resembles those developed in detail in [37, 38, 39, 40, 41], the essential steps of which we will recall in the following for completeness. As a first step, we introduce the _unit loop_ \\(y(t)\\), \\[y(t):=\\frac{1}{\\sqrt{T}}x(Tt). \\tag{8}\\] The Wilson-loop expectation value then reads \\[\\langle W_{x_{0}}\\rangle=\\int_{{y(0)=y(1)\\atop{\\rm CM}}}{\\cal D}y(t)\\ W_{x_{0}}[\\sqrt{T}y(t)]\\,e^{-\\int_{0}^{1}dt\\,\\frac{\\dot{y}^{2}}{4}}. \\tag{9}\\] For the numerical evaluation, each loop is discretized into \\(N\\) points per loop (ppl), \\(y_{k}\\), \\(k=1,\\ldots,N\\). Evaluating the gauge field at the link centers (the midpoints between neighboring \\(y_{k}\\)) actually corresponds to effectively shrinking the loop cloud. Of course, this difference becomes irrelevant in the propertime continuum limit \\(N\\to\\infty\\). However, for small \\(N\\), this effect leads to sizeable systematic deviations from the continuum limit. We avoid this systematic effect by evaluating the gauge field at the sites instead, \\[\\int_{0}^{1}dt\\,\\dot{y}\\,A(\\sqrt{T}y+x_{0})\\to\\sum_{k=1}^{N}(y_{k+1}-y_{k})\\,A(\\sqrt{T}y_{k}+x_{0}). \\tag{12}\\] It turns out that possible violations of gauge invariance for smooth gauges such as the Lorenz gauge remain much smaller than other systematic and statistical errors for the background fields studied in this work.2 Footnote 2: In the general case, we, of course, recommend the gauge-invariant link variable discretization. In order to reduce the systematic error mentioned above, order \\(1/N\\) improvements of the action may be useful. For the effective action and the pair-production rate, the \\(T\\) integration in Eq. (5) has to be performed. For the simple case of a constant field, this can be done elegantly by a fast-Fourier transform (FFT) after the \\(T\\) integration has been rotated onto the imaginary axis. Thereby, the pair production is obtained for a whole spectrum of masses and field strengths, respectively, all at once. This procedure and its limitations will be discussed in the Appendix.
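The discretization of Eq. (12) is straightforward to put into code. The Python sketch below shows one common way to generate a discretized unit-loop ensemble obeying the Gaussian velocity weight (independent Gaussian increments, projected onto closed loops with vanishing center of mass) and to evaluate the site-based sum of Eq. (12) for a user-supplied gauge potential. It is a minimal illustration under these assumptions, not the authors' production code; the loop-generation algorithms of Refs. [37, 38, 39, 40, 41] differ in their details, and the function names and the example at the end are placeholders.

```python
import numpy as np

def unit_loops(n_loops, N, dim=4, seed=None):
    """Sample closed unit loops y_k (k = 0..N-1, periodic) with zero center of
    mass, distributed according to the discretized velocity weight
    exp(-(N/4) * sum_k (y_{k+1} - y_k)^2)."""
    rng = np.random.default_rng(seed)
    dy = rng.normal(scale=np.sqrt(2.0 / N), size=(n_loops, N, dim))
    dy -= dy.mean(axis=1, keepdims=True)   # project onto closed loops: sum_k dy_k = 0
    y = np.cumsum(dy, axis=1)              # loop points from the increments
    y -= y.mean(axis=1, keepdims=True)     # impose the center-of-mass condition
    return y

def line_integral(y, A, T, x0):
    """Discretized line integral of Eq. (12):
    sum_k (y_{k+1} - y_k) . A(sqrt(T) y_k + x0), with A evaluated at the sites."""
    dy = np.roll(y, -1, axis=1) - y
    return np.einsum('lkd,lkd->l', dy, A(np.sqrt(T) * y + x0))

# Example: the purely geometric projection sum_k (dy_k)_4 (y_k)_1 that appears
# for a constant electric field (the quantity I of the CDF analysis below).
y = unit_loops(n_loops=10000, N=1000)
I = np.einsum('lk,lk->l', np.roll(y[..., 3], -1, axis=1) - y[..., 3], y[..., 0])
```

Sampling independent increments and then subtracting their mean reproduces the conditional Gaussian distribution of a closed loop, so no accept/reject step is needed in this construction.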
However, for more general field configurations, an overlap problem arises: when performing the \\(T\\) integration, one faces situations in which the path integral is dominated by very elongated loops, despite the exponential suppression by the weight factor. Physically, those virtual pairs that delocalize strongly gain more energy and have a larger probability of becoming real. In this case, the finite loop ensemble with only a few elongated worldlines is no longer representative of the uncountably many paths of the path integral. To solve this problem, we have developed the routine presented in the following. Its cornerstone is a probability distribution analysis of particular worldline-ensemble properties along the lines suggested in [51]. ### CDF fit for pair production In order to motivate our algorithm, let us first consider the case of a constant homogeneous electric field \\(E\\) in Minkowski space.3 Footnote 3: \\(E\\) denotes the _Minkowskian_, i.e., _physical_, field strength. This is related to the Euclidean gauge potential by \\(A|_{\\rm E}=(0,0,0,-iEx_{1})^{\\top}\\). The corresponding Wilson loop can be written as \\[W(I)=e^{-TeEI},\\quad\\mbox{where}\\;\\;I:=\\int_{0}^{1}dt\\;\\dot{y}_{4}y_{1}. \\tag{13}\\] The scalar quantity \\(I\\) contains all relevant information about the unit loop for the present case. The probability density function (PDF) of \\(I\\) for our loop ensembles is defined by \\[P(I)=\\int_{{y(0)=y(1)\\atop{\\rm CM}}}{\\cal D}y\\,\\delta\\left(I-\\int_{0}^{1}dt\\,\\dot{y}_{4}y_{1}\\right)\\,e^{-\\int_{0}^{1}dt\\,\\frac{\\dot{y}^{2}}{4}}. \\tag{14}\\] With the aid of a Fourier representation of the \\(\\delta\\) function, the path integral becomes Gaussian and yields \\[P(I)=\\frac{\\pi}{4}\\cosh^{-2}\\left(\\frac{\\pi}{2}I\\right) \\tag{15}\\] for constant fields. In terms of the PDF, the Wilson-loop expectation value can be written as \\[\\langle W\\rangle=\\int_{-\\infty}^{\\infty}dI\\;P(I)W(I), \\tag{16}\\] resulting in \\(\\langle W\\rangle=TeE/\\sin(TeE)\\) in agreement with the Schwinger pair-production rate for constant fields, cf. Eq. (7). For inhomogeneous field configurations, \\(\\langle W_{x_{0}}\\rangle\\) can be computed in a similar way. Generalizing the definition of \\(I\\), \\[I_{x_{0}}:=\\frac{i\\int_{0}^{1}dt\\ \\dot{y}A(\\sqrt{T}y+x_{0})}{\\sqrt{T}E_{0}}, \\tag{17}\\] the PDF becomes space-time and proper-time dependent, \\[P_{x_{0}}(I)=\\int_{{y(0)=y(1)\\atop\\rm CM}}\\mathcal{D}y\\,\\delta\\left(I-I_{x_{0}}\\right)\\ e^{-\\int_{0}^{1}dt\\ \\frac{\\dot{y}^{2}}{4}}. \\tag{18}\\] But for each space-time point, the Wilson-loop average can still be computed analogously to Eq. (16), with \\(W(I)=e^{-TeE_{0}I}\\) similar to Eq. (13). The reference field strength \\(E_{0}\\) is a priori arbitrary and has been introduced to obtain a dimensionless quantity. In most cases, we may use the local field strength \\(E_{0}:=|E(x_{0})|\\), or some averaged value. For the constant \\(E\\) field, our generalized definition of \\(I_{x_{0}}\\) conforms to the previous one. The PDF of \\(I_{x_{0}}\\) is generally not known analytically but will be computed numerically from a finite loop ensemble. Nevertheless, analytical knowledge about \\(P_{x_{0}}(I)\\) is required, owing to the following reasons: * The use of a Monte Carlo algorithm not only demands that the worldline spacetime metric be Euclidean, but also requires the contour of the propertime integral to run along the real \\(T\\) axis.
However, as is already obvious for the constant-field case, the integral in Eq. (16) is well defined only for \\(|TeE|<\\pi\\). At \\(|TeE|=\\pi\\), the first pole \\(T_{\\rm pol}\\) of \\(\\langle W_{x_{0}}\\rangle\\) is hit. For larger values of \\(|eET|\\), the \\(I\\) integral has to be replaced by its analytic continuation, which can only be constructed if \\(P_{x_{0}}(I)\\) is known analytically. * By using _finite_ loop ensembles, we already face an overlap problem for small \\(T\\) values: the majority of loops have a small \\(I\\) value, whereas those few loops with large \\(I\\) dominate the \\(I\\) integral in Eq. (16); see the Appendix. A controlled extrapolation of the PDF to large \\(I\\) values from reasonably big worldline ensembles can thus reduce the numerical cost considerably. This can be achieved by fitting the numerical PDF data to an analytical ansatz. The last point is, of course, related to the nonlocal features of pair production. The quantity \\(I\\) on the one hand is connected to the electrostatic energy gain of a virtual pair that propagates in a background field, and on the other hand roughly measures the space-time extent of a worldline. The dominance of large \\(I\\) values in the final result arises from strongly delocalized virtual pairs. To obtain an analytical estimate for the PDF, we generalize the result for the constant field, Eq. (15), by the following ansatz governed by two parameters \\(\\alpha\\) and \\(\\nu\\): \\[P_{x_{0}}(I)=N\\cosh^{-2\\nu}\\left(\\frac{\\pi}{2}\\alpha I\\right). \\tag{19}\\] The parameters control the two main features of the distribution: width and sheerness. Both parameters depend on the spacetime point \\(x_{0}\\) and on the propertime parameter \\(T\\). The normalization constant \\(N\\) is a function of \\(\\alpha\\) and \\(\\nu\\) fixed by \\(\\int dI\\,P_{x_{0}}(I)=1\\). Numerically more convenient is the corresponding cumulative density function (CDF) of \\(|I|\\), \\[D_{x_{0}}(|I|)=\\int_{-|I|}^{|I|}d\\hat{I}\\ P(\\hat{I}).\\] For given values \\(T\\) and \\(x_{0}\\), we determine \\(\\alpha\\) and \\(\\nu\\) by a fit of the numerical data to this CDF. Inserting the resulting parameters into Eq. (19) yields the desired analytical expression for \\(P_{x_{0}}(I)\\). Performing the \\(I\\) integral in Eq. (16) gives \\(\\langle W_{x_{0}}\\rangle\\) as a function of \\(\\alpha\\) and \\(\\nu\\), \\[\\langle W_{x_{0}}\\rangle=N\\frac{4^{\\nu}}{\\pi\\alpha}\\frac{\\Gamma(\\nu+\\frac{TeE_{0}}{\\pi\\alpha})\\Gamma(\\nu-\\frac{TeE_{0}}{\\pi\\alpha})}{\\Gamma(2\\nu)}.\\] This result also represents the desired analytical continuation to arbitrary values of \\(T\\) or \\(|eE_{0}T|\\), and solves the problem of Wick rotating the result of the Euclidean path integral back to Minkowski space. We observe that the second Gamma function in the numerator is responsible for the pole structure of \\(\\langle W_{x_{0}}\\rangle\\) on the positive real axis. Poles occur if \\[\\nu-\\frac{TeE_{0}}{\\pi\\alpha}=-l,\\quad\\mbox{with }l=0,1,2,\\cdots. \\tag{20}\\] Since \\(\\alpha\\) and \\(\\nu\\) depend on \\(T\\), Eq. (20) determines the pole positions \\(T_{\\rm pol}\\) only implicitly; in practice, we solve for \\(T_{\\rm pol}\\) iteratively.
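As an illustration of this step, the Python sketch below fits \\(\\alpha\\) and \\(\\nu\\) of the ansatz (19) to the empirical CDF of \\(|I|\\) and then locates the lowest pole from Eq. (20) by a simple fixed-point iteration. Integrating the ansatz (19) gives the closed form \\(D_{x_{0}}(|I|)=I_{x}(1/2,\\nu)\\) with \\(x=\\tanh^{2}(\\pi\\alpha|I|/2)\\), a regularized incomplete beta function, which is used as the fit model. The helper function fit_at_T, assumed to return the fitted \\((\\alpha,\\nu)\\) for a given propertime \\(T\\), and the particular iteration scheme are illustrative choices rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import betainc

def cdf_ansatz(abs_I, alpha, nu):
    # CDF of |I| implied by Eq. (19): I_x(1/2, nu) with x = tanh^2(pi*alpha*|I|/2)
    x = np.tanh(0.5 * np.pi * alpha * abs_I) ** 2
    return betainc(0.5, nu, x)

def fit_alpha_nu(I_samples):
    """Fit (alpha, nu) to the empirical CDF of |I| of one loop ensemble."""
    abs_I = np.sort(np.abs(I_samples))
    ecdf = (np.arange(1, abs_I.size + 1) - 0.5) / abs_I.size
    (alpha, nu), _ = curve_fit(cdf_ansatz, abs_I, ecdf, p0=(1.0, 1.0),
                               bounds=([1e-6, 1e-6], [np.inf, np.inf]))
    return alpha, nu

def first_pole(fit_at_T, eE0, l=0, T_start=1.0, n_iter=100):
    """Solve Eq. (20), nu(T) - T e E0 / (pi alpha(T)) = -l, by iterating
    T <- pi alpha(T) (nu(T) + l) / (e E0); fit_at_T(T) is assumed to recompute
    the I values of the ensemble at propertime T and return (alpha, nu)."""
    T = T_start
    for _ in range(n_iter):
        alpha, nu = fit_at_T(T)
        T = np.pi * alpha * (nu + l) / eE0
    return T
```

For a constant field, \\(\\alpha=\\nu=1\\) reproduces Eq. (15), which provides a simple sanity check of the fit.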
At the pole location, the corresponding residue is \\[\\mbox{Res}\\left(\\langle W_{x_{0}}\\rangle,T_{\\rm pol}\\right)=N\\frac{4^{\\nu}}{\\pi\\alpha}\\frac{\\Gamma(2\\nu+l)}{\\Gamma(2\\nu)}\\frac{(-1)^{l}}{l!\\frac{d}{dT}\\left(\\nu-\\frac{TeE_{0}}{\\pi\\alpha}\\right)}\\Bigg{|}_{T_{\\rm pol}},\\] which we plug into Eq. (6) to obtain the pair-production rate. For a reliable control of the statistical error, we perform a jackknife analysis for all secondary quantities. For the systematic error due to the propertime discretization, we increase the number \\(N\\) of ppl to approach the continuum limit at least within the statistical errors. Obviously, the reliability of our results depends crucially on the ansatz (19) for the PDF. Apart from our consistency arguments referring to the shape of the PDF and the resulting pole structure, final support can only be given by nontrivial tests described below. In summary, the sufficiency of the ansatz is confirmed by the following arguments: * The free parameters control the two most essential features of the distribution, the width and the sheerness, which encode, in particular, the important contributions from the strongly delocalized virtual pairs. * The exact functional form for the constant-field limit is supported by the ansatz. Even without further checks, we could thus expect satisfactory results at least for slowly varying fields. * As a nontrivial analytical confirmation, we stress that the ansatz leads to a reasonable pole structure of \\(\\langle W\\rangle\\) that can encode information about coherent \\(n\\) pair production. * The ansatz provides highly convincing results for the Sauter potential including the constant-field limit as a special case, as presented in the next section. Any systematic deviations from the exact result are negligibly small compared to the statistical error. ## 4 Sauter potential The Sauter potential defines an electric field with solitonic profile in one spatial direction which is constant in all other directions including time. The direction of the field vector is constant and coincides with the solitonic-profile direction. An analytical expression of the corresponding total pair-production rate has been found by Nikishov [54]. In Minkowski space, the Sauter potential reads \\[A^{0}|_{\\rm M}=-a\\tanh(kx^{1}),\\;\\;A^{i}|_{\\rm M}=0,\\quad E^{1}|_{\\rm M}=\\frac{ak}{\\cosh^{2}(kx^{1})}.\\] The parameter \\(k\\) defines the inverse width of the electric field, whereas \\(a\\) governs its maximum, \\(E_{\\rm max}\\equiv ak\\). The constant-field limit is recovered for \\(k\\to 0\\) for fixed \\(ak\\). As an example, Fig. 1 shows the \\(x_{1}\\) dependence of the local pair-production rate \\(\\sim\\mbox{Im}\\mathcal{L}_{\\rm eff}\\) for \\(k=0.4m\\) and \\(E_{\\rm max}\\equiv ak=(m^{2}/e)\\) computed by our algorithm. It is compared to the approximated effective Lagrangian obtained by a derivative expansion to lowest order, i.e., by assuming the field to be locally constant (Schwinger formula). We observe that the local rate predicted by the algorithm is spatially smeared compared to the Schwinger formula. The pair-production density in the center \\(x^{1}=0\\) of the Sauter potential with maximal field strength \\(E_{\\rm max}=(m^{2}/e)\\) is shown in Fig. 2 versus the inverse width parameter \\(k\\); units are set by the electron mass scale. For large width \\(k\\to 0\\), the constant-field limit is approached, and our CDF fit algorithm correctly reproduces the Schwinger formula.
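For reference, the locally constant-field baseline shown in Figs. 1-3 can be evaluated directly by inserting the local field strength into the Schwinger sum of Eq. (7). The short Python sketch below, a hedged illustration only, does this for the Sauter profile of Fig. 1 in units where \\(m=e=1\\); the spatial grid and the truncation of the sum are choices made here.

```python
import numpy as np

def imL_locally_constant(E, m=1.0, e=1.0, n_max=50):
    """Im L_eff per unit 4-volume from the scalar-QED Schwinger sum, Eq. (7),
    evaluated with the local field strength E (lowest-order derivative
    expansion, i.e. the locally constant-field approximation)."""
    E = np.atleast_1d(np.abs(np.asarray(E, dtype=float)))
    out = np.zeros_like(E)
    nz = E > 0
    n = np.arange(1, n_max + 1)[:, None]
    out[nz] = (e * E[nz])**2 / (16.0 * np.pi**3) * np.sum(
        (-1.0)**(n + 1) / n**2 * np.exp(-n * np.pi * m**2 / (e * E[nz])), axis=0)
    return out

# Sauter profile of Fig. 1 in units m = e = 1:  k = 0.4 m,  E_max = a k = m^2/e
k, E_max = 0.4, 1.0
x1 = np.linspace(-10.0, 10.0, 201)
imL_lc = imL_locally_constant(E_max / np.cosh(k * x1)**2)  # dashed baseline of Fig. 1
```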
The more interesting limit occurs for \\(k=m\\) where the production rate vanishes. Even though the electric field is still nonzero, the width of the Sauter potential is equal to the Compton wavelength. Therefore, even if a virtual pair delocalizes completely along the direction of field lines with the \\(e^{-}\\) going to \\(x^{1}\\to\\infty\\) and the \\(e^{+}\\) going to \\(-\\infty\\), the pair cannot acquire enough energy to become real. This important physical example is missed completely by the locally constant-field approximation, emphasizing the role of nonlocalities. Moreover, the limiting case of \\(k\\to m\\) is an extreme and crucial test for our algorithm based on the PDF ansatz (19): in the vicinity of this limit, there is literally not a single worldline in our finite ensemble that exhibits the strong delocalization required for giving rise to a _direct_ contribution to the final result (the number of sufficiently elongated worldlines is exponentially suppressed). Nevertheless, the overall distribution of \\(I\\) values allows for a controlled extrapolation via the CDF fit, leading to a numerical estimate even for the directly inaccessible regime. As a measure for the resulting error, we mention that our result for the case \\(k=m\\) is not exactly zero, but \\(|{\\rm Im}{\\cal L}_{\\rm eff}|/m^{4}=5.73\\cdot 10^{-8}\\pm 1.03\\cdot 10^{-6}\\). We conclude that possible systematic errors induced by our CDF fit algorithm are negligibly small compared to the statistical error.

Figure 1: Spatial distribution of the effective Lagrangian’s imaginary part for a Sauter potential. The numerical result is compared to the locally constant-field approximation that overestimates the true result by up to \\(\\sim 50\\%\\). Parameters of the Sauter potential: \\(k=0.4m\\), \\(E_{\\rm max}=(m^{2}/e)\\). Parameters of the loop cloud: \\(n_{\\rm L}=100000\\), \\(N=1000\\) ppl.

Figure 2: The imaginary part of the effective Lagrangian in the center of a Sauter potential with maximal field strength \\(m^{2}/e\\) versus the inverse width parameter \\(k\\). The dashed line marks the analytically obtained contribution of the first pole to \\(|{\\rm Im}{\\cal L}_{\\rm eff}|\\) for the constant-field limit (higher poles give a 1% correction). \\(n_{\\rm L}=100000\\), \\(N=1000\\) ppl.

Finally, Fig. 3 shows the integrated total pair-production rate \\({\\rm Im}\\Gamma\\) compared to the Nikishov result. The agreement is satisfactory and the vanishing pair production for \\(ea=m\\) is reproduced within the error bars. ## 5 Sine-modulated potential In this section, we study the superposition of a spatially varying sine potential with a constant field. This configuration is of general interest, as it is representative for a class of field configurations which are superpositions of a slowly varying field--in our example the constant field--and higher-oscillation modes. A very important aspect is the dependence of the pair-production rate on the spatial oscillation frequency of the small-scale field structures. We consider this example as a paradigm for the role of nonlocal phenomena in pair production. In Minkowski space, the potential is given by \\[A^{0}|_{\\rm M}=-a\\sin(kx^{1})-E_{0}x^{1},\\ \\ A^{i}|_{\\rm M}=0.\\] It corresponds to an \\(E\\) field in \\(x^{1}\\) direction with field strength \\[E^{1}|_{\\rm M}=E_{0}+ak\\cos(kx^{1}),\\] which has extremal field strength of \\(E_{\\rm max,min}=E_{0}\\pm ak\\).
As an example, we study a field with \\(E_{0}=0.2(m^{2}/e)\\) and \\(E_{\\rm max}=0.3(m^{2}/e)\\). Figure 4 shows the position of the first pole \\(T_{\\rm pol}\\) of the Wilson-loop expectation value on the real propertime axis for \\(x^{1}\\) in the center of a maximum of the field strength. For small \\(k\\), the pole position of the constant-field limit \\(E\\equiv E_{\\rm max}\\) is reproduced. For large \\(k\\), the pole position converges to the result of the averaged field \\(E\\equiv E_{0}\\). In between, the curve is not monotonically increasing, as one might have expected, but reaches \\(T\\) values which are significantly larger than in both limiting cases. As a consequence, the corresponding local production rate will be smaller than in the constant-field limit \\(E\\equiv E_{0}\\). This behavior is, of course, a consequence of nonlocalities and can be easily understood in the worldline picture in terms of loop clouds: Starting with the limit \\(k\\to 0\\), a loop cloud sitting at a maximum detects a constant field of strength \\(E_{\\rm max}\\). A sketch of this scenario is given in Fig. 5(a). If \\(k\\) is increased and the wavelength of the sine becomes shorter, the loop cloud overlaps more and more with the minima on either side of the maximum and the pole moves to larger \\(T\\) values. If \\(k\\) exceeds a certain value, in our example at about \\(k=0.8m\\), the two minima close by dominate the Wilson-loop expectation value, Fig. 5(b). Despite the maximum in the center of the loop cloud, the pole is at a larger \\(T\\) value than for the averaged field. Not until the loop cloud approaches the adjacent maxima, Fig. 5(c), do the \\(T\\) values become smaller again, finally converging to the value of the averaged field, Fig. 5(d).

Figure 3: The imaginary part of the effective action for a Sauter potential as a fraction of the locally constant-field approximation \\({\\rm Im}\\Gamma_{\\rm lc}\\) versus the width parameter \\(k\\) in units of \\(m\\): comparison of the numerical result with Nikishov’s analytic expression. \\(n_{\\rm L}=100000\\), \\(N=1000\\) ppl.

Since the Wilson-loop expectation value at a maximum of the field strength can be dominated by the adjacent minima, the inverse situation can also occur at a minimum where the result can be dominated by the two adjacent maxima. In this case, the first pole of \\(\\langle W\\rangle\\) is at a _smaller_ \\(T\\) value than for the averaged field, leading to a _larger_ imaginary part of the effective Lagrangian. This inversion is shown in Fig. 6, where the spatial distribution of the imaginary part of the effective action for \\(k=1.8m\\) is plotted in comparison to the constant-field limit \\(E\\equiv E_{0}\\). We observe that the nonlocalities induce a seemingly paradoxical phenomenon in this case: the maxima of the local pair-production rate occur at the minima of the electric field strength and vice versa. Figure 7 depicts the imaginary part of the total effective action per space-time volume for our example configuration versus the frequency \\(k\\). In contrast to its density at \\(x_{0}\\), \\(\\mathrm{Im}\\Gamma\\) does not fall below the result for the averaged field. For oscillation frequencies near \\(k=0\\), we observe that the locally constant-field approximation based on the derivative expansion fails rather early by an order of magnitude for \\(k\\simeq 0.5m\\); this is remarkable, since the effective expansion parameter \\(k^{2}/m^{2}\\simeq 0.25\\) might have been considered small enough.
In the opposite limit, for large frequencies \\(k\\), we obtain the averaged constant-field limit \\(E_{0}\\).

Figure 4: Position of the first pole \\(T_{\\mathrm{pol}}\\) of \\(\\langle W\\rangle\\) on the real proper-time axis at a maximum of the field strength. With increasing frequency \\(k\\), the pole moves from the constant-field limit \\(E\\equiv E_{\\mathrm{max}}\\) to the limit \\(E\\equiv E_{0}\\). Parameters of the field: \\(E_{0}=0.2(m^{2}/e)\\), \\(E_{\\mathrm{max}}=0.3(m^{2}/e)\\). In between, it develops an unexpected maximum corresponding to a minimum of the local production rate. Parameters of the loop cloud: \\(n_{\\mathrm{L}}=100000\\), \\(N=1000\\) ppl.

Figure 5: An artist’s view on a loop cloud (worldline ensemble) at a maximum of the field strength. For small frequencies, it detects only the maximum (a). After increasing the frequency, the two nearest minima dominate (b). For larger frequencies the cloud encounters further maxima (c), until it perceives an averaged field (d).

It is remarkable that the imaginary part of the effective action reaches the value of the averaged field for \\(k\\) values as small as about \\(k=m\\), whereas its density still fluctuates spatially for even larger \\(k\\) values, as seen in Fig. 6. The fluctuations cancel each other, so that they have no effect on the integrated quantity. The numerical accuracy does not eliminate the possibility of a \\(k\\)-dependent structure for \\(k\\) values larger than \\(m\\). According to the values of Fig. 7, the central values suggest a slight increase of the pair production for \\(k>m\\), until it falls back to the result for the averaged field if \\(k/m\\to\\infty\\). To definitely clarify this, larger loop ensembles are necessary at the expense of CPU time. However, the present result shows that any possible \\(k\\) dependence for \\(k>m\\) has to be relatively small and the averaged-field approximation yields good results in this range. Let us finally compare our results for the spatially sine-modulated field with those for spatially homogeneous fields with time dependencies. Especially the case of an electric field oscillating in time with frequency \\(\\omega\\) has been studied with WKB methods [16, 17, 18, 19] which were originally developed for ionization processes in atomic physics [55]. The nature of pair production in this case depends on the size of the "adiabaticity parameter" \\(\\gamma:=m\\omega/(eE)\\); for small \\(\\gamma\\ll 1\\), the result approaches the Schwinger formula and pair production thus is a nonperturbative phenomenon. For large \\(\\gamma\\gg 1\\), the result becomes perturbative in \\((eE)/(m\\omega)\\) and pair production arises from multiphoton scattering. In our case, we can, of course, also form a similar parameter4 \\(\\tilde{\\gamma}=mk/eE_{0}\\), with \\(\\tilde{\\gamma}\\) small or large roughly corresponding to the two limiting cases discussed above. However, it is important to stress that pair production is nonperturbative in both limits for our sine-modulated field. In particular, the large-\\(k/m\\) (or large \\(\\tilde{\\gamma}\\)) case cannot be understood in terms of multiphoton processes. Taking the external field to all orders into account is essential for the final result.

Figure 6: Spatial distribution of the imaginary part of the effective-action density for the sine-modulated potential with \\(k=1.8m\\) compared to the constant-field limit \\(E\\equiv E_{0}\\). Nonlocal effects lead to the seemingly paradoxical phenomenon that the pair-production rate is maximal at the field-strength minima and vice versa. \\(n_{\\rm L}=200000\\), \\(N=1000\\) ppl.

Figure 7: The imaginary part of the total effective action per space-time volume against the frequency \\(k\\). The dashed lines mark the locally constant-field approximation and the result for the averaged field \\(E\\equiv E_{0}\\), respectively. The former (dashed lines) misses the true result by an order of magnitude already for \\(k/m\\simeq 0.5\\). \\(n_{\\rm L}=200000\\), \\(N=1000\\) ppl.

## 6 Conclusion We have developed a new universal approach for computing local production rates for spontaneous pair creation by the Schwinger mechanism in scalar QED. Our method is based on the combination of the worldline formalism with Monte Carlo techniques. As a first result, we have not only rediscovered Nikishov's analytic result for the total pair-production rate in a Sauter potential, but moreover we have computed the local pair-production rate for this classic case for the first time. Most importantly, the algorithm is not restricted to any spatial symmetry of the given background potential but is applicable for arbitrary potentials. As a nontrivial example, we have applied the algorithm to a constant electric field modulated by a spatial sine oscillation. This field configuration is representative for a whole class of fields with large-scale structures and small-scale oscillations. By varying the spatial oscillation frequency, qualitatively different features of pair production can be investigated. For small frequencies, our numerical result agrees with the derivative expansion to lowest order; the latter breaks down completely for spatial variations on the order of a few times the Compton wavelength. On this length scale and below, our results show clearly that another approximation scheme becomes reliable: the local production rate can well be approximated by inserting the _spatially averaged_ field into the Schwinger formula. This averaged-field approximation can be trusted on the few-percent level for spatial variations of the size of the Compton wavelength. We would like to emphasize that the small validity bound of the derivative expansion for the imaginary part of the effective action density is not related to the same observation for the real part, as discovered in [39]; the latter arises from a subtle interplay between nonlocal quantum contributions and local counterterms, whereas the imaginary part is not affected by renormalization counterterms. Furthermore, the derivative expansion for the real part of the _integrated_ effective action works well even for Compton-scale variations [56], whereas it breaks down early for the imaginary part, as displayed in Fig. 7. Apart from these quantitative results for the particular field configurations considered here, our findings emphasize the crucial role of nonlocalities in the phenomenon of pair production. Without the feature of delocalization of a virtual pair, spontaneous vacuum decay would not occur. The worldline picture underlying our algorithm is particularly powerful in capturing these nonlocalities and, moreover, understanding their consequences in an intuitive way. Especially our results for local pair-production rates illustrate the nature and the role of nonlocalities transparently.
For instance, the seemingly paradoxical situation that maxima of pair-production rates can occur at minima of the field strength (cf. Sect. 5) cannot be understood from a local approximation. However, the worldline picture identifies a natural explanation of this phenomenon in terms of the delocalization properties of the virtual pairs described by the worldline trajectory. From a technical perspective, we have developed a numerical Monte Carlo algorithm that on the one hand requires a Euclidean formulation for the quantum fluctuations, but on the other hand produces reliable results for truly Minkowski-valued physical observables. The inherent overlap problem is solved in the present context by a physically motivated ansatz for a suitable cumulative distribution function (CDF) to which the numerical data can be fitted and that can be analytically continued to Minkowski space. Even though the success of this procedure depends strongly on the problem at hand, we believe that such techniques can be useful in other Minkowski-valued problems as well. The algorithmic strategy itself has been invented in the context of Euclidean field theory [51], where it has turned out to be highly powerful in a study of nonperturbative worldline dynamics. Several extensions of our work are desirable and possible. So far, we have only considered spatial inhomogeneities, but any realistic field configuration will also exhibit variations in time. In fact, timelike variations bring in a new complication, since our Monte Carlo worldlines live in imaginary time, whereas physical fields depend on real time. Therefore, our algorithm is directly applicable to all those cases where the physical field is known analytically, such that its analytic continuation to imaginary time can be evaluated and plugged into the numerics. For instance, the exact result for a solitonic profile in time direction as solved in [57] will be a benchmark test for such an investigation. Furthermore, our results can, in principle, straightforwardly be generalized to ordinary spinor QED. As a new complication, the Pauli term \\(\\sim\\sigma_{\\mu\\nu}F_{\\mu\\nu}\\) occurs in the worldline integrand. Since this term depends also on the worldline trajectory, the probability distribution function (PDF) of the ensemble will not only depend on the quantity \\(I\\) as defined in Eq. (17), but also on the worldline averaged Pauli-term exponential; let us denote the latter with \\(J\\), which is also a scalar. Our algorithm might be generalized as follows: first, compute the PDF of \\(J\\) from the ensemble and bin the loops according to their \\(J\\) value. Then, apply the present algorithm to each \\(J\\) bin separately; in particular, the same analytic-continuation technique can be used. Finally, integrate over \\(J\\) with the aid of the PDF of \\(J\\). It is important to note that the \\(J\\) integral can be done last, since the Pauli-term worldline average cannot induce any poles for the proper-time integral. Of course, since each relevant \\(J\\) bin has to contain sufficiently many worldlines, this generalization of our algorithm will at least be an order of magnitude more time consuming than the one for scalar QED. At this point, we should stress that the computations for the present work have still been performed on ordinary desktop PC's.
Finally, it is instructive to compare our method to the instanton technique of [27, 30], where the instanton approximation of the worldline integral has been shown to give the leading-order contribution to pair production. For instance, in the constant-field case, the one-pair-production rate is generated by one instanton which is a circular loop. Small fluctuations around this path lead to the correct imaginary prefactor. In comparison to this, our worldlines are extraordinarily complex. Not a single worldline loop in our ensembles resembles a circle or fluctuations thereof. This gives rise to the conjecture that the computation of the imaginary part requires very little information about the shape of the loops. We expect that we should be able to extract the instantonic content of our loops by a suitable cooling procedure that removes large-amplitude fluctuations. In view of the success of the instanton approximation, only instantonic plus small-amplitude-fluctuation information appears to be relevant for pair production. This agrees with our observation that pair production is induced by delocalized \"large\" loops that can acquire enough energy in the \\(E\\) field. Therefore, it is well possible that a different loop discretization which optimizes instantonic properties allows for an even more efficient computation of the imaginary part. A further investigation of this topic may lead to an even deeper understanding of pair production. ## Acknowledgment We are grateful to G.V. Dunne, J. Hammerling, K. Langfeld, J. Sanchez-Guillen, M.G. Schmidt, C. Schubert, I.-O. Stamatescu, and R. Vazquez for many useful discussions, and to G.V. Dunne for helpful comments on the manuscript. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under contract Gi 328/1-3 (Emmy-Noether program). ## Appendix A Constant field: straightforward approach In the following, we present a straightforward realization of worldline numerics for calculating the pair-production rate in the constant-field case. The algorithm presented here is an immediate generalization of the standard algorithm successfully used for the real part of the effective action [37, 38, 39, 40, 41]. For a constant field in four dimensions, Eq. (5) reads \\[\\Gamma^{1}_{\\rm E}=-\\frac{1}{(4\\pi)^{2}}\\int_{0}^{\\infty}\\frac{dT}{T^{3}}e^{- m^{2}T}\\int d^{4}x_{0}\\left(\\langle e^{-TeEI}\\rangle-1-\\frac{1}{6}T^{2}e^{2}E^{2 }\\right), \\tag{21}\\]where \\(I\\) is defined as in Eq. (13), and the counterterms for on-shell renormalization are included. Rotating the \\(T\\) integration contour onto the imaginary axis and substituting \\(s=-iTeE\\) yields a Fourier integral, \\[\\Gamma^{1}_{\\rm E}=\\left(\\frac{eE}{4\\pi}\\right)^{2}\\int_{0}^{\\infty}\\frac{ds}{s ^{3}}e^{-i\\frac{m^{2}}{eE}s}\\int d^{4}x_{0}\\left(\\langle e^{-iIs}\\rangle-1+ \\frac{1}{6}s^{2}\\right). \\tag{22}\\] If the worldline-ensemble average \\(\\langle e^{-iIs}\\rangle\\) can be computed reliably, Eq. (22) offers a highly efficient algorithm with the aid of the FFT: in this case, \\(\\Gamma^{1}_{\\rm E}\\) can be computed for a whole spectrum of frequencies \\(m^{2}/eE\\) all at once with FFT. The resulting imaginary part is shown in Fig. 8. It is highly remarkable that this numerical procedure gives satisfactory results in a wide range of scales, extending over five orders of magnitude, with little consumption of CPU time. However, the algorithm fails for small field strengths. 
The precise limit is given by the size of the largest loops in the finite loop ensemble: only loops with \\(|I|\\) values larger than \\(m^{2}/eE\\) contribute to the imaginary part of \\(\\Gamma^{1}_{\\rm E}\\). For weak fields, this implies that only a few or even no loops contribute and the computation fails. Beside this problem which is already relevant for the constant-field case, there is a second limitation. For a different contour in the complex \\(T\\) plane which supports large \\({\\rm Re}T\\) values, the Wilson-loop expectation value is dominated by the loop with the largest \\(I\\) value. The Monte Carlo algorithms break down here, since the error bars become as large as the central value. In general, for inhomogeneous background fields, it is not possible to find a suitable integration contour to avoid this problem. These limitations of the straightforward approach are a manifestation of the fact that the Euclidean worldline ensemble has insufficient overlap with Minkowski-valued observables for weak fields. ## References * [1] F. Sauter, Z. Phys. **69**, 742 (1931). * [2] W. Heisenberg and H. Euler, Z. Phys. **98**, 714 (1936). * [3] V. Weisskopf, Kong. Dans. Vid. Selsk., Mat.-fys. Medd. **XIV**, 6 (1936). * [4] J. S. Schwinger, Phys. Rev. **82**, 664 (1951). * [5] S. W. Hawking, Nature **248**, 30 (1974). * [6] T. Damour and R. Ruffini, Phys. Rev. Lett. **35**, 463 (1975). * [7] S. P. Kim and D. N. Page, arXiv:gr-qc/0401057. * [8] A. Casher, H. Neuberger and S. Nussinov, Phys. Rev. D **20**, 179 (1979). * [9] L. Parker, Phys. Rev. **183**, 1057 (1969). * [10] B. Garbrecht and T. Prokopec, Phys. Rev. D **70**, 083529 (2004) [arXiv:gr-qc/0406114]. Figure 8: Imaginary part of the effective action obtained by FFT. \\(n_{\\rm L}=5000\\), \\(N=1000\\) ppl. * [11] J. Arthur _et al._ [LCLS Design Study Group Collaboration], SLAC-R-0521 (1998). * [12] G. Materlik, (ed.), T. Tschentscher, (ed.), DESY-01-011, DESY-2001-011, DESY-01-011E, DESY-2001-011E, DESY-TESLA-2001-23, DESY-TESLA-FEL-2001-05, ECFA-2001-209 (2001). * [13] (ed. ) Brinkmann, R. et al., DESY-02-167 (2002). * [14] A. Ringwald, arXiv:hep-ph/0304139. * [15] B. S. DeWitt, Phys. Rept. **19**, 295 (1975). * [16] E. Brezin and C. Itzykson, Phys. Rev. D **2**, 1191 (1970). * [17] V. S. Popov, Sov. Phys. JETP **34**, 709 (1972). * [18] V. S. Popov and M. S. Marinov, Yad. Fiz. **16**, 809 (1972). * [19] A. D. Piazza, Phys. Rev. D **70**, 053013 (2004). * [20] J. Hallin and P. Liljenberg, Phys. Rev. D **52**, 1150 (1995) [arXiv:hep-th/9412188]. * [21] H. M. Fried and R. P. Woodard, Phys. Lett. B **524**, 233 (2002) [arXiv:hep-th/0110180]. * [22] J. Avan, H. M. Fried and Y. Gabellini, Phys. Rev. D **67**, 016003 (2003) [arXiv:hep-th/0208053]. * [23] S. A. Smolyansky, G. Ropke, S. M. Schmidt, D. Blaschke, V. D. Toneev and A. V. Prozorkevich, arXiv:hep-ph/9712377. * [24] S. A. Smolyansky, A. V. Prozorkevich, S. M. Schmidt, D. Blaschke, G. Roepke and V. D. Toneev, Int. J. Mod. Phys. E **7**, 515 (1998) [arXiv:nucl-th/9709057]. * [25] Y. Kluger, E. Mottola and J. M. Eisenberg, Phys. Rev. D **58**, 125015 (1998) [arXiv:hep-ph/9803372]. * [26] R. Alkofer, M. B. Hecht, C. D. Roberts, S. M. Schmidt and D. V. Vinnik, Phys. Rev. Lett. **87**, 193902 (2001) [arXiv:nucl-th/0108046]. * [27] I. K. Affleck, O. Alvarez and N. S. Manton, Nucl. Phys. B **197**, 509 (1982). * [28] S. P. Kim and D. N. Page, Phys. Rev. D **65**, 105002 (2002) [arXiv:hep-th/0005078]. * [29] S. P. Kim and D. N. Page, arXiv:hep-th/0301132. * [30] G. V. Dunne and C. 
Schubert, arXiv:hep-th/0507174. * [31] G. V. Dunne and T. M. Hall, Phys. Rev. D **60**, 065002 (1999) [arXiv:hep-th/9902064]. * [32] G. V. Dunne and C. Schubert, JHEP **0206**, 042 (2002) [arXiv:hep-th/0205005]. * [33] D. D. Dietrich, Phys. Rev. D **68**, 105005 (2003) [arXiv:hep-th/0302229]. * [34] D. D. Dietrich, Phys. Rev. D **70**, 105009 (2004) [arXiv:hep-th/0402026]. * [35] H. Gies, Phys. Rev. D **61**, 085021 (2000) [arXiv:hep-ph/9909500]. * [36] W. Dittrich and H. Gies, Springer Tracts Mod. Phys. **166**, 1 (2000). * [37] H. Gies and K. Langfeld, Nucl. Phys. B **613**, 353 (2001) [arXiv:hep-ph/0102185]. * [38] H. Gies and K. Langfeld, Int. J. Mod. Phys. A **17**, 966 (2002) [arXiv:hep-ph/0112198]. * [39] K. Langfeld, L. Moyaerts and H. Gies, Nucl. Phys. B **646**, 158 (2002) [arXiv:hep-th/0205304]. * [40] H. Gies, K. Langfeld and L. Moyaerts, JHEP **0306**, 018 (2003) [arXiv:hep-th/0303264]. * [41] L. Moyaerts, K. Langfeld and H. Gies, arXiv:hep-th/0311168. * [42] R. P. Feynman, Phys. Rev. **80**, 440 (1950). * [43] M. B. Halpern, A. Jevicki and P. Senjanovic, Phys. Rev. D **16**, 2476 (1977). * [44] A.M. Polyakov, \"Gauge fields and strings,\" Harwood, Chur, (1987). * [45] Z. Bern and D. A. Kosower, Nucl. Phys. B **379**, 451 (1992). * [46] M. J. Strassler, Nucl. Phys. B **385**, 145 (1992) [arXiv:hep-ph/9205205]. * [47] M. G. Schmidt and C. Schubert, Phys. Lett. B **318**, 438 (1993) [arXiv:hep-th/9309055]. * [48] M. G. Schmidt and C. Schubert, Phys. Rev. D **53**, 2150 (1996) [arXiv:hep-th/9410100]. * [49] M. Reuter, M. G. Schmidt and C. Schubert, Annals Phys. **259**, 313 (1997) [arXiv:hep-th/9610191]. * [50] C. Schubert, Phys. Rept. **355**, 73 (2001) [arXiv:hep-th/0101036]. * [51] H. Gies, J. Sanchez-Guillen and R. A. Vazquez, arXiv:hep-th/0505275, to appear in JHEP. * [52] M. G. Schmidt and I. O. Stamatescu, Nucl. Phys. Proc. Suppl. **119**, 1030 (2003) [arXiv:hep-lat/0209120]. * [53] M. G. Schmidt and I. Stamatescu, Mod. Phys. Lett. A **18**, 1499 (2003). * [54] A. I. Nikishov, Nucl. Phys. B **21**, 346 (1970). * [55] L. V.. Keldysh, Sov. Phys. JETP, **20**, 1307 (1965). * [56] N. Graham, V. Khemani, M. Quandt, O. Schroeder and H. Weigel, Nucl. Phys. B **707**, 233 (2005) [arXiv:hep-th/0410171]. * [57] G. V. Dunne and T. Hall, Phys. Rev. D **58**, 105022 (1998) [arXiv:hep-th/9807031].
We employ the recently developed worldline numerics, which combines string-inspired field theory methods with Monte Carlo techniques, to develop an algorithm for the computation of pair-production rates in scalar QED for inhomogeneous background fields. We test the algorithm with the classic Sauter potential, for which we compute the local production rate for the first time. Furthermore, we study the production rate for a superposition of a constant \\(E\\) field and a spatially oscillating field for various oscillation frequencies. Our results reveal that the approximation by a _local_ derivative expansion already fails for frequencies small compared to the electron mass scale, whereas for strongly oscillating fields a derivative expansion for the _averaged_ field represents an acceptable approximation. The worldline picture makes the nonlocal nature of pair production transparent and facilitates a profound understanding of this important quantum phenomenon. HD-THEP-05-07, [http://arXiv.org/abs/hep-ph/0505099](http://arXiv.org/abs/hep-ph/0505099)
Asymptotic analysis of loss probabilities in \\(GI/M/m/n\\) queueing systems as \\(n\\) increases to infinity Vyacheslav M. Abramov Department of Mathematics, University of California, Berkeley, CA 94720, USA [email protected] ## 1. **Introduction** It is well-known that queueing systems with many servers are good models for communication systems. The study of queues with many servers, and especially the analysis of the loss probability, has a long history going back to the works of Erlang (see [7]), who in 1917 first gave fundamental results for Markovian queueing systems, and to the works of Palm, Pollaczek and other researchers (e.g. [15], [16], [11], [19], [22]), who then extended Erlang's results to non-Markovian systems. Since then these results have been developed in a large number of investigations, motivated by the growing development of modern telecommunication systems. Nowadays the theory of loss queueing systems is very rich. There are a number of different directions in the theory, including management and control, redundancy, analysis of retrials, impatient customers and so on. In the present paper we study the \\(GI/M/m/n\\) queueing system in which the parameter \\(n\\), the number of possible waiting places, is large. This assumption is typical for real telecommunication systems. Under this assumption the paper studies the asymptotic behavior of the loss probability, where the most significant result seems to be related to the case where the load parameter approaches \\(1\\) from the left. The case of a heavy load parameter is the most interesting in practice. In the pre-project study and system design stage an engineer is especially interested in the behavior of the loss probability in heavily loaded systems. We consider the \\(GI/M/m/n\\) queue, where \\(m>1\\) is the number of servers, and \\(n\\geq 0\\) is the admissible queue-length. Let \\(A(x)\\) denote the probability distribution function of an interarrival time, and let \\(\\lambda\\) be the reciprocal of the expected interarrival time. For real \\(s\\geq 0\\) we denote \\(\\alpha(s)=\\int_{0}^{\\infty}\\mathrm{e}^{-sx}\\mathrm{d}A(x)\\). The parameter of the service time distribution is denoted \\(\\mu\\), and the load of the system is \\(\\varrho=\\lambda/(m\\mu)\\). (In Theorem 3.2 the parameter \\(\\varrho\\) is assumed to depend on \\(n\\). However we do not write this dependence explicitly, assuming that this dependence is clear from the formulation of the aforementioned theorem.) In the case of the \\(GI/M/1/n\\) queueing system, for the stationary loss probability \\(p_{n}\\) we have the representation (Abramov [2]) \\[p_{n}=\\frac{1}{\\pi_{n}}, \\tag{1.1}\\] where the generating function \\(\\Pi(z)\\) of \\(\\pi_{j}\\), \\(j=0,1,\\ldots\\) is the following: \\[\\Pi(z)=\\sum_{j=0}^{\\infty}\\pi_{j}z^{j}=\\frac{\\alpha(\\mu-\\mu z)}{\\alpha(\\mu-\\mu z)-z},\\ \\ |z|<\\sigma, \\tag{1.2}\\] where \\(\\sigma\\) is the least in absolute value root of the functional equation \\(z=\\alpha(\\mu-\\mu z)\\) (the variable \\(z\\) is assumed to be real). It is well-known (e.g. Takacs [23]) that \\(\\sigma\\) belongs to the open interval (0,1) if \\(\\varrho<1\\), and it is equal to 1 otherwise. Note that another representation for the loss probability \\(p_{n}\\) is given in Miyazawa [14]. The value \\(\\pi_{n}\\) has the following meaning: it is the expected number of arrivals up to the first loss of a customer arriving at the stationary system.
\\(\\pi_{n}\\) satisfies the recurrence relation of convolution type \\[\\pi_{n}=\\sum_{i=0}^{n}r_{i}\\pi_{n-i+1},\\ \\ n=0,1,\\ldots, \\tag{1.3}\\] where the initial value is \\(\\pi_{0}=1\\). Specifically, \\[r_{i}=\\int_{0}^{\\infty}\\mathrm{e}^{-\\mu x}\\frac{(\\mu x)^{i}}{i!}\\mathrm{d}A(x),\\ \\ i=0,1,\\ldots \\tag{1.4}\\] The recurrence relation of the type \\[Q_{n}=\\sum_{i=0}^{n}f_{i}Q_{n-i+1}, \\tag{1.5}\\] where \\(f_{0}>0\\), \\(f_{i}\\geq 0\\) (\\(i\\geq 1\\)) and \\(f_{0}+f_{1}+\\cdots=1\\), has been originally studied by Takacs [24], p. 22 and then developed by Postnikov [17], Section 25. For the readers' convenience, some of these results, necessary for the purpose of the paper, are collected in the Appendix. By exploiting (1.3), Abramov [2] studied the asymptotic behavior of the loss probability \\(p_{n}\\), as \\(n\\rightarrow\\infty\\). The analysis of Abramov [2] is based on the mentioned results of Takacs [24] and Postnikov [17]. For other applications of the mentioned results of Takacs [24] and Postnikov [17] see also Abramov [1], [3], [4]. The asymptotic analysis of the \\(GI/M/m/n\\) queueing system, as \\(n\\rightarrow\\infty\\), is a much more difficult problem than the same problem for the \\(GI/M/1/n\\) queue. Recently, Choi et al [9] and Kim and Choi [12] obtained some new results related to the \\(GI/M/m/n\\) and \\(GI^{X}/M/m/n\\) queues. In Choi et al [9] an exact estimate of the convergence rate of the stationary \\(GI/M/m/n\\) queue-length distribution to the stationary queue-length distribution of the \\(GI/M/m\\) queueing system, as \\(n\\rightarrow\\infty\\), is obtained. In Kim and Choi [12] a detailed analysis of the loss probability of the \\(GI^{X}/M/m/n\\) queue is provided. The analysis of these two aforementioned papers is based on the development of the earlier results of Choi and Kim [8] in a nontrivial fashion. The analysis of Choi and Kim [8] and Choi et al [10] in turn uses deep results from the theory of analytic functions, including a nonstandard result of the theory of power series given by a theorem of Wiener. Ramalhoto and Gomez-Corral [18] discuss retrials in \\(M/M/r/d\\) loss queues and present appropriate decomposition formulae for losses and delay in those queueing systems. In the case of a Markovian system with many servers the results of Ramalhoto and Gomez-Corral [18] are useful for analysis of the effect of retrials in the asymptotic analysis of losses. The present paper provides the asymptotic analysis of the loss probability of the \\(GI/M/m/n\\) queue as \\(n\\to\\infty\\), by reduction of the sequence \\(\\pi_{m,0,n}^{(n)}\\) to the above representation (1.5), and then estimates the loss probability \\(p_{m,n}=1/\\pi_{m,0,n}^{(n)}\\) by using the results of Takacs [24] and Postnikov [17] given in the Appendix. (The precise definition of the sequence \\(\\pi_{m,0,n}^{(n)}\\) is given later.) This is the same idea as in the earlier paper of Abramov [2] related to the \\(GI/M/1/n\\) queue; however, it is necessary to underline the following. Whereas in the case of \\(GI/M/1/n\\) the reduction to (1.3) is straightforward, and the representation of the probabilities \\(r_{n}\\) and their generating function is very simple, the reduction to the recurrence relation of (1.5) in the case of the \\(GI/M/m/n\\) queue, \\(m>1\\), is not obvious, and the representation for the probabilities \\(r_{k,m-k,n}\\) and \\(r_{0,m,j}\\) and the associated generating function is more difficult. (The aforementioned probabilities \\(r_{k,m-k,n}\\) and \\(r_{0,m,j}\\) are defined later.)
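For the single-server case, the recurrence (1.3) can be solved numerically in a few lines. The Python sketch below computes \\(\\pi_{1},\\ldots,\\pi_{n}\\) forward from \\(\\pi_{0}=1\\) and returns \\(p_{n}=1/\\pi_{n}\\) of (1.1); as an example it uses deterministic interarrival times, for which the \\(r_{i}\\) of (1.4) are Poisson weights and \\(\\alpha(s)=\\mathrm{e}^{-s/\\lambda}\\), and compares the decay rate of \\(p_{n}\\) with the root \\(\\sigma\\) of \\(z=\\alpha(\\mu-\\mu z)\\). The parameter values are illustrative only, and the multiserver reduction described in this paper is not implemented here.

```python
from math import exp, factorial

def loss_probability(r, n):
    """Solve the convolution recurrence (1.3), pi_j = sum_{i=0}^j r_i pi_{j-i+1}
    with pi_0 = 1, forward for pi_1, ..., pi_n and return p_n = 1 / pi_n."""
    pi = [1.0, 1.0 / r[0]]                    # j = 0 gives pi_1 = pi_0 / r_0
    for j in range(1, n):                     # the relation at j yields pi_{j+1}
        s = sum(r[i] * pi[j - i + 1] for i in range(1, j + 1))
        pi.append((pi[j] - s) / r[0])
    return 1.0 / pi[n]

# Example: deterministic interarrival times of length 1/lam (single server, m = 1),
# service rate mu, load rho = lam/mu < 1; then r_i = e^{-mu/lam} (mu/lam)^i / i!.
lam, mu, n = 0.5, 1.0, 30
x = mu / lam
r = [exp(-x) * x**i / factorial(i) for i in range(n + 1)]
p_n, p_prev = loss_probability(r, n), loss_probability(r, n - 1)

# Geometric decay check: sigma solves z = alpha(mu - mu*z) with alpha(s) = e^{-s/lam}.
sigma = 0.0
for _ in range(500):
    sigma = exp(-(mu - mu * sigma) / lam)
print(p_n, p_n / p_prev, sigma)               # p_n decays like sigma^n for rho < 1
```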
Furthermore, the sequence \\(\\pi_{m,0,j}^{(n)}\\), \\(j\\leq n\\), is structured as schema with the series classes, and, as \\(j=n\\), the value \\(p_{m,n}=1/\\pi_{m,0,n}^{(n)}\\) coincides with the desired loss probability. For this reason the class of asymptotic results, that we could obtain here for \\(GI/M/m/n\\) queue, is poorer than that for the \\(GI/M/1/n\\) queue in Abramov [2]. Our approach has the following two essential advantages compared to the pure analytical approaches of Choi et al [10], Kim and Choi [12], Choi et al [9], Simonot [20]: \\(\\bullet\\) The problem reduces to the known classic results (Theorem of Takacs [24] and Tauberian Theorem of Postnikov [17]), permitting us to substantially diminish the cumbersome algebraic calculations and clearly understand the results. \\(\\bullet\\) Along with standard asymptotic results having a quantitative feature we also prove one interesting property related to the case of \\(n\\) increasing to infinity and the load approaching \\(1\\) from the left. (For the more precise assumptions see formulation of Theorem 3.2.) Specifically it is proved that the obtained asymptotic representation is the same for all \\(m\\geq 1\\), i.e. it coincides with asymptotic representation obtained earlier for the \\(GI/M/1/n\\) queueing system in Abramov [2]. As \\(n\\) increases to infinity, this asymptotic property remains true for all fixed \\(\\varrho\\geq 1\\). The conditions of Theorem 3.2 and the first two cases of Theorem 3.1 all fall into the domain of heavy traffic theory (e.g. Borovkov [6], Whitt [25], [26]). Although the aforementioned recent results of Whitt [25], [26] are related to more general models, they however do not cover the results of this paper. Our approach is based on asymptotic analysis of relations (3.4) and (3.5) which is based on asymptotic representation for the root of equation \\(z=\\alpha(\\mu m-\\mu mz)\\) (see formulation of Theorem 3.1) as \\(\\varrho\\) approaches \\(1\\), presented in the book of Subhankulov [21]. (Chapter 9 of this book is devoted to application of Tauberian theorems to a specific moving server problem arising in Operations Research.) The rest of the paper is organized as follows. In Section 2 we give some heuristic arguments preparing the reader to the results of the paper. There are two theorems presenting the main results in Section 3. In Section 4 we derive the recurrence relation of convolution type for the loss probability. These recurrence relations are then used to prove Theorem 3.1 of the paper. Theorem 3.1 is proved in Section 5 and Section 6. In Section 7 we study the behavior of the loss probability as the load approaches 1 from the left. In Section 8 numerical example supporting the theory is provided. The Appendix contains auxiliary results necessary for the purpose of the paper: the theorem of Takacs [24] and the Tauberian theorem of Postnikov [17]. ## 2. **Some heuristic arguments** The \\(GI/M/m/n\\) queueing system, \\(m>1\\), is more complicated than its analog with one server. Letting \\(n\\) be infinity, discuss first the \\(GI/M/1\\) and \\(GI/M/m\\) queueing systems. It is well-known that the stationary queue-length distribution of \\(GI/M/1\\) queue (immediately before arrival of a customer with large order number) is geometric. The same (customer-stationary) queue-length distribution of \\(GI/M/m\\) queue, provided that immediately before arrival at least \\(m-1\\) servers are occupied, is geometric as well. 
Thus, a typical behavior of the queue-length processes of \\(GI/M/1\\) and \\(GI/M/m\\) queues is similar, if we assume additionally that a customer arriving into \\(GI/M/m\\) queueing system finds at least \\(m-1\\) servers busy (see Kleinrock [13]). The similar situation holds in the case of the \\(GI/M/1/n\\) and \\(GI/M/m/n\\) queues. Specifically, in the case of the \\(GI/M/1/n\\) queue the stationary loss probability satisfies (1.1)-(1.3). In the case of the \\(GI/M/m/n\\) queue the _conditional_ stationary loss probability _provided that upon arrival at least \\(m-1\\) servers are busy_ satisfies (1.3) as well. In the case of the \\(GI/M/m/n\\) queue the only difference is that, the value \\(\\mu\\) in (1.2) and (1.4) should be replaced with \\(\\mu m\\), and \\(\\sigma\\) should be the least in absolute value root of the equation \\(z=\\alpha(\\mu m-\\mu mz)\\) rather than of the equation \\(z=\\alpha(\\mu-\\mu z)\\). In the sequel, the least root of the functional equation \\(z=\\alpha(\\mu m-\\mu mz)\\) is denoted \\(\\sigma_{m}\\). Let us now discuss the stationary probabilities of the \\(GI/M/m/n\\) system, \\(m>1\\), without the condition above. It is clear that eliminating the condition above proportionally changes the stationary probabilities \\(\\mathbf{P}\\{\\) arriving customer meets \\(m+j-1\\) customers in the system \\(\\}\\), \\(j\\geq 0\\). That is, the loss probability is changed proportionally as well. This enables us to anticipate the behavior of the loss probability as \\(n\\to\\infty\\) in some cases. Specifically, in the case \\(\\varrho<1\\) and \\(n\\) large, the loss probability is equal to conditional loss probability, provided that upon arrival at least \\(m-1\\) servers are busy, multiplied by some constant. That is, as \\(n\\) large, the both abovementioned loss probabilities, conditional and unconditional, are of the same order. Following Abramov [2] (see also Choi et al [9]) this order is \\(O(\\sigma_{m}^{n})\\). The precise result is given by Theorem 3.1 below. If \\(n\\) large and \\(\\varrho\\geq 1\\), then the probability that arriving customer meets less than \\(m-1\\) customers in the system is small, and therefore, the loss probability should be approximately the same as the conditional stationary probability that upon arrival of a customer at least \\(m-1\\) servers are busy. That is, following Abramov [2] (see also Choi et al [9]) one can expect, that in the case of \\(\\varrho\\geq 1\\), the limiting stationary loss probability of the \\(GI/M/m/n\\) queue, as \\(n\\to\\infty\\), should be equal to \\((\\varrho-1)/\\varrho\\) for all \\(m\\). Can the last property of asymptotic independence of \\(m\\) be extended as \\(n\\) increases to infinity and \\(\\varrho\\) approaches \\(1\\)? The paper provides the condition for this asymptotic independence in this case. ## 3. **Formulation of the main results** **Theorem 3.1**.: _If \\(\\varrho>1\\) then for any \\(m\\geq 1\\)_ \\[\\lim_{n\\to\\infty}p_{m,n}=\\frac{\\varrho-1}{\\varrho}. \\tag{3.1}\\] _If \\(\\varrho=1\\) and \\(\\varrho_{2}=\\int_{0}^{\\infty}(\\mu x)^{2}dA(x)<\\infty\\) then for any \\(m\\geq 1\\)_ \\[\\lim_{n\\to\\infty}np_{m,n}=\\frac{\\varrho_{2}}{2}. \\tag{3.2}\\] _If \\(\\varrho=1\\) and \\(\\varrho_{3}=\\int_{0}^{\\infty}(\\mu x)^{3}dA(x)<\\infty\\) then for large \\(n\\) and any \\(m\\geq 1\\) we have_ \\[p_{m,n}=\\frac{\\varrho_{2}}{2n}+O\\Big{(}\\frac{\\log n}{n^{2}}\\Big{)}. 
\\tag{3.3}\\] _If \\(\\varrho<1\\) then for \\(p_{m,n}\\) we have the limiting relation:_ \\[\\lim_{n\\to\\infty}\\frac{p_{m,n}}{\\sigma_{m}^{n}}=K_{m}[1+\\mu m\\alpha^{\\prime}( \\mu m-\\mu m\\sigma_{m})], \\tag{3.4}\\] _where \\(\\alpha^{\\prime}(\\cdot)\\) denotes the derivative of \\(\\alpha(\\cdot)\\),_ \\[K_{m}=\\Big{[}1+(1-\\sigma_{m})\\sum_{j=1}^{m}\\frac{{m\\choose j}C_{j}}{(1-\\varphi _{j})}\\ \\frac{m(1-\\varphi_{j})-j}{m(1-\\sigma_{m})-j}\\Big{]}^{-1}, \\tag{3.5}\\] \\[\\varphi_{j}=\\int_{0}^{\\infty}e^{-\\mu jx}dA(x),\\] \\[C_{j}=\\prod_{i=1}^{j}\\frac{1-\\varphi_{j}}{\\varphi_{j}},\\] _and \\(\\sigma_{m}\\) is the least in absolute value root of functional equation:_ \\[z=\\alpha(\\mu m-\\mu mz).\\] Theorem 3.1 shows that if \\(\\varrho>1\\) then the limiting stationary loss probability is independent of parameter \\(m\\). If \\(\\varrho=1\\) and \\(\\varrho_{2}<\\infty\\) then \\(\\lim_{n\\to\\infty}np_{m,n}\\) is independent of parameter \\(m\\) as well. The proof of (3.1) seems to be given by simple straightforward arguments (extended version of the heuristic arguments of Section 2). Nevertheless, all results are proved by reduction to the abovementioned theorems of Takacs[24] and Postnikov [17] given in the Appendix. The most significant result of Theorem 3.1 is (3.4). This result is then used to prove the statements of Theorem 3.2 on the behavior of the loss probability as the load approaches \\(1\\) from the left. This behavior of the loss probability is given by the following theorem. **Theorem 3.2**.: _Let \\(\\varrho=1-\\varepsilon\\), where \\(\\varepsilon>0\\), and \\(\\varepsilon n\\to C\\) as \\(n\\to\\infty\\) and \\(\\varepsilon\\to 0\\). Assume that \\(\\varrho_{3}=\\varrho_{3}(n)\\) is a bounded sequence in \\(n\\), and there exists \\(\\widetilde{\\varrho}_{2}=\\lim_{n\\to\\infty}\\varrho_{2}(n)\\). In the case where \\(C>0\\) for any \\(m\\geq 1\\) we have_ \\[p_{m,n}=\\frac{\\varepsilon e^{-2C/\\widetilde{\\varrho}_{2}}}{1-e^{-2C/\\widetilde {\\varrho}_{2}}}[1+o(1)]. \\tag{3.6}\\] _In the case where \\(C=0\\) for any \\(m\\geq 1\\) we have_ \\[p_{m,n}=\\frac{\\widetilde{\\varrho}_{2}}{2n}+o\\Big{(}\\frac{1}{n}\\Big{)}. \\tag{3.7}\\] Theorem 3.2 shows that as \\(\\varrho\\) approaches \\(1\\) from the left, the loss probability \\(p_{m,n}\\) becomes independent of parameter \\(m\\) when \\(n\\) large, and the asymptotic behavior of the loss probability is exactly the same as for the \\(GI/M/1/n\\) queue. ## 4. **Derivation of the recurrence equations for the loss probability** For the sake of convenience, in this section we keep in mind that the first \\(m-1\\) states of the \\(GI/M/m/n\\) queue-length process form one special class. If an arriving customer occupies one of servers, then the system is assumed to be in this class, and the states of this class are numbered \\(1\\), \\(2\\), , \\(m\\). Otherwise, the system is in the other class with states \\(m+1\\), \\(m+2\\), , \\(m+n\\), where the last state, \\(m+n\\), is associated with a loss of an arriving customer. For example, if \\(n=0\\), then the second class of the \\(GI/M/m/0\\) queueing system consists of one state only. Let us now build the recurrence relation similar to those of the \\(GI/M/1/n\\) queue. We start from the \\(GI/M/1/0\\) queue. 
For this queue we have \[\pi_{1,0}=\frac{1}{r_{0,1}}, \tag{4.1}\] where \[r_{0,1}=\varphi_{1}=\int_{0}^{\infty}\mathrm{e}^{-\mu x}\mathrm{d}A(x).\] Equation (4.1) formally follows from the recurrence relation \(\pi_{0,1}=r_{0,1}\pi_{1,0}\), where \(\pi_{0,1}=1\). The loss probability for the \(GI/M/1/0\) queue is equal to \[p_{1,0}=\frac{1}{\pi_{1,0}}=\varphi_{1}.\] Before considering the case of the \(GI/M/m/n\) queue, notice that the value \(\pi_{m-k,k}\) has the meaning of the expected number of arrivals to the stationary system up to the first time an arriving customer finds \(m-k\) servers busy and \(k\) remaining servers free. In the case of the \(GI/M/2/0\) queue, by the total expectation formula we have \[\pi_{1,1}=r_{0,2}\pi_{2,0}+r_{1,1}\pi_{1,1},\] where \[r_{1,1}=2\int_{0}^{\infty}[1-\mathrm{e}^{-\mu x}]\mathrm{e}^{-\mu x}\mathrm{ d}A(x),\] \[r_{0,2}=\varphi_{2}=\int_{0}^{\infty}\mathrm{e}^{-2\mu x}\mathrm{d}A(x).\] Then, by the total expectation formula, the recurrence relation for the \(GI/M/m/0\) queue looks as follows: \[\pi_{m-1,1}=\sum_{k=0}^{m-1}r_{k,m-k}\pi_{m-k,k}, \tag{4.2}\] where \[r_{k,m-k}=\binom{m}{k}\int_{0}^{\infty}[1-\mathrm{e}^{-\mu x}]^{k}\mathrm{e}^{-(m- k)\mu x}\mathrm{d}A(x).\] It is well-known that \[\pi_{m,0}=\sum_{i=0}^{m}\binom{m}{i}\prod_{j=1}^{i}\frac{1-r_{0,j}}{r_{0,j}}= \sum_{i=0}^{m}\binom{m}{i}C_{i}, \tag{4.3}\] and the loss probability is \[p_{m,0}=\frac{1}{\pi_{m,0}}=\Big{[}\sum_{i=0}^{m}\binom{m}{i}C_{i}\Big{]}^{-1} \tag{4.4}\] (see Cohen [11], Palm [15], Pollaczek [16], Takacs [22] as well as Bharucha-Reid [5]). A relatively simple proof of (4.3) and (4.4) can be found in Takacs [22]. It is based on a representation other than (4.2). For our further purposes, representation (4.2) is preferable. Representation (4.2) is a recurrence relation of the convolution type (1.5), and in the following it helps us to reduce the problem to the abovementioned combinatorial results of Takacs [24]. Once this is done, we then apply the Tauberian theorem of Postnikov [17]. Let us now consider the \(GI/M/m/n\) queueing system. In the case of this system with \(n\geq 1\) we add an additional subscript to the notation. Specifically, \(r_{k,m-k,0}=r_{k,m-k}\), and for \(j\leq n\) \[r_{0,m,j}=\int_{0}^{\infty}\mathrm{e}^{-m\mu x}\frac{(m\mu x)^{j}}{j!}\mathrm{ d}A(x),\] and \[r_{k,m-k,n}=\binom{m}{k}\int_{0}^{\infty}\mathrm{e}^{-(m-k)\mu x}\] \[\times\Big{\{}\int_{0}^{x}\frac{(m\mu u)^{n-1}}{(n-1)!}(\mathrm{e}^{-\mu u}- \mathrm{e}^{-\mu x})^{k}m\mu\mathrm{d}u\Big{\}}\mathrm{d}A(x).\] In addition, the value \(\pi_{m,0,n}^{(n)}\) denotes the expected number of arrivals into the stationary \(GI/M/m/n\) queue up to the first loss. (Replacing the index \(n\) with \(j\) has the same meaning for the \(GI/M/m/j\) queue.) Also, we will use the notation \(\pi_{m-k,k,0}^{(0)}\) instead of the earlier notation \(\pi_{m-k,k}\) for the \(GI/M/m/0\) queue. Then the recurrence relation associated with the \(n\)th series looks as follows: \[\pi_{m,0,j}^{(n)}=\sum_{l=0}^{j}r_{0,m,l}\pi_{m,0,j-l+1}^{(n)}+\sum_{k=1}^{m-1 }r_{k,m-k,j}\pi_{m-k,k,0}^{(n)},\ \ j=0,1,\ldots,n-1, \tag{4.5}\] where \[\pi_{m-i,i,0}^{(n)}=\sum_{k=0}^{m-i}r_{k,m-k-i+1,0}\pi_{m-k-i+1,k+i-1}^{(n)}\ \ (\pi_{0,m,0}^{(n)}=1),\ \ i=1,2,\ldots,m-1, \tag{4.6}\] and the second sum of (4.5) is equal to zero if \(m=1\).
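Before turning to the asymptotics, note that the probabilities \(r_{0,m,j}\) and \(r_{k,m-k,n}\) just introduced are easy to evaluate numerically. The sketch below is purely illustrative: it assumes interarrival times concentrated at a single point \(T\) (so the outer integral with respect to \(A\) reduces to evaluation at \(x=T\)), computes the inner integral by quadrature, and checks that these quantities sum to less than one, the sum approaching one as \(n\) grows (this limiting relation is used at the beginning of Section 5).

```python
import math
from scipy.integrate import quad

def r_0mj(m, mu, T, j):
    """r_{0,m,j} when A is a point mass at T."""
    return math.exp(-m * mu * T) * (m * mu * T) ** j / math.factorial(j)

def r_kmkn(m, k, mu, T, n):
    """r_{k,m-k,n} when A is a point mass at T; the inner integral over u
    is computed numerically (valid for n >= 1)."""
    inner = lambda u: ((m * mu * u) ** (n - 1) / math.factorial(n - 1)
                       * (math.exp(-mu * u) - math.exp(-mu * T)) ** k * m * mu)
    val, _ = quad(inner, 0.0, T)
    return math.comb(m, k) * math.exp(-(m - k) * mu * T) * val

m, mu, T, n = 2, 1.0, 1.25, 5            # illustrative values only
total = sum(r_0mj(m, mu, T, l) for l in range(n + 1)) + \
        sum(r_kmkn(m, k, mu, T, n) for k in range(1, m))
print(total)                              # < 1, and it tends to 1 as n grows
```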
Moreover, if \\(m=1\\), then we do not longer need the upper index \\((n)\\), showing the series number, and equation (4.6). It is not difficult to see, that for the given series \\(n\\), the recurrence relations (4.5) and (4.6) form a recurrence relation of the convolution type given by (1.5). In the next section we prove relations (3.1)-(3.3) of Theorem 3.1. ## 5. **The proof of (3.1)-(3.3)** First of all note, that \\[\\lim_{n\\to\\infty}\\Big{[}\\sum_{l=0}^{n}r_{0,m,l}+\\sum_{k=1}^{m-1}r_{k,m-k,n} \\Big{]}=\\sum_{l=0}^{\\infty}r_{0,m,l}\\] \\[=\\sum_{l=0}^{\\infty}\\int_{0}^{\\infty}\\mathrm{e}^{-m\\mu x}\\frac{(m\\mu x)^{l}}{ l!}\\mathrm{d}A(x)\\] \\[=\\int_{0}^{\\infty}\\sum_{l=0}^{\\infty}\\mathrm{e}^{-m\\mu x}\\frac{(m\\mu x)^{l}}{ l!}\\mathrm{d}A(x)=1.\\]Therefore, one can apply the theorem of Takacs [24] (see Appendix). Let \\(\\gamma_{1}\\) denote \\[\\gamma_{1}=\\sum_{l=1}^{\\infty}lr_{0,m,l}. \\tag{5.1}\\] Then also \\[\\lim_{n\\to\\infty}\\Big{[}\\sum_{l=1}^{n}lr_{0,m,l}+\\sum_{k=1}^{m-1}(n+k)r_{k,m-k,n }\\Big{]}=\\gamma_{1}. \\tag{5.2}\\] This is because \\[(n+k)\\binom{m}{k}\\int_{0}^{\\infty}\\mathrm{e}^{-(m-k)\\mu x}\\Big{\\{}\\int_{0}^{x }\\frac{(m\\mu u)^{n-1}}{(n-1)!}(\\mathrm{e}^{-\\mu u}-\\mathrm{e}^{-\\mu x})^{k}m \\mu\\mathrm{d}u\\Big{\\}}\\mathrm{d}A(x)\\] \\[\\leq(n+k)\\binom{m}{k}\\int_{0}^{\\infty}\\mathrm{e}^{-\\mu(m-k)x}\\Big{\\{}\\int_{0}^ {x}\\frac{(m\\mu u)^{n-1}}{(n-1)!}m\\mu\\mathrm{d}u\\Big{\\}}\\mathrm{d}A(x)\\] \\[=(n+k)\\binom{m}{k}\\int_{0}^{\\infty}\\mathrm{e}^{-\\mu(m-k)x}\\frac{(m\\mu x)^{n} }{n!}\\mathrm{d}A(x)\\to 0,\\] as \\(n\\to\\infty\\). According to (5.1) and (5.2) we have \\(\\gamma_{1}=m\\mu/\\lambda\\), and therefore, \\(\\gamma_{1}=1/\\varrho\\). Then according to theorem of Takacs [24], given in the Appendix, in the case of \\(\\varrho>1\\) we obtain \\[\\lim_{n\\to\\infty}\\pi_{m,0,n}^{(n)}=\\frac{1}{1-\\gamma_{1}}=\\frac{\\varrho}{ \\varrho-1}.\\] Then, in this case of the limiting loss probability as \\(n\\to\\infty\\) we obtain \\[\\lim_{n\\to\\infty}p_{m,n}=\\lim_{n\\to\\infty}\\frac{1}{\\pi_{m,n,0}^{(n)}}=\\frac{ \\varrho-1}{\\varrho}.\\] Similarly to (5.1), Let \\(\\gamma_{2}\\) denote \\[\\gamma_{2}=\\sum_{l=2}^{\\infty}l(l-1)r_{0,m,l}.\\] Then also \\[\\lim_{n\\to\\infty}\\Big{[}\\sum_{l=2}^{n}l(l-1)r_{0,m,l}+\\sum_{k=1}^{m-1}(n+k)(n+ k-1)r_{k,m-k,n}\\Big{]}=\\gamma_{2},\\]and \\(\\varrho_{2}<\\infty\\) as \\(\\gamma_{2}<\\infty\\). Indeed, as \\(n\\to\\infty\\), \\[(n+k)(n+k-1)\\] \\[\\times \\binom{m}{k}\\int_{0}^{\\infty}\\mathrm{e}^{-(m-k)\\mu x}\\Big{\\{}\\int_{ 0}^{x}\\frac{(m\\mu u)^{n-1}}{(n-1)!}(\\mathrm{e}^{-\\mu u}-\\mathrm{e}^{-\\mu x})^{ k}m\\mu\\mathrm{d}u\\Big{\\}}\\mathrm{d}A(x)\\] \\[\\leq(n+k)(n+k-1)\\binom{m}{k}\\int_{0}^{\\infty}\\mathrm{e}^{-\\mu(m- k)x}\\frac{(m\\mu x)^{n}}{n!}\\mathrm{d}A(x)\\to 0.\\] Therefore, in the case where \\(\\varrho=1\\) and \\(\\varrho_{2}=\\int_{0}^{\\infty}(\\mu mx)^{2}\\mathrm{d}A(x)<\\infty\\) we obtain \\[\\lim_{n\\to\\infty}np_{m,n}=\\frac{\\varrho_{2}}{2}. \\tag{5.3}\\] Limiting relation (5.3) can be improved with the aid of the Tauberian theorem of Postnikov [17] (see Appendix). In the case where \\(\\varrho=1\\) and \\(\\varrho_{3}=\\int_{0}^{\\infty}(\\mu mx)^{3}\\mathrm{d}A(x)<\\infty\\), for large \\(n\\) we obtain \\[p_{m,n}=\\frac{\\varrho_{2}}{2n}+O\\Big{(}\\frac{\\log n}{n^{2}}\\Big{)}.\\] Indeed, let \\(\\gamma_{3}\\) denote \\[\\gamma_{3}=\\sum_{l=3}^{\\infty}l(l-1)(l-2)r_{0,m,l}.\\] Thus, (3.2) and (3.3) follow. ## 6. 
**The proof of (3.4)** Whereas (3.1)-(3.3) are proved by immediate reduction to the known results associated with (1.5), the proof of (3.4) requires special analysis. In order to simplify the analysis let us concentrate our attention on the constant \(K_{m}\) in relation (3.4). Multiplying this constant by \((1-\sigma_{m})\) we obtain \[\widetilde{K}_{m}=(1-\sigma_{m})K_{m}\] \[=\Big{[}\frac{1}{1-\sigma_{m}}+\sum_{j=1}^{m}\frac{\binom{m}{j}C_ {j}}{(1-\varphi_{j})}\ \frac{m(1-\varphi_{j})-j}{m(1-\sigma_{m})-j}\Big{]}^{-1}. \tag{6.1}\] The constant \(\widetilde{K}_{m}\), given by (6.1), is well-known from the theory of the \(GI/M/m\) queueing system. Specifically, let \(\widetilde{p}_{j}\), \(j=0,1,\ldots\), be the stationary probabilities of the number of customers in this system immediately before arrival of a customer. It is known (e.g. Bharucha-Reid [5], Borovkov [6]) that for all \(j\geq m\) \[\widetilde{p}_{j}=\widetilde{K}_{m}\sigma_{m}^{j-m}. \tag{6.2}\] Now, in order to prove (3.4) let us write a new recurrence relation, alternative to (4.5). For this purpose, join the first \(m\) states of the \(GI/M/m/n\) process to a single state and label it 0. Other states will be numbered 1, 2, \(\ldots\), \(n\). In the new terms we have the following recurrence relation \[\Pi_{j}^{(n)}=\sum_{i=0}^{j}r_{0,m,i}\Pi_{j-i+1}^{(n)}, \tag{6.3}\] with some initial value \(\Pi_{0}^{(n)}\) for the given series \(n\). For example, for the series \(n=0\) we have \[\Pi_{0}^{(0)}=\sum_{i=0}^{m}\binom{m}{i}C_{i}\] (see relation (4.3)). A formal application of the theorem of Takacs [24] (see Appendix) to the recurrence relation (6.3) yields, for large \(n\): \[\lim_{n\to\infty}\frac{\Pi_{n}^{(n)}\sigma_{m}^{n}}{\Pi_{0}^{(n)}}=\frac{1}{1+ \mu m\alpha^{\prime}(\mu m-\mu m\sigma_{m})}. \tag{6.4}\] Let us now find \(\lim_{n\to\infty}\Pi_{0}^{(n)}\). Notice that for \(j\geq m\) the probability \(\widetilde{p}_{j}\) can be rewritten as follows. From (6.2) we have: \[\widetilde{p}_{j}=\widetilde{K}_{m}\sigma_{m}^{j-m}=K_{m}\sigma_{m}^{j-m}(1- \sigma_{m})=\frac{K_{m}P_{j}}{\sigma_{m}^{m}}, \tag{6.5}\] where \(P_{j}\) is the conditional probability for the \(GI/M/m\) queue that an arriving customer finds \(j\) customers in the queue provided that upon arrival at least \(m-1\) servers are occupied. The conditional probability \(P_{j-m}\) coincides with the stationary queue-length distribution immediately before arrival of a customer in the \(GI/M/1\) queue with the expected service time \((\mu m)^{-1}\). \(K_{m}\) is the stationary probability for the \(GI/M/m\) queue that upon arrival at least \(m-1\) servers are occupied. From the theory of Markov chains associated with the \(GI/M/1/n\) queue it is known (e.g. Choi and Kim [8]) that the \(j\)-state probability immediately before arrival of a customer is \((\pi_{n-j}-\pi_{n-j-1})/\pi_{n}\), where \(\pi_{n}\) is given by (1.3), and in turn the loss probability is determined by (1.1). Then for the same \(j\)-state probability of the \(GI/M/1\) queue with the expected service time \((\mu m)^{-1}\) we have \[P_{j}=\lim_{n\to\infty}\frac{\pi_{n-j}-\pi_{n-j-1}}{\pi_{n}}=(1-\sigma_{m})\sigma_{m}^{j}, \tag{6.6}\] and \[\lim_{n\to\infty}\Pi_{0}^{(n)}=\frac{1}{K_{m}}.
\\tag{6.7}\\] In view of (6.7) and (6.4) and according to Takacs theorem [24] we obtain: \\[\\lim_{n\\to\\infty}\\left[\\Pi_{n}^{(n)}-\\frac{1}{K_{m}\\sigma_{m}^{n}[1+\\mu m\\alpha ^{\\prime}(\\mu m-\\mu m\\sigma_{m})]}\\right]=\\frac{\\varrho K_{m}}{1-\\varrho}. \\tag{6.8}\\] Now, taking into consideration that the loss probability \\[p_{m,n}=\\frac{1}{\\Pi_{n}^{(n)}},\\] we obtain statement (3.4) of the theorem. ## 7. **The proof of Theorem 3.2** It was shown in Subhankulov [21], p. 326, that if \\(\\varrho^{-1}=1+\\varepsilon\\), \\(\\varepsilon>0\\) and \\(\\varepsilon\\to 0\\), \\(\\varrho_{3}(n)\\) is a bounded sequence, and there exists \\(\\widetilde{\\varrho}_{2}=\\lim_{n\\to\\infty}\\varrho_{2}(n)\\), then \\[\\sigma_{m}=1-\\frac{2\\varepsilon}{\\widetilde{\\varrho}_{2}}+O(\\varepsilon^{2}), \\tag{7.1}\\] where \\(\\sigma_{m}=\\sigma_{m}(n)\\) is the minimum in absolute value root of the functional equation \\(z=\\alpha(\\mu m-\\mu mz)\\), \\(|z|\\leq 1\\), and where the parameter \\(\\mu\\) and the function \\(\\alpha(z)\\), both or one of them, are assumed to depend on \\(n\\). (Asymptotic representation (7.1) can be immediately obtained by expanding the equation \\(z-\\alpha(\\mu m-\\mu mz)=0\\) for small \\(z\\).) Then, after some algebra we have \\[[1+\\mu m\\alpha^{\\prime}(\\mu m-\\mu m\\sigma_{m})]=\\varepsilon+o(\\varepsilon), \\tag{7.2}\\] and \\[\\sigma_{m}^{n}=\\mathrm{e}^{-2C/\\varrho_{2}}[1+o(1)]. \\tag{7.3}\\]In view of (7.1), the term \\[(1-\\sigma_{m})\\sum_{j=1}^{m}\\frac{{m\\choose j}C_{j}}{(1-\\varphi_{j})}\\ \\frac{m(1- \\varphi_{j})-j}{m(1-\\sigma_{m})-j}\\] has the order \\(O(\\varepsilon)\\). Therefore, for term (3.5) we have \\[K_{m}=1+O(\\varepsilon), \\tag{7.4}\\] and in the case where \\(C>0\\), in view of (7.2)-(7.4) and (6.8), we obtain: \\[p_{m,n}=\\frac{\\varepsilon\\mathrm{e}^{-2C/\\widetilde{\\varrho}_{2}}}{1-\\mathrm{ e}^{-2C/\\widetilde{\\varrho}_{2}}}[1+o(1)]. \\tag{7.5}\\] (3.6) is proved. The proof of (3.7) follows by expanding the main term of the asymptotic expression of (7.5) for small \\(C\\). Theorem 3.2 is completely proved. ## 8. **Numerical example** In this section a numerical example supporting the theory is provided. Specifically, we simulate \\(D/M/1/n\\) and \\(D/M/2/n\\) queues and check statements (3.7) and (3.6) of Theorem 3.2 numerically. The results of simulation are reflected in the table below. The value \\(\\varrho\\) is taken \\(0.999\\), so that \\(\\epsilon=0.001\\) The value \\(n\\) varies from \\(10\\) to \\(50\\), and parameter \\(C=\\epsilon n\\) varies from \\(0.01\\) to \\(0.05\\) The theoretical values of the loss probability for these \\(n\\) are calculated by (3.7). There are also the loss probabilities for \\(n=100\\). The theoretical value for the loss probability related to this case is calculated by (3.6). The table is structured as follows: Column 1 contains the values of parameter \\(n\\), Column 2 contains the theoretical values for the loss probability given by (3.7) and (3.6), Column 3 and 4 contain the loss probabilities obtained by simulation for the \\(D/M/1/n\\) and \\(D/M/2/n\\) queueing systems respectively. As we can see from this table the difference between the loss probabilities of the single-server and two-server queueing systems obtained by simulation is not large, and difference between these loss probabilities decreases as \\(n\\) increases. As \\(n\\) increases the both simulated loss probabilities approach the theoretical loss probability. 
## Acknowledgements The author thanks the referees for useful comments. Especial thank is to the referee calling attention of the author to the results of Whitt [25], [26] having immediate relation to the main result of this paper. \\begin{table} \\begin{tabular}{c||c|c|c} \\hline & Loss probability & Loss probability & Loss probability \\\\ \\(n\\) & theoretical & simulated for & simulated for \\\\ & & \\(D/M/1/n\\) queue & \\(D/M/2/n\\) queue \\\\ \\hline 10 & 0.0501 & 0.0426 & 0.0390 \\\\ 15 & 0.0334 & 0.0292 & 0.0275 \\\\ 20 & 0.0251 & 0.0221 & 0.0211 \\\\ 25 & 0.0200 & 0.0180 & 0.0173 \\\\ 30 & 0.0167 & 0.0151 & 0.0146 \\\\ 35 & 0.0143 & 0.0128 & 0.0124 \\\\ 40 & 0.0125 & 0.0111 & 0.0108 \\\\ 45 & 0.0111 & 0.0098 & 0.0096 \\\\ 50 & 0.0100 & 0.0087 & 0.0085 \\\\ 100 & 0.0045 & 0.0040 & 0.0039 \\\\ \\hline \\end{tabular} \\end{table} Table 1. The comparison table of the loss probabilities for \\(D/M/1/n\\) and \\(D/M/2/n\\) queuesThe initiation of the paper was due to a question (conjecture) of Professor Henk C. Tijms related to asymptotic behavior of the loss probability in the overloaded \\(GI/M/m/n\\) queue. The question had relation to the talk of the author at the First Madrid Conference on Queueing Theory. **APPENDIX** In the appendix we recall the main results on asymptotic behavior of the sequence \\(Q_{n}\\), as \\(n\\to\\infty\\) (see relation (1.5)). Denote \\(f(z)=\\sum_{i=0}^{\\infty}f_{i}z^{i}\\), \\(|z|\\leq 1\\), \\(\\gamma_{i}=\\sum_{j=i}^{\\infty}\\Big{(}\\prod_{k=j-i+1}^{j}k\\Big{)}f_{j}\\). **Theorem A1.** (Takacs [24], p. 22, 23.) _If \\(\\gamma_{1}<1\\) then_ \\[\\lim_{n\\to\\infty}Q_{n}=\\frac{Q_{0}}{1-\\gamma_{1}}.\\] _If \\(\\gamma_{1}=1\\) and \\(\\gamma_{2}<\\infty\\), then_ \\[\\lim_{n\\to\\infty}\\frac{Q_{n}}{n}=\\frac{2Q_{0}}{\\gamma_{2}}.\\] _If \\(\\gamma_{1}>1\\) then_ \\[\\lim_{n\\to\\infty}\\left[Q_{n}-\\frac{Q_{0}}{\\delta^{n}(1-f^{\\prime}(\\delta))} \\right]=\\frac{Q_{0}}{1-\\gamma_{1}},\\] _where \\(\\delta\\) is the least in absolute value root of equation \\(z=f(z)\\)._ **Theorem A2.** (Postnikov [17], Section 25.) _If \\(\\gamma_{1}=1\\) and \\(\\gamma_{3}<\\infty\\), then as \\(n\\to\\infty\\)_ \\[Q_{n}=\\frac{2Q_{0}}{\\gamma_{2}}n+O(\\log n).\\] ## References * 1. Abramov, V.M. (1997). On a property of a refusals stream. _Journal of Applied Probability_ 34: 800-805. * 2. Abramov, V.M. (2002). Asymptotic analysis of the \\(GI/M/1/n\\) queueing system as \\(n\\) increases to infinity. _Annals of Operations Research_ 112: 35-41. * [3]Abramov, V.M. (2004). Asymptotic behavior of the number of lost messages. _SIAM Journal on Applied Mathematics_ 64: 746-761. * [4]Abramov, V.M. (2005). Optimal control of a large dam. arXiv: math/PR 0512118. * [5]Bharucha-Reid, A.T. (1960). _Elements of the Theory of Markov Processes and Their Application_. McGraw-Hill, New York. * [6]Borovkov, A.A. (1976). _Stochastic Processes in Queueing Theory_. Springer-Verlag, Berlin. * [7]Brockmeyer, E., Halstrom, H.L. and Jensen, A. (1948). _The Life and the Works of A.K.Erlang_. The Copenhagen Telephone Company, Copenhagen. * [8]Choi, B.D. and Kim, B. (2000). Sharp results on convergence rates for the distribution of the \\(GI/M/1/K\\) queues as \\(K\\) tends to infinity. _Journal of Applied Probability_ 37: 1010-1019. * [9]Choi, B.D. Kim, B., Kim, J. and Wee, I.-S. (2003). Exact convergence rate for the distributions of \\(GI/M/c/K\\) queue as \\(K\\) tends to infinity. _Queueing Systems_ 44: 125-136. * [10]Choi, B.D. Kim, B. and Wee, I.-S. (2000). 
Asymptotic behavior of loss probability in \\(GI/M/1/K\\) queue as \\(K\\) tends to infinity. _Queueing Systems_ 36: 437-442. * [11]Cohen, J.W. (1957). The full availability group of trunks with an arbitrary distribution of interarrival times and negative exponential holding time distribution. _Simon Stevin_ 31: 169-181. * [12]Kim, B. and Choi, B.D. (2003). Asymptotic analysis and simple approximation of the loss probability of the \\(GI^{X}/M/c/K\\) queue. _Performance Evaluation_ 54: 331-356. * [13]Kleinrock, L. (1975). _Queueing Systems. Volume 1: Theory_. John Wiley, New York. * [14]Miyazawa, M. (1990). Complementary generating functions for the \\(M^{X}/GI/1/k\\) and \\(GI/M^{Y}/1/k\\) queues and their application to the comparison for loss probabilities. _Journal of Applied Probability_ 27: 682-692. * 15Palm, C. (1943). Intensitatschwankungen im Fernsprechverkehr. _Ericsson Technics_ 44: 1-189. * 16Pollaczek, F. (1953). Generalisation de la theorie probabiliste des systemes telephoniques sans dispositif d'attente. _Comptes Rendus de l'Academie des Sciences_ (Paris), 236: 1469-1470. * 17Postnikov, A.G. (1979-1980). _Tauberian Theory and Its Application._ Trudy Matematicheskogo Instituta Steklova 2 (1979) 1-147 (In Russian). Engl. transl. in: Procedings of the Steklov Mathematical Institute 2 (1980) 1-137. * 18Ramalhoto, M.F. and Gomez-Corral, A. (1998). Some decomposition formulae for \\(M/M/r/r+d\\) queues with constant retrial rate. _Stochastic Models_ 14: 123-145. * 19Sevastyanov, B.A. (1957). An ergodic theorem for Markov processes and its application to telephone systems with refusals. _Theory of Probability and Its Applications_ 2: 104-112. * 20Simonot, F. (1998). A comparison of the stationary distributions of \\(GI/M/c/n\\) and \\(GI/M/c\\). _Journal of Applied Probability_ 35: 510-515. * 21Subhankulov, M.A. (1976). _Tauberian Theorems with Remainder._ Nauka, Moscow. (In Russian.) * 22Takacs, L. (1957). On a probability problem concerning telephone traffic. _Acta Mathematika Academia Scientiarum Hungaricae_ 8: 319-324. * 23Takacs, L. (1962). _Introduction to the Theory of Queues_. Oxford University Press, New York/London. * 24Takacs, L. (1967). _Combinatorial Methods in the Theory of Stochastic Processes._ John Wiley, New York. * 25Whitt, W. (2004). Heavy-traffic limits for loss proportions in single-server queues. _Queueing Systems_ 46: 507-536. * 26Whitt, W. (2005). Heavy-traffic limits for the \\(G/H_{2}^{*}/n/m\\) queue. _Mathematics of Operations Research_ 30: 1-27. ## Bibliography **Vyacheslav M. Abramov** graduated from Tadzhik State University (Dushanbe, Tadzhikistan) in 1977. During the period 1977-1992 he worked at the Research Institute of Economics under the Tadzhikistan State Planning Committee (GosPlan). In 1992 he repatriated to Israel and during 1994-2001 worked in software companies of Israel as a software engineer and algorithms developer. In 2002-2005 he was an assistant and lecturer in Judea and Samaria College, Tel Aviv University and Holon Institute of Technology. In 2004 he received a PhD degree from Tel Aviv University, and since 2005 has been working at School of Mathematical Sciences of Monash University (Australia). The scientific interests of him are mainly focused on the theory and application of queueing systems. He is an author of a monograph and various papers published in Journal of Applied Probability, Annals of Operations Research, Queueing Systems, SIAM Journal on Applied Mathematics and other journals.
The paper studies asymptotic behavior of the loss probability for the \\(GI/M/m/n\\) queueing system as \\(n\\) increases to infinity. The approach of the paper is based on applications of classic results of Takacs (1967) and the Tauberian theorem with remainder of Postnikov (1979-1980) associated with the recurrence relation of convolution type. The main result of the paper is associated with asymptotic behavior of the loss probability. Specifically it is shown that in some cases (precisely described in the paper) where the load of the system approaches \\(1\\) from the left and \\(n\\) increases to infinity, the loss probability of the \\(GI/M/m/n\\) queue becomes asymptotically independent of the parameter \\(m\\). Key words and phrases:Loss probabilities, \\(GI/M/m/n\\) queueing system, asymptotic analysis, Tauberian theorem with remainder 1991 Mathematics Subject Classification: 60K25; 40E05 To appear in: _Quality Technology and Quantitative Management_
Inhomogeneous Equation of State of the Universe: Phantom Era, Future Singularity and Crossing the Phantom Barrier Shin'ichi Nojiri Department of Applied Physics, National Defence Academy, Hashirimizu Yokosuka 239-8686, Japan Sergei D. Odintsov Institucio Catalana de Recerca i Estudis Avancats (ICREA) and Institut d'Estudis Espacials de Catalunya (IEEC), Edifici Nexus, Gran Capita 2-4, 08034 Barcelona, Spain November 3, 2021 ## I Introduction The increasing number of evidences from the observational data indicates that current universe lives in a narrow strip near \\(w=-1\\) (where w is the equation of state (EOS) parameter), quite probably being below \\(-1\\) in so-called phantom region. It is also assumed that modern universe is filled with some mysterious, negative pressure fluid (dark energy) which represents about 70 percents of total energy in the universe. (The simplest, phenomenological approach is to consider that this fluid satisfies to EOS with constant \\(w\\)). The origin of this dark energy is really dark: the proposed explanations vary from the modifications of gravity to the introduction of new fields (scalars, spinors, etc) with really strange properties. Moreover, forgetting for the moment about the origin of dark energy, even more-less satisfactory mechanism of evolving dark energy is missing, so far. At best, each of existing theoretical models for dark energy explains some specific element(s) of late-time evolution, lacking the complete understanding. Definitely, the situation may be improved with the new generation of observational data when they will present the realistic evolving EOS of dark energy during sufficiently large period. The most strange (if realistic) era in the universe evolution is phantom era. There are many attempts to describe the phantom cosmology (see, for instance, [1; 2] and references therein), especially near to future, finite-time singularity (Big Rip) which is the essential element of classical phantom cosmology. (Note that quantum effects may basically provide the escape from future, finite type singularity, for recent discussion, see[3; 4]). Unfortunately, the easiest way to describe the phantom cosmology in the Lagrangian formulation leads to the necessity of the introduction of not very appreciated scalar with negative kinetic energy[5]. Another, easy way is to use some phenomenological EOS which may produce dark epoch of the universe (whatever it is). It is remarkable that such description shows the possibility of other types of future, finite type singularity. For instance, even when EOS is suddenly phantomic (near to rip time where negative pressure diverges), the sudden singularity occurs [6]. There may exist future singularities where energy/pressure is finite at rip time, for classification of future singularities, see [4]. They may occur even in modified gravity at late times, see[7] for explicit examples. Nevertheless, it is remarkable that effective phantom phase may be produced also in string-inspired gravities[8]. The present paper is devoted to study the phantom cosmology and related regimes (for instance, crossing of phantom divide) when phenomenological equation of state of the universe is inhomogeneous. In other words, it contains terms dependent explicitly from Hubble parameter (or, even from its derivatives). Definitely, one needs quite strong motivation for such modification of dark energy EOS. The first one comes from the consideration of time-dependent bulk viscosity[9; 10]. 
(For earlier discussion of cosmology with time-dependent bulk viscosity, see see also [11].) Actually, it was constructed the specific model of dark energy with possibility of crossing of phantom divide due to time-dependent bulk viscosity [9]. The construction of EOS from symmetry considerations [12] indicates to the necessity of some inhomogeneous correction. Finally, big number of gravities: from low-energy string effective actions to gravity with higher derivative terms or with inverse terms on curvatures modifies the FRW equations in requested form. The paper is organized as follows. In the next section we consider spatially-flat FRW universe filled by the ideal fluid with specific, dark energy EOS [3]. Short review of four types of future singularity for different choices of EOS parameters is given, following to ref.[4]. The inhomogeneous term of specific form is introduced to EOS. The role of such term in the transition of different types of singularity to another ones is investigated. The cosmological regimes crossing phantom barrier due to such terms are explicitly constructed. Finally, the dependence of the inhomogeneous term from Hubble parameter derivatives is briefly discussed as well as emerging oscillating universe. Section three is devoted to the study of similar questions when FRW universe is filled by the interacting mixture of two fluids. The modification of two fluids EOS by inhomogeneous term is again considered. The explicit example of late-time cosmology (which may be oscillating one) quite naturally crossing the phantom divide in such a universe is presented. It is interesting that inhomogeneous term may effectively compensate the interaction between two fluids. In the section four we discuss the FRW cosmology admitting the crossing of barrier \\(w=-1\\) due to specific form of the implicit dark energy EOS proposed in ref.[13]. Again, the generalized, Hubble parameter dependent EOS is considered. Some thermodynamical dark energy model passing the barrier \\(w=-1\\) is constructed, based on above EOS. It is demonstrated that in such a model the universe entropy may be positive even during the phantom era. Some summary and outlook are given in the discussion section. The Appendix deals with couple simple versions of modified gravity which may predict the requested generalization of EOS. ## II FRW cosmology with inhomogeneous dark energy equation of state In the present section we make brief review of FRW cosmology with explicit dark energy equation of state (power law). The modification of EOS by Hubble dependent term (constrained by energy conservation law) is done and its role to FRW cosmology evolution is investigated. The starting FRW universe metric is: \\[ds^{2}=-dt^{2}+a(t)^{2}\\sum_{i=1}^{3}\\left(dx^{i}\\right)^{2}. \\tag{1}\\] In the FRW universe, the energy conservation law can be expressed as \\[0=\\dot{\\rho}+3H\\left(p+\\rho\\right). \\tag{2}\\] Here \\(\\rho\\) is energy density, \\(p\\) is pressure. The Hubble rate \\(H\\) is defined by \\(H\\equiv\\dot{a}/a\\). When \\(\\rho\\) and \\(p\\) satisfy the following simple EOS: \\[p=w\\rho\\, \\tag{3}\\] and if \\(w\\) is a constant, Eq.(2) can be easily integrated: \\[\\rho=\\rho_{0}a^{-3(1+w)}. 
\\tag{4}\\] Using the first FRW equation \\[\\frac{3}{\\kappa^{2}}H^{2}=\\rho\\, \\tag{5}\\] the well-known solution follows \\[a=a_{0}\\left(t-t_{1}\\right)^{\\frac{2}{3(w+1)}}\\ \\ \\ \\mbox{or}\\ \\ \\ a_{0}\\left(t_{2}-t\\right)^{\\frac{2}{3(w+1)}}\\, \\tag{6}\\] when \\(w\ eq-1\\), and \\[a=a_{0}\\mbox{e}^{\\kappa t\\sqrt{\\frac{\\rho_{0}}{3}}} \\tag{7}\\] when \\(w=-1\\). In (6), \\(t_{1}\\) and \\(t_{2}\\) are constants of the integration. Eq.(7) expresses the deSitter universe. In (6), since the exponent \\(2/3(w+1)\\) is not integer in general, we find \\(t>t_{1}\\) or \\(t<t_{2}\\) so that \\(a\\) should be real number. If the exponent \\(2/3(w+1)\\) is positive, the first solution in (6) expresses the expanding universe but the second one expresses the shrinking universe. If the exponent \\(2/3(w+1)\\) is negative, the first solution in (6) expresses the shrinking universe but the second one expresses the expanding universe. In the following, we only consider the case that the universe is expanding. Then for the second solution, however, there appears a singularity in a finite time at \\(t=t_{2}\\), which is called the Big Rip singularity ( for discussion of phantom cosmology near Big Rip and related questions, see[1; 2] and references therein) when \\[w<-1. \\tag{8}\\] In general, the singularities may behave in different ways. One may classify the future singularities as following[4]: * Type I (\"Big Rip\") : For \\(t\\to t_{s}\\), \\(a\\rightarrow\\infty\\), \\(\\rho\\rightarrow\\infty\\) and \\(|p|\\rightarrow\\infty\\) * Type II (\"sudden\") : For \\(t\\to t_{s}\\), \\(a\\to a_{s}\\), \\(\\rho\\rightarrow\\rho_{s}\\) or \\(0\\) and \\(|p|\\rightarrow\\infty\\) * Type III : For \\(t\\to t_{s}\\), \\(a\\to a_{s}\\), \\(\\rho\\rightarrow\\infty\\) and \\(|p|\\rightarrow\\infty\\) * Type IV : For \\(t\\to t_{s}\\), \\(a\\to a_{s}\\), \\(\\rho\\to 0\\), \\(|p|\\to 0\\) and higher derivatives of \\(H\\) diverge. This also includes the case when \\(\\rho\\) (\\(p\\)) or both of them tend to some finite values while higher derivatives of \\(H\\) diverge. Here \\(t_{s}\\), \\(a_{s}\\) and \\(\\rho_{s}\\) are constants with \\(a_{s}\ eq 0\\). The type I may correspond to the Big Rip singularity [1], which emerges when \\(w<-1\\) in (3). The type II corresponds to the sudden future singularity [6] at which \\(a\\) and \\(\\rho\\) are finite but \\(p\\) diverges. The type III appears for the model with \\(p=-\\rho-A\\rho^{\\alpha}\\)[14], which is different from the sudden future singularity in the sense that \\(\\rho\\) diverges. This type of singularity has been discovered in the model of Ref. [3] where the corresponding Lagrangian model of a scalar field with potential has been constructed. One may start from the dark energy EOS as \\[p=-\\rho-f(\\rho)\\, \\tag{9}\\] where \\(f(\\rho)\\) can be an arbitrary function in general. The function \\(f(\\rho)\\propto\\rho^{\\alpha}\\) with a constant \\(\\alpha\\) was proposed in Ref. [3] and was investigated in detail in Ref.[14]. Using (2) for such choice, the scale factor is given by \\[a=a_{0}\\exp\\left(\\frac{1}{3}\\int\\frac{d\\rho}{f(\\rho)}\\right). \\tag{10}\\] Using (5) the cosmological time may be found \\[t=\\int\\frac{d\\rho}{\\kappa\\sqrt{3\\rho}f(\\rho)}\\, \\tag{11}\\] In case \\[f(\\rho)=A\\rho^{\\alpha}\\, \\tag{12}\\] by using Eq.(10), it follows \\[a=a_{0}\\exp\\left[\\frac{\\rho^{1-\\alpha}}{3(1-\\alpha)A}\\right]. \\tag{13}\\] When \\(\\alpha>1\\), the scale factor remains finite even if \\(\\rho\\) goes to infinity. 
When \\(\\alpha<1\\), \\(a\\rightarrow\\infty\\) (\\(a\\to 0\\)) as \\(\\rho\\rightarrow\\infty\\) for \\(A>0\\) (\\(A<0\\)). Since the pressure is now given by \\[p=-\\rho-A\\rho^{\\alpha}\\, \\tag{14}\\] \\(p\\) always diverges when \\(\\rho\\) becomes infinite. If \\(\\alpha>1\\), the EOS parameter \\(w=p/\\rho\\) also goes to infinity, that is, \\(w\\rightarrow+\\infty\\) (\\(-\\infty\\)) for \\(A<0\\) (\\(A>0\\)). When \\(\\alpha<1\\), we have \\(w\\rightarrow-1+0\\) (\\(-1-0\\)) for \\(A<0\\) (\\(A>0\\)) as \\(\\rho\\rightarrow\\infty\\). By using Eq.(11) for (12), one finds[4] \\[t=t_{0}+\\frac{2}{\\sqrt{3}\\kappa A}\\frac{\\rho^{-\\alpha+1/2}}{1-2\\alpha}\\,\\ \\ \\ {\\rm for}\\ \\ \\ \\alpha\ eq\\frac{1}{2}\\, \\tag{15}\\] and \\[t=t_{0}+\\frac{\\ln\\left(\\frac{\\rho}{\\rho_{0}}\\right)}{\\sqrt{3}\\kappa A}\\,\\ \\ \\ {\\rm for}\\ \\ \\ \\alpha=\\frac{1}{2}\\,. \\tag{16}\\] Therefore if \\(\\alpha\\leq 1/2\\), \\(\\rho\\) diverges in an infinite future or past. On the other hand, if \\(\\alpha>1/2\\), the divergence of \\(\\rho\\) corresponds to a finite future or past. In case of finite future, the singularity could be regarded as a Big Rip or type I singularity. For the choice (12), the following cases were discussed [4]: * In case \\(\\alpha=1/2\\) or \\(\\alpha=0\\), there does not appear any singularity. * In case \\(\\alpha>1\\), Eq.(15) tells that when \\(t\\to t_{0}\\), the energy density behaves as \\(\\rho\\rightarrow\\infty\\) and therefore \\(|p|\\rightarrow\\infty\\) due to (14). Eq.(13) shows that the scale factor \\(a\\) is finite even if \\(\\rho\\rightarrow\\infty\\). Therefore \\(\\alpha>1\\) case corresponds to type III singularity. * \\(\\alpha=1\\) case corresponds to the case (3) if we replace \\(-1-A\\) with \\(w\\). Therefore if \\(A>0\\), there occurs the Big Rip or type I singularity but if \\(A\\leq 0\\), there does not appear future singularity. * In case \\(1/2<\\alpha<1\\), when \\(t\\to t_{0}\\), all of \\(\\rho\\), \\(|p|\\), and \\(a\\) diverge if \\(A>0\\) then this corresponds to type I singularity. * In case \\(0<\\alpha<1/2\\), when \\(t\\to t_{0}\\), we find \\(\\rho\\), \\(|p|\\to 0\\) and \\(a\\to a_{0}\\) but by combining (13) and (15), we find \\[\\ln a\\sim|t-t_{0}|^{\\frac{\\alpha-1}{\\alpha-1/2}}\\.\\] (17) Since the exponent \\((\\alpha-1)/(\\alpha-1/2)\\) is not always an integer, even if \\(a\\) is finite, the higher derivatives of \\(H\\) diverge in general. Therefore this case corresponds to type IV singularity. * In case \\(\\alpha<0\\), when \\(t\\to t_{0}\\), we find \\(\\rho\\to 0\\), \\(a\\to a_{0}\\) but \\(|p|\\rightarrow\\infty\\). Therefore this case corresponds to type II singularity. Hence, the brief review of FRW cosmology with specific homogeneous EOS as well as its late-time behaviour (singularities) is given (see [4] for more detail). At the next step, we will consider the inhomogeneous EOS for dark energy, so that the dependence from Hubble parameter is included in EOS. The motivation for such EOS comes from including of time-dependent bulk viscosity in ideal fluid EOS [9] or from the modification of gravity (see Appendix). Hence, we suggest the following EOS \\[p=-\\rho+f(\\rho)+G(H). \\tag{18}\\] where \\(G(H)\\) is some function. Then the energy conservation law (2) has the following form: \\[0=\\dot{\\rho}+3H\\left(f(\\rho)+G(H)\\right). 
\\tag{19}\\] By using the first FRW equation (5) and assuming the expanding universe (\\(H\\geq 0\\)), one finds \\[\\dot{\\rho}=F(\\rho)\\equiv-3\\kappa\\sqrt{\\frac{\\rho}{3}}\\left(f(\\rho)+G\\left( \\kappa\\sqrt{\\rho/3}\\right)\\right). \\tag{20}\\] or \\[G(H)=-f\\left(3H^{2}/\\kappa^{2}\\right)+\\frac{2}{\\kappa^{2}}\\dot{H}. \\tag{21}\\] Hence, one can express \\(G(H)\\) in terms of \\(f\\) as above. As a first example, let assume that EOS(3) could be modified as \\[p=w_{0}\\rho+w_{1}H^{2}. \\tag{22}\\] Using (5), it follows \\[p=\\left(w_{0}+\\frac{\\kappa^{2}w_{1}}{3}\\right)\\rho. \\tag{23}\\] Therefore \\(w\\) is effectively shifted as \\[w\\to w_{\\rm eff}\\equiv w_{0}+\\frac{\\kappa^{2}w_{1}}{3}. \\tag{24}\\]Then even if \\(w_{0}<-1\\), as long as \\(w_{\\rm eff}>-1\\), there does not occur the Big Rip singularity. From another side one can start with quintessence value of \\(w_{0}\\), the inhomogeneous EOS (23) with sufficiently negative \\(w_{1}\\) brings the cosmology to phantom era. As a second example, we assume \\(f(\\rho)\\) (12) is modified as \\[f(\\rho)=A\\rho^{\\alpha}\\to f(\\rho)+G(H)=-A\\rho^{\\alpha}-BH^{2\\beta}. \\tag{25}\\] By using the first FRW equation (5), we find \\(f(\\rho)\\) is modified as \\[f_{\\rm eff}(\\rho)=f(\\rho)+G(H)=-A\\rho^{\\alpha}-B^{\\prime}\\rho^{ \\beta}\\,\\] \\[B^{\\prime}\\equiv B\\left(\\frac{\\kappa^{2}}{3}\\right)^{\\beta}. \\tag{26}\\] If \\(\\beta>\\alpha\\), when \\(\\rho\\) is large, the second term in (26) becomes dominant: \\[f_{\\rm eff}(\\rho)\\to B^{\\prime}\\rho^{\\beta}. \\tag{27}\\] On the other hand, if \\(\\beta<\\alpha\\), the second term becomes dominant and we obtain (27) again when \\(\\rho\\to 0\\). In case of (12) without \\(G(H)\\), when \\(1/2<\\alpha<1\\), there is the type I singularity where \\(\\rho\\) goes to infinity in a finite time. When \\(G(H)\\) is given by (25), if \\(\\beta>\\alpha\\), the second term in (26) becomes dominant and therefore if \\(\\beta>1\\), instead of type I singularity there occurs type III singularity. In case of (12) with \\(\\alpha>1\\), the type III singularity appears before \\(G(H)\\) is included. Even if we include \\(G(H)\\) with \\(\\beta>\\alpha>1\\), we obtain the type III singularity again and the structure of the singularity is not changed qualitatively. For (22) without \\(G(H)\\), when \\(0<\\alpha<1/2\\) or \\(\\alpha<0\\), there appears the type IV or type II singularity where \\(\\rho\\) tends to zero. Since the second term becomes dominant if \\(\\beta<\\alpha\\), if \\(\\beta<0\\), the type IV singularity for \\(0<\\alpha<1/2\\) case becomes the type II singularity but the type II singularity for \\(\\alpha<0\\) is not qualitatively changed. In accordance with the previous cases, one finds * In case \\(\\alpha>1\\), for most values of \\(\\beta\\), there occurs type III singularity. In addition to the type III singularity, when \\(0<\\beta<1/2\\), there occurs type IV singularity and when \\(\\beta<0\\), there occurs type II singularity. * \\(\\alpha=1\\) case, if \\(\\beta>1\\), the singularity becomes type III. \\(\\beta=1\\) case corresponds to (22). If \\(\\beta<1\\) and \\(A>0\\), there occurs the Big Rip or type I singularity. In addition to the type I singularity, we have type IV singularity when \\(0<\\beta<1/2\\) and type II when \\(\\beta<1\\). * In case \\(1/2<\\alpha<1\\), one sees singularity of type III for \\(\\beta>1\\), type I for \\(1/2\\leq\\beta<1\\) (even for \\(\\beta=1/2\\)) or \\(\\beta=1\\) and \\(B^{\\prime}>0\\) (\\(B>0\\)) case. 
In addition to type I, type IV case occurs for \\(0<\\beta<1/2\\), and type II for \\(\\beta<0\\). * In case \\(\\alpha=1/2\\), we have singularity of type III for \\(\\beta>1\\), type I for \\(1/2<\\beta<1\\) or \\(\\beta=1\\) and \\(B^{\\prime}>0\\) (\\(B>0\\)), type IV for \\(0<\\beta<1/2\\), and type II for \\(\\beta<0\\). When \\(\\beta=1/2\\) or \\(\\beta=0\\), there does not appear any singularity. * In case \\(0<\\alpha<1/2\\), we find type IV for \\(0<\\beta<1/2\\), and type II for \\(\\beta<0\\). In addition to type IV singularity, there occurs singularity of type III for \\(\\beta>1\\), type I for \\(1/2\\leq\\beta<1\\) or \\(\\beta=1\\) and \\(B^{\\prime}>0\\) (\\(B>0\\)) case. * In case \\(\\alpha<0\\), there will always occur type II singularity. In addition to type II singularity, we have a singularity of type III for \\(\\beta>1\\), type I for \\(1/2\\leq\\beta<1\\) or \\(\\beta=1\\) and \\(B^{\\prime}>0\\) (\\(B>0\\)) case. Thus, we demonstrated how the modification of EOS by Hubble dependent, inhomogeneous term changes the structure of singularity in late-time dark energy universe. We now consider general case and assume \\(F(\\rho)\\) in (20) behaves as \\[F(\\rho)\\sim F_{0}\\rho^{\\alpha}\\, \\tag{28}\\] with constant \\(F_{0}\\) and \\(\\alpha\\) in a proper limit (e.g. for large \\(\\rho\\) or small \\(\\rho\\)). Then when \\(\\alpha\ eq 1\\), Eq.(20) can be integrated as \\[F_{0}\\left(t-t_{c}\\right)\\sim\\frac{\\rho^{1-\\alpha}}{1-\\alpha}\\, \\tag{29}\\] that is, \\[\\rho\\sim\\left(\\left(1-\\alpha\\right)F_{0}\\left(t-t_{c}\\right)\\right)^{\\frac{1}{1- \\alpha}}. \\tag{30}\\] Here \\(t_{c}\\) is a constant of the integration. When \\(\\alpha=1\\), the energy becomes \\[\\rho=\\rho_{0}{\\rm e}^{F_{0}t}\\, \\tag{31}\\] with a constant of integration \\(\\rho_{0}\\). By using the first FRW equation (5), the scale factor may be found \\[a=a_{0}{\\rm e}^{\\pm\\frac{2\\alpha}{\\left(3-2\\alpha\\right)\\sqrt{3F_{0}}}\\left( \\left(1-\\alpha\\right)F_{0}\\left(t-t_{c}\\right)\\right)^{\\frac{3-2\\alpha}{2(1- \\alpha)}}}\\, \\tag{32}\\] when \\(\\alpha\ eq 1\\) and \\[a=a_{0}{\\rm e}^{\\frac{2\\alpha}{\\rho_{0}}\\sqrt{\\frac{\\rho_{0}}{3}}{\\rm e}^{ \\frac{F_{0}t}{2}}}\\, \\tag{33}\\] when \\(\\alpha=1\\). In [4], there has been given an explicit example of the EOS where crossing of \\(w=-1\\) phantom divide occurs: \\[a(t)=a_{0}\\left(\\frac{t}{t_{s}-t}\\right)^{n}. \\tag{34}\\] Here \\(n\\) is a positive constant and \\(0<t<t_{s}\\). The scale factor diverges in a finite time (\\(t\\to t_{s}\\)) as in the Big Rip. Therefore \\(t_{s}\\) corresponds to the life time of the universe. When \\(t\\ll t_{s}\\), \\(a(t)\\) evolves as \\(t^{n}\\), which means that the effective EOS is given by \\(w=-1+2/(3n)>-1\\). On the other hand, when \\(t\\sim t_{s}\\), it appears \\(w=-1-2/(3n)<-1\\). The solution (34) has been obtained with \\[f(\\rho)=\\pm\\frac{2\\rho}{3n}\\left\\{1-\\frac{4n}{t_{s}}\\left(\\frac{3}{\\kappa^{2} \\rho}\\right)^{\\frac{1}{2}}\\right\\}^{\\frac{1}{2}}. \\tag{35}\\] Therefore the EOS needs to be double-valued in order for the transition to occur between the region \\(w<-1\\) and the region \\(w>-1\\). Then in general, there could not be one-to-one correspondence between \\(p\\) and \\(\\rho\\) in the above EOS. In such a case, instead of (18), we may suggest the implicit, inhomogeneous equation of the state \\[F(p,\\rho,H)=0. 
\\tag{36}\\] The following example may be of interest: \\[\\left(p+\\rho\\right)^{2}-C_{0}\\rho^{2}\\left(1-\\frac{H_{0}}{H}\\right)=0. \\tag{37}\\] Here \\(C_{0}\\) and \\(H_{0}\\) are positive constants. Combining (37) with the energy conservation law (19) and the first FRW equation (5), one can delete \\(p\\) and \\(\\rho\\) as \\[\\dot{H}^{2}=\\frac{9}{4}C_{0}H^{4}\\left(1-\\frac{H_{0}}{H}\\right)\\, \\tag{38}\\] which can be integrated as \\[H=\\frac{16}{9C_{0}^{2}H_{0}\\left(t-t_{-}\\right)\\left(t_{+}-t\\right)}. \\tag{39}\\] Here \\[t_{\\pm}=t_{0}\\pm\\frac{4}{3C_{0}H_{0}}\\, \\tag{40}\\] and \\(t_{0}\\) is a constant of the integration. Hence \\[p = -\\rho\\left\\{1+\\frac{3C_{0}^{2}}{4H_{0}}\\left(t-t_{0}\\right)\\right\\}\\,\\] \\[\\rho = \\frac{2^{8}}{3^{3}C_{0}^{4}H_{0}^{2}\\kappa^{2}\\left(t-t_{-} \\right)^{2}\\left(t_{+}-t\\right)^{2}}. \\tag{41}\\] In (39), since \\(t_{-}<t_{0}<t_{+}\\), as long as \\(t_{-}<t<t_{+}\\), the Hubble rate \\(H\\) is positive. The Hubble rate \\(H\\) has a minimum \\(H=H_{0}\\) when \\(t=t_{0}=\\left(t_{-}+t_{+}\\right)/2\\) and diverges when \\(t\\to t_{\\pm}\\). Then we may regard \\(t\\to t_{-}\\) as a Big Bang singularity and \\(t\\to t_{+}\\) as a Big Rip one. As clear from (41), the parameter \\(w=p/\\rho\\) is larger than \\(-1\\) when \\(t_{-}<t<t_{0}\\) and smaller than \\(-1\\) when \\(t_{0}<t<t_{+}\\). Therefore there occurs crossing of phantom divide \\(w=-1\\) when \\(t=t_{0}\\) thanks to the effect of inhomogeneous term in EOS. One more example may be of interest: \\[\\left(\\rho+p\\right)^{2}+\\frac{16}{\\kappa^{4}t_{0}^{2}}\\left(h_{0}-H\\right)\\ln \\left(\\frac{h_{0}-H}{h_{1}}\\right)=0. \\tag{42}\\] Here \\(t_{0}\\), \\(h_{0}\\), \\(h_{1}\\) are constants and \\(h_{0}>h_{1}>0\\). A solution is given by \\[H=h_{1}-h_{1}\\mathrm{e}^{-t^{2}/t_{0}^{2}}\\,\\quad\\rho=\\frac{3}{ \\kappa^{2}}\\left(h_{1}-h_{1}\\mathrm{e}^{-t^{2}/t_{0}^{2}}\\right)^{2}\\,\\] \\[p=-\\frac{3}{\\kappa^{2}}\\left(h_{1}-h_{1}\\mathrm{e}^{-t^{2}/t_{0} ^{2}}\\right)^{2}-\\frac{4h_{1}t}{\\kappa^{2}t_{0}^{2}}\\mathrm{e}^{-t^{2}/t_{0}^ {2}}. \\tag{43}\\] Hence, \\[\\dot{H}=\\frac{2h_{1}t}{t_{0}^{2}}\\mathrm{e}^{-t^{2}/t_{0}^{2}}. \\tag{44}\\] Using the energy conservation law (19) and the first FRW equation (5), the second FRW equation may be found: \\[-\\frac{2}{\\kappa^{2}}\\dot{H}=\\rho+p. \\tag{45}\\] As in (44), \\(\\dot{H}\\) is negative when \\(t<0\\) and positive when \\(t>0\\). Eq.(45) tells that the effective parameter \\(w=p/\\rho\\) of the equation of the state is \\(w>-1\\) when \\(t<0\\) and \\(w<-1\\) when \\(t>0\\). As we find the Hubble rate \\(H\\) goes to a constant \\(h_{0}\\), \\(H\\to h_{0}\\), in the limit of \\(t\\rightarrow\\pm\\infty\\), the universe asymptotically approaches to deSitter phase. Therefore there does not appear Big Rip nor Big Bang singularity. Hence, we presented several examples of inhomogeneous EOS for ideal fluid and demonstrated how the final state of the universe filled with such fluid changes if compare with homogeneous case. The ideal fluid with implicit EOS may be used to construct the cosmologies which cross the phantom divide. The interesting remark is in order (see also Appendix). In principle, the more general EOS may contain the derivatives of \\(\\dot{H}\\), like \\(\\dot{H}\\), \\(\\ddot{H}\\), Then more general EOS than (36) has the following form: \\[F\\left(p,\\rho,H,\\dot{H},\\ddot{H},\\cdots\\right)=0. 
\\tag{46}\\] Trivial example is that \\[p=w\\rho-\\frac{2}{\\kappa^{2}}\\dot{H}-\\frac{3(1+w)}{\\kappa^{2}}H^{2}. \\tag{47}\\] By using the first (5) or second (45) FRW equations, we find \\[\\rho=\\frac{3}{\\kappa^{2}}H^{2}\\,\\quad p=-\\frac{2}{\\kappa^{2}}\\dot{H}-\\frac{3}{ \\kappa^{2}}H^{2}. \\tag{48}\\] Therefore Eq.(47) becomes an identity, which means that any cosmology can be a solution if EOS (47) is assumed. Another, non-trivial example is \\[p=w\\rho-G_{0}-\\frac{2}{\\kappa^{2}}\\dot{H}+G_{1}\\dot{H}^{2}. \\tag{49}\\] Here it is supposed \\(G_{0}(1+w)>0\\). If \\(G_{1}(1+w)>0\\), there appears a solution which describes an oscillating universe, \\[H=h_{0}\\cos\\omega t\\,\\quad a=a_{0}\\mathrm{e}^{\\frac{h_{0}}{\\omega}\\sin\\omega t}. \\tag{50}\\]Here \\[h_{0}\\equiv\\kappa\\sqrt{\\frac{G_{0}}{3(1+w)}}\\,\\quad\\omega=\\sqrt{\\frac{3(1+w)}{G_{1} \\kappa^{2}}}. \\tag{51}\\] In case \\(G_{1}(1+w)<0\\), another cosmological solution appears \\[H=h_{0}\\cosh\\tilde{\\omega}t\\,\\quad a=a_{0}\\mathrm{e}^{\\frac{h_{0}}{\\omega} \\sinh\\tilde{\\omega}t}. \\tag{52}\\] Here \\(h_{0}\\) is defined by (51) again and \\(\\tilde{\\omega}\\) is defined by \\[\\tilde{\\omega}=\\sqrt{-\\frac{3(1+w)}{G_{1}\\kappa^{2}}}. \\tag{53}\\] One can go further and present many more examples of inhomogeneous EOS cosmology. ## III FRW cosmology with inhomogeneous interacting fluids In the present section, we study FRW universe filled with two interacting fluids. Note that there is some interest to study the cosmology with homogeneous interacting fluids [4; 15]. The inhomogeneous terms for such cosmology may be again motivated by (bulk) viscosity account [16]. Let us consider a system with two fluids, which satisfy the following EOS: \\[p_{1,2}=-\\rho_{1,2}-f_{1,2}\\left(\\rho_{1,2}\\right)-G_{1,2}\\left(H\\right). \\tag{54}\\] For simplicity, the only case is considered that \\[p_{\\pm}=w_{\\pm}\\rho_{\\pm}-G_{\\pm}\\left(H\\right). \\tag{55}\\] In the above equation and in the following, the indexes \\(\\pm\\) instead of \\(1,2\\), as \\(p_{1,2}=p_{\\pm}\\) are used. In a spatially flat FRW universe with a scale factor \\(a\\), the cosmological equations are given by \\[\\dot{\\rho}_{\\pm}+3H(\\rho_{\\pm}+p_{\\pm})=\\mp Q\\, \\tag{56}\\] \\[\\dot{H}=-\\frac{\\kappa^{2}}{2}(\\rho_{+}+p_{+}+\\rho_{-}+p_{-})\\,\\] (57) \\[H^{2}=\\frac{\\kappa^{2}}{3}(\\rho_{+}+\\rho_{-}). \\tag{58}\\] Not all of the above equations are independent, for example, Eqs.(56) and (58) lead to (57). From Eqs.(58), (57), and the equation for \\(\\rho_{+}\\) and \\(p_{+}\\) of (56), one obtains the equation for \\(\\rho_{-}\\) and \\(p_{-}\\) of (56). In [4], the following case has been considered: \\[G_{\\pm}(H)=0\\,\\quad Q=\\delta H^{2}\\,\\quad w_{+}=0\\,\\quad w_{-}=-2 \\tag{59}\\] where \\(\\delta\\) is a constant. Then combining Eq. (58) with Eqs. (56), the explicit solution follows \\[H = \\frac{2}{3}\\left(\\frac{1}{t}+\\frac{1}{t_{s}-t}\\right)\\, \\tag{60}\\] \\[\\rho_{+} = \\frac{4}{3\\kappa^{2}}\\left(\\frac{1}{t}+\\frac{1}{t_{s}-t}\\right) \\frac{1}{t}\\,\\] (61) \\[\\rho_{-} = \\frac{4}{3\\kappa^{2}}\\left(\\frac{1}{t}+\\frac{1}{t_{s}-t}\\right) \\frac{1}{t_{s}-t}\\,, \\tag{62}\\] where \\[t_{s}\\equiv\\frac{9}{\\delta\\kappa^{2}}\\,. \\tag{63}\\] In (60), it is assumed \\(0<t<t_{s}\\). The Hubble rate \\(H\\) diverges in a finite time (\\(t\\to t_{s}\\)) as in the Big Rip singularity. Therefore \\(t_{s}\\) corresponds to the life time of the universe. 
When \\(t\\ll t_{s}\\), \\(H\\) behaves as \\(2/3t\\), which means that the effective EOS is given by \\(w_{\\rm eff}\\sim 0>-1\\). On the other hand, when \\(t\\sim t_{s}\\), it appears \\(w_{\\rm eff}=-2<-1\\). Therefore the crossing of phantom divide \\(w_{\\rm eff}=-1\\) occurs. From (55 - 58), we obtain \\[\\rho_{\\pm} = \\frac{3}{2\\kappa^{2}}H^{2} \\tag{64}\\] \\[\\pm\\ \\frac{1}{w_{+}-w_{-}}\\bigg{\\{}\\ G_{+}(H)+G_{-}(H)\\] \\[-\\frac{3}{\\kappa^{2}}\\left(1+\\frac{w_{+}+w_{-}}{2}\\right)H^{2}- \\frac{2}{\\kappa^{2}}\\ddot{H}\\bigg{\\}}\\,\\] \\[Q = -\\frac{1}{w_{+}-w_{-}}\\bigg{\\{}\\left(G^{\\prime}_{+}(H)+G^{\\prime} _{-}(H)\\right)\\dot{H}\\] (65) \\[-\\frac{6}{\\kappa^{2}}\\left(1+\\frac{w_{+}+w_{-}}{2}\\right)H\\dot{H }-\\frac{2}{\\kappa^{2}}\\ddot{H}\\bigg{\\}}\\] \\[+3H\\left(1+\\frac{w_{+}+w_{-}}{2}\\right)\\] \\[\\times\\ \\frac{1}{w_{+}-w_{-}}\\bigg{\\{}\\ G_{+}(H)+G_{-}(H)\\] \\[-\\frac{3}{\\kappa^{2}}\\left(1+\\frac{w_{+}+w_{-}}{2}\\right)H^{2}- \\frac{2}{\\kappa^{2}}\\dot{H}\\bigg{\\}}\\] \\[-\\frac{9\\left(w_{+}-w_{-}\\right)}{4\\kappa^{2}}H^{3}\\] \\[-\\frac{3}{2}H\\left(G_{+}(H)-G_{-}(H)\\right)\\.\\] First, the case is considered that the Hubble rate \\(H\\) satisfies the following equation: \\[\\dot{H}=S(H)\\, \\tag{66}\\] where \\(S(H)\\) is a proper function of \\(H\\). Hence, \\(Q\\) can be presented as a function of \\(H\\) as \\[Q = Q(H) \\tag{67}\\] \\[= -\\ \\frac{1}{w_{+}-w_{-}}\\bigg{\\{}\\left(G^{\\prime}_{+}(H)+G^{\\prime}_{ -}(H)\\right)S(H)\\] \\[+3\\left(1+\\frac{w_{+}+w_{-}}{2}\\right)H\\left(G_{+}(H)+G_{-}(H) \\right)\\bigg{\\}}\\] \\[+\\frac{12}{\\kappa^{2}\\left(w_{+}-w_{-}\\right)}\\left(1+\\frac{w_{+ }+w_{-}}{2}\\right)HS(H)\\] \\[-\\frac{9}{\\kappa^{2}\\left(w_{+}-w_{-}\\right)}\\] \\[\\times\\left\\{\\left(1+\\frac{w_{1}+w_{2}}{2}\\right)^{2}+\\frac{ \\left(w_{+}-w_{-}\\right)^{2}}{4}\\right\\}H^{3}\\] \\[+\\frac{2}{\\kappa^{2}\\left(w_{+}-w_{-}\\right)}S^{\\prime}(H)S(H)\\] \\[-\\frac{3}{2}H\\left(G_{+}(H)-G_{-}(H)\\right)\\.\\] If \\(Q\\) is given by (67) for proper \\(G_{p}(H)\\) and \\(S(H)\\), the solution of Eqs. (55 - 58) can be obtained by solving Eq.(66) with respect to \\(H\\). Then from (64), one finds the behavior of \\(\\rho_{\\pm}\\). As an example, if we consider \\(S(H)\\) given by \\[S(H)=-\\frac{1}{h_{1}}\\left(H-h_{0}\\right)\\, \\tag{68}\\] the solution of (67) is given by \\[H=h_{0}+\\frac{h_{1}}{t-t_{0}}\\, \\tag{69}\\] Here \\(t_{0}\\) is a constant of the integration. In the solution (69), as \\(H\\) behaves as \\(H\\sim\\frac{h_{1}}{t-t_{0}}\\) when \\(t-t_{0}\\sim 0\\), the effective \\(w_{\\rm eff}\\) is given by \\(w=-1+\\frac{2}{3h_{1}}\\). On the other hand, as \\(H\\) becomes a constant \\(h_{0}\\) when \\(t\\) is large, we obtain the effective \\(w_{\\rm eff}=1\\). Next the simpler case is considered: \\[w_{\\pm}=-1\\pm w\\,\\quad G_{\\pm}(H)=\\pm G(H). \\tag{70}\\] Then (64) and (65) have the following forms: \\[\\rho_{\\pm} = \\frac{3}{2\\kappa^{2}}H^{2}\\mp\\frac{1}{\\kappa^{2}w}\\dot{H}\\, \\tag{71}\\] \\[Q = \\frac{1}{\\kappa^{2}w}\\ddot{H}-\\frac{9w}{2\\kappa^{2}}H^{3}-3HG(H). 
\\tag{72}\\] Thus, for example, for an arbitrary \\(G(H)\\), if \\(Q\\) is given by a function of \\(H\\) as \\[Q=\\frac{\\omega^{2}}{\\kappa^{2}w}\\left(H-h_{0}\\right)-\\frac{9w}{2\\kappa^{2}}H^ {3}-3HG(H)\\, \\tag{73}\\] that is, \\[\\ddot{H}=\\omega^{2}\\left(h_{0}-H\\right)\\, \\tag{74}\\] the solution of Eqs.(55 - 58) is given by \\[H = h_{0}+h_{1}\\sin\\left(\\omega t+\\alpha\\right)\\,\\] \\[\\rho_{\\pm} = \\frac{3}{2\\kappa^{2}}\\left(h_{0}+h_{1}\\sin\\left(\\omega t+\\alpha \\right)\\right)^{2} \\tag{75}\\] \\[\\mp\\frac{h_{1}\\omega}{\\kappa^{2}w}\\cos\\left(\\omega t+\\alpha \\right)\\.\\] Here \\(h_{1}\\) and \\(\\alpha\\) are constants of the integration. This demonstrates how the inhomogeneous term modifies late-time cosmology. Choosing \\(G_{\\pm}(H)\\) and \\(Q\\), one may realize a rather general cosmology. As was shown, if we introduce two fluids, even without assuming the non-linear EOS as in (36), the model crossing \\(w=-1\\) effectively can be realized. In fact, from (75) one has \\[\\dot{H}=h_{1}\\omega\\cos\\left(\\omega t+\\alpha\\right)\\, \\tag{76}\\] which changes its sign depending on time. When \\(\\dot{H}>0\\), effectively \\(w<-1\\), and when \\(\\dot{H}<0\\), \\(w>-1\\). Note that, as a special case in (73), we may choose, \\[G(H)=\\frac{\\omega^{2}}{3\\kappa^{2}w}\\left(1-\\frac{h_{0}}{H}\\right)-\\frac{3w}{ 2\\kappa^{2}}H^{2}\\, \\tag{77}\\] which gives \\(Q=0\\). As \\(Q=0\\), from (56), there is no direct interaction between two fluids. As is clear from (75), however, there is an oscillation in the energy densities, which may indicate that there is a transfer of the energy between the fluids. Hence, the \\(G(H)\\) term might generate indirect transfer between two fluids. ## IV Crossing the phantom barrier with inhomogeneous EOS and thermodynamical considerations Let us start from the EOS (9). Assuming that \\(w\\) crosses \\(-1\\), which corresponds to \\(f(\\rho)=0\\), in order that the integrations in (10) and (11) are finite, \\(f(\\rho)\\) should behave as \\[f(\\rho)\\sim f_{0}\\left(\\rho-\\rho_{0}\\right)^{s}\\,\\quad 0<s<1. \\tag{78}\\] Here \\(f(\\rho_{0})=0\\). Since \\(0<s<1\\), \\(f(\\rho)\\) could be multi-valued at \\(\\rho=\\rho_{0}\\), in general. Near \\(\\rho=\\rho_{0}\\), Eq.(11) gives, \\[t-t_{0}\\sim\\frac{\\left(\\rho-\\rho_{0}\\right)^{1-s}}{\\kappa\\sqrt{3}\\rho_{0}f_{0} (1-s)}. \\tag{79}\\] Here \\(t=t_{0}\\) when \\(\\rho=\\rho_{0}\\). Since \\[\\dot{H}=\\frac{\\kappa^{2}}{2}f(\\rho)\\, \\tag{80}\\] from the second FRW Eq.(45), one finds \\[\\dot{H} \\sim \\frac{\\kappa}{2^{2}}f_{0}\\left(\\frac{t-t_{0}}{t_{1}}\\right)^{s/( 1-s)}\\,\\] \\[t_{1} \\equiv \\frac{1}{\\kappa\\sqrt{3}\\rho_{0}f_{0}(1-s)}. \\tag{81}\\]Hence, when \\(s/(1-s)\\) is positive odd integer, the sign of \\(\\dot{H}\\) changes at \\(t=t_{0}\\), which shows the crossing \\(w=-1\\). In recent paper[13], based on consideration of mixture of two fluids: effective quintessence and effective phantom, the following, quite interesting EOS has been suggested: \\[A\\rho^{m}+Bp^{m}=\\left(C\\rho^{m}+Dp^{m}\\right)^{\\alpha}. \\tag{82}\\] Here \\(A\\), \\(B\\), \\(C\\), \\(D\\), and \\(\\alpha\\) are constants and \\(m\\) is an integer. This EOS can be regarded as a special case of (36). By writing \\(p\\) as \\[p=Q(\\rho)\\rho\\, \\tag{83}\\] one obtains \\[\\rho^{m(\\alpha-1)}=F\\left(Q^{m}\\right)\\equiv\\left(A+BQ^{m}\\right)\\left(C+DQ^{ m}\\right)^{-\\alpha}. 
\\tag{84}\\] Since \\[F^{\\prime}\\left(Q^{m}\\right) = \\left(C+DQ^{m}\\right)^{-\\alpha-1} \\tag{85}\\] \\[\\times\\left(BC-\\alpha AD+(1-\\alpha)BDQ^{m}\\right)\\,\\] it follows \\(F^{\\prime}\\left(Q^{m}\\right)=0\\) when \\[Q^{m}=-\\frac{\\frac{C}{D}-\\alpha\\frac{A}{B}}{1-\\alpha}. \\tag{86}\\] By properly choosing the parameters, we assume \\[\\frac{\\frac{C}{D}-\\alpha\\frac{A}{B}}{1-\\alpha}=1. \\tag{87}\\] When \\(Q\\sim-1\\), \\[F(Q^{m})\\sim q_{0}+q_{2}\\left(Q+1\\right)^{2}. \\tag{88}\\] Here \\[q_{0} = F(1)=(A-B)(C+D)^{-\\alpha}\\,\\] \\[q_{2} = \\frac{1}{2}\\left.\\frac{d^{2}F}{dQ^{2}}\\right|_{Q=-1} \\tag{89}\\] \\[= -\\alpha(\\alpha-1)(C-D)^{-\\alpha-2}D^{2}(A-B)m^{2}\\.\\] In (89), it is supposed \\(m\\) is an odd integer. Solving (84) with (88) with respect to \\(Q\\), one arrives at \\[Q=-1\\pm\\left\\{\\frac{m(\\alpha-1)\\rho_{0}^{m\\alpha-m-1}\\left(\\rho-\\rho_{0} \\right)}{q_{2}}\\right\\}^{1/2}. \\tag{90}\\] Here \\(\\rho_{0}\\) is defined by \\[q_{0}=\\rho_{0}^{m(\\alpha-1)}. \\tag{91}\\] Using (83), the function Q is \\[p\\sim-\\rho\\pm\\rho_{0}\\left\\{\\frac{m(\\alpha-1)\\rho_{0}^{m\\alpha-m-1}\\left(\\rho -\\rho_{0}\\right)}{q_{2}}\\right\\}^{1/2}. \\tag{92}\\] Comparing (92) with (78), we find that the EOS (82) surely corresponds to \\(s=1/2\\) case in (78). For the EOS (82), there are interesting, exactly solvable cases. We now consider such a case and see that there are really the cases of EOS crossing barrier \\(w=-1\\). The energy conservation law (2) may be rewritten as follows: \\[p=-\\rho-V\\frac{d\\rho}{dV}\\,\\ \\ \\ V\\equiv V_{0}a^{3}. \\tag{93}\\] Here \\(V_{0}\\) is a constant with the dimension of the volume. Use of Eq.(83) gives \\[0=V\\frac{d\\rho}{dV}+\\left(1+Q(\\rho)\\right)\\rho. \\tag{94}\\] Using (84), we further rewrite (94) as an equation with respect to \\(Q\\): \\[0 = -\\frac{\\left(BC-\\alpha AD+BD\\left(1-\\alpha\\right)Q^{m}\\right)Q^{ m-1}}{\\left(1-\\alpha\\right)\\left(C+DQ^{m}\\right)\\left(A+BQ^{m}\\right)}V\\frac{dQ}{dV} \\tag{95}\\] \\[+1+Q\\.\\] Assuming Eq.(87), the above Eq.(95) takes a simple form: \\[0=-\\frac{BD\\left(1+Q^{m}\\right)Q^{m-1}}{\\left(C+DQ^{m}\\right)\\left(A+BQ^{m} \\right)}V\\frac{dQ}{dV}+1+Q. \\tag{96}\\] Especially in the simplest case \\(m=1\\), one can easily solve (96) \\[Q=-\\frac{C-A\\left(\\frac{V}{V_{1}}\\right)^{\\beta}}{D-B\\left(\\frac{V}{V_{1}} \\right)^{\\beta}}. \\tag{97}\\] Here \\(V_{1}\\) is a constant of the integration and \\[\\beta\\equiv\\frac{BD}{AD-BC}=\\frac{1}{\\left(1-\\alpha\\right)\\left(\\frac{A}{B}-1 \\right)}. \\tag{98}\\] In the above equation, Eq.(87) is used. Hence, when \\(\\left(V/V_{1}\\right)^{\\beta}\\to 0\\), it follows \\(w=p/\\rho=Q\\rightarrow-C/D\\). On the other hand, when \\(\\left(V/V_{1}\\right)^{\\beta}\\rightarrow\\infty\\), one arrives at \\(w=Q\\rightarrow-A/B\\). Hence, the value of \\(w\\) changes depending on the size of the universe. Especially when \\[\\frac{V}{V_{1}}=\\left(\\frac{C-D}{A-B}\\right)^{1/\\beta}\\, \\tag{99}\\] there occurs the crossing of phantom divide \\(w=Q=-1\\) (compare with [13]). 
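The closed-form result (97) makes the volume dependence of \\(w\\) easy to inspect numerically. The Python sketch below evaluates \\(Q(V)\\) for one illustrative parameter set chosen by us to satisfy the constraint (87), confirms the limits \\(-C/D\\) and \\(-A/B\\), and locates the crossing volume predicted by (99).

```python
import numpy as np

# illustrative parameters (ours) obeying (87): (C/D - alpha*A/B)/(1 - alpha) = 1
A, B, C, D, alpha = 2.0, 1.0, 3.0, 1.0, 2.0
assert abs((C/D - alpha*A/B)/(1 - alpha) - 1.0) < 1e-12

beta = B*D/(A*D - B*C)            # eq. (98); here beta = -1

def Q(V_over_V1):
    x = V_over_V1**beta
    return -(C - A*x)/(D - B*x)   # eq. (97)

# limits: (V/V1)^beta -> 0 gives -C/D, (V/V1)^beta -> infinity gives -A/B
print("small (V/V1)^beta :", Q(1e6),  "  expect", -C/D)
print("large (V/V1)^beta :", Q(1e-6), "  expect", -A/B)

# crossing of the phantom divide, eq. (99)
V_cross = ((C - D)/(A - B))**(1.0/beta)
print("crossing at V/V1 =", V_cross, "  where Q =", Q(V_cross))   # Q = -1
```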
As the inhomogeneous generalization of the EOS (82), we may consider \\[A\\left(\\frac{3}{\\kappa^{2}}H^{2}\\right)^{m}+Bp^{m}=\\left(C\\rho^{m}+Dp^{m} \\right)^{\\alpha}\\, \\tag{100}\\] or \\[A\\rho^{m}+Bp^{m}=\\left(C\\left(\\frac{3}{\\kappa^{2}}H^{2}\\right)^{m}+Dp^{m} \\right)^{\\alpha}\\, \\tag{101}\\]or, more general EOS \\[(A-A^{\\prime})\\rho^{m}+A^{\\prime}\\left(\\frac{3}{\\kappa^{2}}H^{2} \\right)^{m}+Bp^{m}\\] \\[=\\left((C-C^{\\prime})\\rho^{m}+C^{\\prime}\\left(\\frac{3}{\\kappa^{2}} H^{2}\\right)^{m}+Dp^{m}\\right)^{\\alpha}. \\tag{102}\\] By using the first FRW equation (5), it follows that the EOS (100), (101), and (102) are equivalent to (82). Especially if \\(m=1\\) and (87) could be satisfied, one obtains the solution (97). Hence, using the first and second FRW Eqs.(5) and (45), the EOS (82) with \\(m=1\\) can be rewritten as \\[\\frac{d^{2}}{dt^{2}}\\left(a^{\\frac{3}{2}\\left(1-\\frac{A}{B} \\right)}\\right)\\] \\[=\\frac{3\\kappa^{2}(A-B)}{4B^{2}}\\left(\\frac{3\\kappa^{2}(C-D)}{4D^ {2}}\\right)^{-\\alpha}\\] \\[\\quad\\times a^{\\frac{3}{2}\\left\\{1-\\frac{A}{B}-\\alpha\\left(1- \\frac{C}{D}\\right)\\right\\}}\\left\\{\\frac{d^{2}}{dt^{2}}\\left(a^{\\frac{3}{2} \\left(1-\\frac{C}{D}\\right)}\\right)\\right\\}. \\tag{103}\\] When (87) is satisfied, this second order differential Eq. looks as \\[\\frac{d^{2}X}{dt^{2}} = \\left(\\frac{4B}{3\\kappa^{2}(A-B)}\\right)^{\\alpha-1}\\alpha^{ \\alpha}\\left(\\frac{d^{2}X^{\\frac{1}{\\alpha}}}{dt^{2}}\\right)^{\\alpha}\\,\\] \\[X \\equiv a^{\\frac{3}{2}\\left(1-\\frac{A}{B}\\right)}\\,, \\tag{104}\\] which also admits, besides the solution crossing \\(w=-1\\) (97), a flat universe solution \\[a=a_{0}\\,\\quad(a_{0}:\\text{constant})\\, \\tag{105}\\] and deSitter universe solution \\[a=a_{0}\\mathrm{e}^{\\frac{2}{\\sqrt{\\frac{B}{\\kappa^{2}}-B}\\alpha^{\\frac{B}{2( 1-\\alpha)}}t}}. \\tag{106}\\] As next generalization of (82), one may consider the following EOS: \\[A\\rho+Bp-\\frac{A-B}{\\kappa^{2}}H^{2}\\] \\[=\\left(C\\rho+Dp-\\frac{C-D}{\\kappa^{2}}H^{2}\\right)^{\\alpha(H)}. \\tag{107}\\] Here \\(\\alpha\\) is assumed to be a function of \\(H\\). Then by using the first and second FRW Eqs.(5) and (45), the EOS (107) can be rewritten as \\[-\\frac{2B}{\\kappa^{2}}\\dot{H}=\\left(-\\frac{2D}{\\kappa^{2}}\\dot{H}\\right)^{ \\alpha(H)}\\, \\tag{108}\\] which gives \\[-\\frac{\\kappa^{2}}{2D}t=\\int^{H}dH\\mathrm{e}^{-\\frac{\\ln\\frac{B}{\\kappa(t)-1} }{\\alpha(t)-1}}. \\tag{109}\\] As an example, for the solution (75) \\[\\omega t=\\frac{1}{h_{1}}\\int^{H}\\frac{dH}{\\sqrt{1-\\left(\\frac{H-h_{0}}{h_{1}} \\right)^{2}}}. \\tag{110}\\] Comparing (109) with (110), in case that \\[h_{1}\\omega=-\\frac{\\kappa^{2}}{2D}\\,\\quad\\alpha(H)=1+\\frac{2\\ln\\frac{B}{D}}{ \\ln\\left(1-\\left(\\frac{H-h_{0}}{h_{1}}\\right)^{2}\\right)}\\, \\tag{111}\\] the solution (75) follows from the EOS (107). As another generalization of (82), we may consider the following EOS: \\[A\\rho^{m}+Bp^{m}=G(H)\\left(C\\rho^{m}+Dp^{m}\\right)^{\\alpha}. \\tag{112}\\] Here \\(G(H)\\) is a function of the Hubble rate. For simplicity, the following case is considered \\[m=1\\,\\quad G(H)=\\left(\\frac{3}{\\kappa^{2}}H^{2}\\right)^{\\gamma}. \\tag{113}\\] Then, writing \\(p\\) as (83) and using \\(Q\\), the energy looks like \\[\\rho=(A+BQ)^{\\frac{1}{\\gamma+\\alpha-1}}(C+DQ)^{-\\frac{\\alpha}{\\gamma+\\alpha-1} }\\, \\tag{114}\\] which corresponds to (84). 
Assuming Eq.(87), by using (83), instead of (96), one gets \\[0=-\\frac{(1-\\alpha)BD}{\\left(1-\\alpha-\\gamma\\right)\\left(C+DQ\\right)\\left(A+BQ \\right)}V\\frac{dQ}{dV}+1\\, \\tag{115}\\] which can be solved as \\[Q=-\\frac{C-A\\left(\\frac{V}{V_{1}}\\right)^{\\beta}}{D-B\\left(\\frac{V}{V_{1}} \\right)^{\\beta}}. \\tag{116}\\] Here \\(V_{1}\\) is again a constant of the integration and \\[\\tilde{\\beta}\\equiv\\frac{(1-\\alpha)BD}{\\left(1-\\alpha-\\gamma\\right)AD-BC}. \\tag{117}\\] Then as in (97), when \\(\\left(V/V_{1}\\right)^{\\beta}\\to 0\\), we have \\(w=p/\\rho=Q\\rightarrow-C/D\\) and when \\(\\left(V/V_{1}\\right)^{\\beta}\\rightarrow\\infty\\), we have \\(w=Q\\rightarrow-A/B\\). The power of \\(V\\), however, is changed in Eq.(116) if compare with Eq.(97). Thus, we presented number of FRW cosmologies (including oscillating universes) filled by cosmic fluid with inhomogeneous EOS where phantom divide is crossing. Definitely, one can suggest more examples or try to fit the astrophysical data with more precise model of above sort. In [17], the thermodynamical models of the dark energy have been constructed. Especially it has been shown that, for the fluid with constant \\(w\\), the free energy \\(F(T,V)\\) is generally given by \\[F(T,V)=T\\hat{F}\\left((T/T_{0})^{1/w}(V/V_{0})\\right). \\tag{118}\\]Here \\(T\\) is the temperature and \\(V\\) is the volume of the universe. For the dimensional reasons, the positive parameters \\(T_{0}\\) and \\(V_{0}\\) are introduced. The interesting question is what happens with the entropy when the value of \\(w\\) crosses \\(-1\\). As a model, the case that \\(w=Q\\) depends on \\(V\\) as in (97) may be considered: \\[w=w(V)=\\frac{w_{0}+w_{1}\\left(\\frac{V}{V_{0}}\\right)^{\\beta}}{1+\\left(\\frac{V} {V_{0}}\\right)^{\\beta}}. \\tag{119}\\] When \\(\\beta>0\\), \\(w\\to w_{0}\\) for small universe and \\(w\\to w_{1}\\) for large universe. The specific dependence of free energy may be taken as below \\[F=\\frac{f_{0}T}{T_{0}}\\left\\{\\left(\\frac{T}{T_{0}}\\right)^{\\frac{1}{w(V)}}\\frac {V}{V_{0}}\\right\\}^{\\gamma}. \\tag{120}\\] Here \\(\\gamma\\) is a constant. When \\(\\gamma=1\\) and \\(w\\) is a constant, the free energy is proportional to the volume. For usual matter, due to self-interaction and related effects, \\(\\gamma\\) is not always unity. Then, the pressure \\(p\\), the energy density \\(\\rho\\), and the entropy \\({\\cal S}\\) are given by \\[p = -\\frac{\\partial F}{\\partial V}\\] \\[= -\\frac{f_{0}\\gamma}{V_{0}}\\left(\\frac{T}{T_{0}}\\right)^{1+\\frac{ \\gamma}{w(V)}}\\left(\\frac{V}{V_{0}}\\right)^{\\gamma-1}\\] \\[\\times\\left\\{1+\\gamma\\ln\\left(\\frac{T}{T_{0}}\\right)\\frac{\\left( w_{1}-w_{0}\\right)\\beta\\left(\\frac{V}{V_{0}}\\right)^{\\beta}}{\\left(w_{0}+w_{1} \\left(\\frac{V}{V_{0}}\\right)^{\\beta}\\right)^{2}}\\right\\}\\,\\] \\[\\rho = \\frac{1}{V}\\left(F-T\\frac{\\partial F}{\\partial T}\\right)\\] \\[= -\\frac{f_{0}\\gamma}{wV_{0}}\\left(\\frac{T}{T_{0}}\\right)^{1+\\frac{ \\gamma}{w(V)}}\\left(\\frac{V}{V_{0}}\\right)^{\\gamma-1}\\,\\] \\[{\\cal S} = -\\frac{\\partial F}{\\partial T} \\tag{121}\\] \\[= -\\frac{f_{0}}{T_{0}}\\left(1+\\frac{\\gamma}{w}\\right)\\left(\\frac{T }{T_{0}}\\right)^{\\frac{\\gamma}{w(V)}}\\left(\\frac{V}{V_{0}}\\right)^{\\gamma}\\.\\] In the pressure \\(p\\), the second term in large \\(\\{\\}\\) comes from \\(V\\) dependence of \\(w\\) in (119), which vanishes for large or small universe (\\(V\\rightarrow\\infty\\) or \\(V\\to 0\\)). 
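These limits can be checked by differentiating the free energy (120) directly, without relying on the quoted closed forms. The sympy sketch below does this for one illustrative parameter choice (ours: \\(w_{0}=0.2\\), \\(w_{1}=-1.2\\), \\(\\gamma=0.1\\), \\(f_{0}=-1\\), \\(\\beta=1\\), which satisfies \\(0<\\gamma<|w_{0}|,|w_{1}|\\)) and evaluates \\(w(V)\\), \\(p/\\rho\\) and the sign of the entropy \\({\\cal S}=-\\partial F/\\partial T\\) in the small- and large-volume regimes.

```python
import sympy as sp

T, V = sp.symbols('T V', positive=True)

# illustrative parameters (ours): 0 < gamma < |w_0|, |w_1| and f_0 < 0
w0, w1, beta, gamma, f0 = 0.2, -1.2, 1, 0.1, -1.0
T0, V0, Tval = 1.0, 1.0, 0.5        # units with T_0 = V_0 = 1, evaluated at T = T_0/2

w = (w0 + w1*(V/V0)**beta) / (1 + (V/V0)**beta)      # eq. (119)
F = f0*(T/T0) * ((T/T0)**(1/w) * (V/V0))**gamma      # eq. (120)

p   = -sp.diff(F, V)                                 # p   = -dF/dV
rho = (F - T*sp.diff(F, T)) / V                      # rho = (F - T dF/dT)/V
S   = -sp.diff(F, T)                                 # S   = -dF/dT

for Vval in (1e-4, 1e-2, 1e2, 1e4):
    subs = {T: Tval, V: Vval}
    wv, ratio, Sv = [float(expr.subs(subs)) for expr in (w, p/rho, S)]
    print(f"V/V0 = {Vval:8.0e}   w = {wv:+.3f}   p/rho = {ratio:+.3f}   S > 0: {Sv > 0}")
# At these volumes p/rho approaches w_0 (small V) and w_1 (large V), and the
# entropy is positive for this choice of f_0 < 0, in line with the discussion below.
```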
Hence, for small or large universe \\(p/\\rho\\to w(V)\\to w_{0,1}\\). As seen from the expression for \\({\\cal S}\\), the sign of the entropy changes at \\[w=-\\gamma. \\tag{122}\\] If \\(\\gamma=1\\), the sign of the entropy \\({\\cal S}\\) changes when crossing \\(w=-1\\) (the entropy becomes negative when \\(w\\) is less than \\(-1\\) as it was observed in [17]), but in the case that \\[\\gamma<|w_{0}|,\\,|w_{1}|\\, \\tag{123}\\] the entropy does not change its sign. We should note that the expressions (121) are not well-defined, unless \\(\\gamma=0\\), when \\(w=0\\), which corresponds to dust. One may assume \\(0<\\gamma<w_{0}\\ll 1\\) and \\(w_{1}\\lesssim-1\\). Then as clear from (119), \\(w\\) changes from \\(w_{0}\\sim 0\\) for small universe to \\(w_{1}\\lesssim-1\\) for large universe and crosses \\(-1\\). Since we always have \\(|\\gamma/w_{0}|<1\\) and therefore \\(1+\\gamma/w>0\\), the entropy \\({\\cal S}\\) (121) is always positive and does not change its sign as long as \\(f_{0}<0\\). This explicitly demonstrates very beautiful phenomenon: there exist thermodynamical models for dark energy with crossing of phantom divide. Despite the preliminary expectations, the entropy of such dark energy universe even in its phantom phase may be positive! ## V Discussion In summary, the effect of modification of general EOS of dark energy ideal fluid by the insertion of inhomogeneous, Hubble parameter dependent term in the late-time universe is considered. Several explicit examples of such term which is motivated by time-dependent bulk viscosity or deviations from general relativity are considered. The corresponding late-time FRW cosmology (mainly, in its phantom epoch) is described. It is demonstrated how the structure of future singularity is changed thanks to generalization of dark energy EOS. The number of FRW cosmologies admitting the crossing of phantom barrier are presented. The inhomogeneous term in EOS helps to realize such a transition in a more natural way. It is interesting that in the case when universe is filled with two interacting fluids (for instance, dark energy and dark matter) the Hubble parameter dependent term may effectively absorb the coupling between the fluids. Again, in case of two dark fluids the phantom epoch with possibility of crossing of \\(w=-1\\) barrier occurs is constructed. It is also very interesting that there exists thermodynamical dark energy model where despite the preliminary expectations[17] the entropy in phantom epoch may be positive. This is caused by crossing of phantom barrier. As it was demonstrated making the dark energy EOS more general, this extra freedom in inhomogeneous term brings a number of new possibilities to construct the late-time universe. One can go even further, assuming that inhomogeneous terms in EOS are not restricted by energy conservation law (as it is often the case in braneworld approach). Nevertheless, only more precise astrophysical data will help to understand which of number of EOS of the universe under consideration (in other words, dark energy models) is realistic. ## Acknowledgements We thank S. Tsujikawa for participation at the early stage of this work. The research by SDO has been partially supported by RFBR grant 03-01-00105 and LRSSgrant 1252.2003.2. ## Appendix A Inhomogeneous terms from modified gravity Let us consider the possibility to obtain the inhomogeneous EOS from the modified gravity. 
As an illustrative example, the following action is considered: \\[S=\\int d^{4}x\\sqrt{-g}\\left(\\frac{1}{2\\kappa^{2}}+{\\cal L}_{\\rm matter}+f(R) \\right). \\tag{100}\\] Here \\(f(R)\\) can be an arbitrary function of the scalar curvature \\(R\\) and \\({\\cal L}_{\\rm matter}\\) is the Lagrangian for the matter. In the FRW universe, the gravitational equations are: \\[0 = -\\frac{3}{\\kappa^{2}}H^{2}+\\rho-f\\left(R=6\\dot{H}+12H^{2}\\right) \\tag{101}\\] \\[+6\\left(\\dot{H}+H^{2}-H\\frac{d}{dt}\\right)\\] \\[\\times f^{\\prime}\\left(R=6\\dot{H}+12H^{2}\\right)\\,\\] \\[0 = \\frac{1}{\\kappa^{2}}\\left(2\\dot{H}+3H^{2}\\right)+p+f\\left(R=6 \\dot{H}+12H^{2}\\right)\\] (102) \\[+2\\left(-\\dot{H}-3H^{2}+\\frac{d^{2}}{dt^{2}}+2H\\frac{d}{dt}\\right)\\] \\[\\times f^{\\prime}\\left(R=6\\dot{H}+12H^{2}\\right)\\.\\] Here \\(\\rho\\) and \\(p\\) are the energy density and the pressure coming from \\({\\cal L}_{\\rm matter}\\). They may satisfy the equation of state like \\(p=w\\rho\\). One may now define the effective energy density \\(\\tilde{\\rho}\\) and \\(\\tilde{p}\\) by \\[\\tilde{\\rho} \\equiv \\rho-f\\left(R=6\\dot{H}+12H^{2}\\right) \\tag{103}\\] \\[+6\\left(\\dot{H}+H^{2}-H\\frac{d}{dt}\\right)\\] \\[\\times f^{\\prime}\\left(R=6\\dot{H}+12H^{2}\\right)\\,\\] \\[\\tilde{p} = p+f\\left(R=6\\dot{H}+12H^{2}\\right)\\] (104) \\[+2\\left(-\\dot{H}-3H^{2}+\\frac{d^{2}}{dt^{2}}+2H\\frac{d}{dt}\\right)\\] \\[\\times f^{\\prime}\\left(R=6\\dot{H}+12H^{2}\\right)\\.\\] Thus, it follows \\[\\tilde{p} = w\\tilde{\\rho}+(1+w)f\\left(R=6\\dot{H}+12H^{2}\\right) \\tag{105}\\] \\[+2\\left(\\left(-1-3w\\right)\\dot{H}-3\\left(1+w\\right)H^{2}+\\frac{d ^{2}}{dt^{2}}\\right.\\] \\[+\\left(2+3w\\right)H\\frac{d}{dt}\\right)f^{\\prime}\\left(R=6\\dot{H }+12H^{2}\\right)\\.\\] In the situation where the derivative of \\(H\\) can be neglected as \\(\\dot{H}\\ll H^{2}\\) or \\(\\dot{H}\\ll H^{3}\\), we find \\[\\tilde{p} \\sim w\\tilde{\\rho}+G(H)\\,\\] \\[G(H) \\equiv \\left(1+w\\right)f\\left(R=12H^{2}\\right) \\tag{106}\\] \\[-3\\left(1+w\\right)H^{2}f^{\\prime}\\left(R=12H^{2}\\right)\\.\\] Typically \\(H\\) has a form like \\(H\\sim h_{0}/\\left(t-t_{1}\\right)\\) or \\(H\\sim h_{0}/\\left(t_{2}-t\\right)\\), with \\(h_{0}=2/3(w+1)\\), corresponding to (6). Hence, the condition \\(\\dot{H}\\ll H^{2}\\) or \\(\\ddot{H}\\ll H^{3}\\) requires \\(h_{0}\\gg 1\\), which shows \\(w\\sim-1\\) as in the modern universe. This supports our observation that inhomogeneous terms may be the effective ones which are predicted due to currently modified gravity theory. The modification of the EOS by \\(G(H)\\) terms might come also from the braneworld scenario. Indeed, the single brane model is described by the following simple action \\[S = \\frac{M_{\\rm Pl}^{2}}{r_{c}}\\int d^{4}xdy\\sqrt{-g^{(5)}}R^{(5)} \\tag{107}\\] \\[+\\int d^{4}x\\sqrt{-g}\\left(M_{\\rm Pl}^{2}R+{\\cal L}_{\\rm matter} \\right)\\.\\] Here \\(M_{\\rm Pl}^{2}=1/8\\pi G\\), \\(y\\) is the coordinate of the extra dimension, and \\({\\cal L}_{\\rm matter}\\) is the Lagrangian density of the matters on the brane. The five-dimensional quantities are denoted by suffix \"(5)\". In ref.[18] it has been shown that the FRW equation for 4d brane universe could be given by \\[\\frac{3}{\\kappa^{2}}\\left(H^{2}\\pm\\frac{H}{r_{c}}\\right)=\\rho. \\tag{108}\\] Here \\(\\rho\\) is the matter energy density coming from \\({\\cal L}_{\\rm matter}\\). 
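Before turning to the braneworld case in more detail, the effective term (106) obtained above can be checked symbolically for a concrete curvature function. For the illustrative choice \\(f(R)=cR^{n}\\) (ours, not from the text), (106) reduces to \\(G(H)=(1+w)\\,c\\,12^{n-1}(12-3n)H^{2n}\\); in particular, quadratic curvature, \\(n=2\\), gives \\(G(H)=72c(1+w)H^{4}\\). The sympy sketch below verifies this.

```python
import sympy as sp

H, w, c, R = sp.symbols('H w c R', positive=True)

def G_of_H(f):
    """Effective inhomogeneous term of eq. (106) for a given f(R)."""
    fprime = sp.diff(f, R)
    return sp.expand(((1 + w)*f - 3*(1 + w)*H**2*fprime).subs(R, 12*H**2))

# illustrative family f(R) = c R^n (our choice, not from the text)
for n in (1, 2, 3):
    G = G_of_H(c*R**n)
    claimed = sp.expand((1 + w)*c*12**(n - 1)*(12 - 3*n)*H**(2*n))
    print("n =", n, "  matches closed form:", sp.simplify(G - claimed) == 0, " ", G)
# n = 2 (quadratic curvature) gives G(H) = 72 c (1 + w) H^4
```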
More general case is considered in ref.[19] where the FRW equation is modified as \\[\\frac{3}{\\kappa^{2}}\\left(H^{2}-\\frac{H^{\\alpha}}{r_{c}^{2-\\alpha}}\\right)= \\rho. \\tag{109}\\] Here \\(\\alpha\\) is a constant. One may assume that the matter energy density \\(\\rho\\) satisfies the energy conservation as in (2). Then from (108), we find \\[-\\frac{2}{\\kappa^{2}}\\left(1-\\frac{\\alpha H^{\\alpha-2}}{2r_{c}^{2-\\alpha}} \\right)\\dot{H}=\\rho+p. \\tag{110}\\] By comparing (109) with the first FRW equation (5) and (110) with the second FRW equation (45), one may define the effective energy density \\(\\tilde{\\rho}\\) and pressure \\(\\tilde{p}\\) as \\[\\tilde{\\rho}\\equiv\\rho+\\frac{3H^{\\alpha}}{\\kappa^{2}r_{c}^{2-\\alpha}}\\,\\quad\\tilde{p}\\equiv-\\frac{3H^{\\alpha}}{\\kappa^{2}r_{c}^{2-\\alpha}}-\\frac{ \\alpha H^{\\alpha-2}\\dot{H}}{\\kappa^{2}r_{c}^{2-\\alpha}}\\, \\tag{111}\\] They satisfy the first (5) and second (45) FRW equations: \\[\\frac{3}{\\kappa^{2}}H^{2}=\\tilde{\\rho}\\,\\quad-\\frac{2}{\\kappa^{2}}\\dot{H}= \\tilde{\\rho}+\\tilde{p}. \\tag{112}\\]If it is also assumed the matter energy density \\(\\rho\\) and the matter pressure \\(p\\) satisfy the EOS like \\(p=w\\rho\\), the effective EOS for \\(\\tilde{\\rho}\\) and \\(\\tilde{p}\\) is given by \\[\\tilde{p}=w\\tilde{\\rho}-(1+w)\\frac{3H^{\\alpha}}{\\kappa^{2}r_{c}^{2-\\alpha}}- \\frac{\\alpha H^{\\alpha-2}\\dot{H}}{\\kappa^{2}r_{c}^{2-\\alpha}}. \\tag{100}\\] Especially if one can neglect \\(\\dot{H}\\), it follows \\[\\tilde{p}\\sim w\\tilde{\\rho}-(1+w)\\frac{3H^{\\alpha}}{\\kappa^{2}r_{c}^{2-\\alpha }}. \\tag{101}\\] This shows that brane-world scenario may also suggest various forms of inhomogeneous modification for effective EOS of matter on the brane. ## References * (1) R. R. Caldwell, M. Kamionkowski and N. N. Weinberg, Phys. Rev. Lett. **91**, 071301 (2003) [arXiv:astro-ph/0302506]; B. McInnes, JHEP **0208**, 029 (2002) [arXiv:hep-th/0112066]; hep-th/0502209; V. Faraoni, Int. J. Mod. Phys. D **11**, 471 (2002); A. E. Schulz, Martin J. White, Phys. Rev. D **64**, 043514 (2001); S. Nojiri and S. D. Odintsov, Phys. Lett. B **562**, 147 (2003) [arXiv:hep-th/0303117]; Phys. Lett. B **571**, 1 (2003) [arXiv:hep-th/0306212]; P. Singh, M. Sami and N. Dadhich, arXiv:hep-th/0305110; P. Gonzalez-Diaz, Phys. Lett. **B586**, 1 (2004) [arXiv:astro-ph/0312579]; hep-th/0408225; H. Stefancic, Eur. Phys. J. C **36**, 523 (2004) [arXiv:astro-ph/0312484]; M. Sami and A. Toporensky, Mod. Phys. Lett. **A19**, 1509 (2004) [arXiv:gr-qc/0312009]; X. Meng and P. Wang, arXiv:hep-ph/0311070; Z. Guo, Y. Piao and Y. Zhang, arXiv:astro-ph/0404225; S. M. Carroll, A. De Felice and M. Trodden, arXiv:astro-ph/0408081; C. Csaki, N. Kaloper and J. Terning, arXiv:astro-ph/0409596; S. Tsujikawa and M. Sami, arXiv:hep-th/0409212; P. Gonzales-Diaz and C. Siguenza, Nucl. Phys. **B697**, 363 (2004) [arXiv:astro-ph/0407421]; L. P. Chimento and R. Lazkoz, Mod. Phys. Lett. **A19**, 2479 (2004) [arXiv:gr-qc/0405020]; gr-qc/0307111; J. Hao and X. Li, arXiv:astro-ph/0404154; G. Calcagni, Phys. Rev. D **71**, 023511 (2005) [arXiv:gr-qc/0410027]; P. Wu and H. Yu, arXiv:astro-ph/0407424; J. Lima and J. S. Alcaniz, arXiv:astro-ph/0402265; S. Nesseris and L. Perivolaropoulos, Phys. Rev. D **70**, 123529 (2004) [arXiv:astro-ph/0410309]; M. Bento, O. Bertolami, N. Santos and A. Sen, arXiv:astro-ph/0412638; P. Scherrer, arXiv:astro-ph/0410508; Z. Guo,Y. Piao, X. Zhang and Y. Zhang, Phys. Lett. B **608** 177 (2005) [arXiv:astro-ph/0410654]; E. 
Elizalde, S. Nojiri and S. D. Odintsov, Phys. Rev. D **70**, 043539 (2004) [arXiv:hep-th/0405034]; E. Babichev, V. Dokuchaev and Yu. Eroshenko, arXiv:astro-ph/0407190; S. Sushkov, arXiv:gr-qc/0502084; K. Bronnikov, arXiv:gr-qc/0410119; L. Perivolaropoulos, arXiv:astro-ph/0412308; A. Vikman, Phys. Rev. D **71**, 023515 (2005) [arXiv:astro-ph/0407107]; X. Zhang, H. Li, Y. Piao and X. Zhang, arXiv:astro-ph/ 0501652; M. Bouhmadi-Lopez and J. Jimenez-Madrid, arXiv:astro-ph/0404540; Y. Wei, arXiv:gr-qc/0410050; gr-qc/0502077; S. K. Srivastava, arXiv:hep-th/0411630; V. K. Onemli and R. Woodard, arXiv:gr-qc/0406098; M. Dabrowski and T. Stachowiak, arXiv:hep-th/0411199. I. Ya. Arefeva, A. S. Koshelev and S. Yu. Vernov, arXiv:astro-ph/0412638; E. Elizalde, S. Nojiri, S. D. Odintsov and P. Wang, Phys. Rev. D **71**, 103504 (2005) [arXiv:hep-th/0502082]; V. Sahni, arXiv:astro-ph/0502032; H. Wei, R.-G. Cai and D. Zeng, arXiv:hep-th/0501160; R. Curbelo, T. Gonzalez and I. Quiros, arXiv:astro-ph/0502141; B. Gumjudgia, T. Naskar, M. Sami and S. Tsujikawa, arXiv:hep-th/0502191; F. Lobo, arXiv:gr-qc/0502099; R. Lazkoz, S. Nesseris and L. Perivolaropoulos, arXiv:astro-ph/0503230; H. Lu, Z. Huang and W. Fang, arXiv:hep-th/0504038. X. Zhang, arXiv:astro-ph/0501160; F. Bauer, arXiv:gr-qc/0501078; A. Anisimov, E. Babichev, A. Vikman, arXiv:astro-ph/0504560; J. Sola and H. Stefancic, arXiv:astro-ph/0505133; A. Andrianov, F. Cannata and A. Kamenshchik, arXiv:gr-qc/0505087. * (3) S. Nojiri and S. D. Odintsov, Phys. Rev. D **70**, 103522 (2004) [arXiv:hep-th/0408170]. * (4) S. Nojiri, S. D. Odintsov and S. Tsujikawa, Phys. Rev. D **71**, 063004 (2005) [arXiv:hep-th/0501025]. * (5) R. Caldwell, Phys. Lett. B **545**, 23 (2002). * (6) J. D. Barrow, Class. Quant. Grav. **21**, L79 (2004) [arXiv:gr-qc/0403084]; S. Nojiri, S. D. Odintsov, Phys. Lett. B **595**, 1 (2004), [arXiv:hep-th/0405078]; J. D. Barrow, Class. Quant. Grav. **21**, 5619 (2004) [arXiv:gr-qc/0409062]; M. C. B. Abdalla, S. Nojiri and S. D. Odintsov, Class. Quant. Grav. **22**, L35 (2005), [arXiv:hep-th/0409177]; S. Cotsakis and I. Klaoudatou, arXiv:gr-qc/0409022; V. Sahni and Yu. Shtanov, JCAP **0311**, 014 (2003) [arXiv:astro-ph/0202346];K. Lake, Class. Quant. Grav. **21**, L129 (2004) [arXiv:gr-qc/0407107]; M. Dabrowski, arXiv:gr-qc/0410033; L. Fernandez-Jambrina and R. Lazkoz, Phys. Rev. D **70**, 121503 (2004) [arXiv:gr-qc/0410124]; J. D. Barrow and C. Tsagas, arXiv:gr-qc/0411045. * (7) S. Nojiri and S. D. Odintsov, Phys. Lett. B **599**, 137 (2004) [arXiv:astro-ph/0403622]; G. Allemandi, A. Borowiec, M. Francaviglia and S. D. Odintsov, arXiv:gr-qc/0504057. * (8) S. Nojiri, S. D. Odintsov and M. Sasaki, arXiv:hep-th/0504052; M. Sami, A. Toporensky, P. Trejakov and S. Tsujikawa, arXiv:hep-th/0504154. * (9) I. Brevik and O. Gorbunova, arXiv:gr-qc/0504001. * (10) I. Brevik, arXiv:gr-qc/0404095; O. Gron, Astrophys. Space Sci. **173**, 191 (1990); S. Weinberg, _Gravitation and Cosmology_, John Wiley& Sons, 1972. * (11) J. D. Barrow, Phys. Lett. B **180**, 335 (1986); Nucl. Phys. B **310**, 743 (1988). * (12) M. Szydlowski, W. Godlowski and R. Wojtak, arXiv:astro-ph/0505202. * (13) H. Stefancic, arXiv:astro-ph/0504518. * (14) H. Stefancic, arXiv:astro-ph/0411630. * (15) L. Amendola, Phys. Rev. D**62**, 043511 (2000); W. Zimdahl, D. Pavon and L. P. Chimento, Phys. Lett. B **521**, 133 (2001); G. Mangano, G. Miele and V. Pettorino, Mod. Phys. Lett. A **18**, 831(2003); G. Farrar and P. J. E. Peebles, Astrophys. J. **604**, 1 (2004); S. del Campo, R. 
Herrera and D. Pavon, Phys. Rev. D **70**, 043540 (2004); R.-G. Cai and A. Wang, JCAP **0503**, 002 (2005); D. Pavon and W. Zimdahl, arXiv:gr-qc/0505020; L. Chimento and D. Pavon, gr-qc/0505096. * (16) M. Giovannini, arXiv:gr-qc/0504132; arXiv:astro-ph/0504655. * (17) I. Brevik, S. Nojiri, S. D. Odintsov, L. Vanzo, Phys. Rev. D **70**, 043520 (2004) [arXiv:hep-th/0401073]. * (18) C. Deffayet, G. Dvali and G. Gabadadze, Phys. Rev. D **65**, 044023 [arXiv:astro-ph/0105068]. * (19) G. Dvali and M. S. Turner, arXiv:astro-ph/0301510.
The dark energy universe equation of state (EOS) with an inhomogeneous, Hubble-parameter-dependent term is considered. The motivation for introducing such a term comes from time-dependent (bulk) viscosity considerations and from modifications of general relativity. For several explicit examples of such an EOS it is demonstrated how the type of future singularity changes, how the phantom epoch emerges, and how the crossing of the phantom barrier occurs. Similar cosmological regimes are considered for a universe filled with two interacting fluids and for a universe with an implicit EOS. In particular, the crossing of the phantom barrier is realized more easily thanks to the presence of the inhomogeneous term. A thermodynamical dark energy model is presented in which the entropy of the universe may remain positive even in the phantom era, as a result of the crossing of the \\(w=-1\\) barrier. pacs: 98.70.Vc
Summarize the following text.
arxiv-format/0505244v2.md
Evidence of a Weak Galactic Center Magnetic Field from Diffuse Low Frequency Nonthermal Radio Emission T. N. LaRosa1, C. L. Brogan2, S. N. Shore3, T. J. W. Lazio4, N. E. Kassim4, & M. E. Nord45 Footnote 1: affiliation: Department of Biological & Physical Sciences, Kennesaw State University, 1000 Chastain Rd., Kennesaw, GA 30144; [email protected] Footnote 2: affiliation: Institute for Astronomy, 640 North A’ohoku Place, Hilo, HI 96720; [email protected]. Footnote 3: affiliation: Dipartimento di Fisica “Enrico Fermi”, Università di Pisa and INFN, Sezione di Pisa, Largo B. Pontecorvo 3, I-56127 Pisa, Italy; [email protected] Footnote 4: affiliation: Remote Sensing Division, Naval Research Laboratory, Washington DC 20375-5351; [email protected]; [email protected]; [email protected]. Footnote 5: affiliation: Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131. ## 1 Introduction An outstanding question in Galactic center (GC) studies concerns the origin, strength, and role of magnetic fields in the region. Large-scale filamentary non-thermal structures (twodozen confirmed), with lengths of up to tens of parsecs, have been observed in the vicinity of the GC for over the last two decades (e.g. Morris & Serabyn 1996, and references therein; Lang et al. 1999; LaRosa, Lazio, & Kassim 2001; Nord et al. 2004; LaRosa et al. 2004; Yusef-Zadeh, Hewitt, & Cotton 2004). The spatial distribution of the non-thermal filaments (NTFs) is confined to within \\(\\sim 1.5^{\\circ}\\) of the GC, and this phenomenon seems to be unique to the GC region. The NTFs are widely believed to be magnetic field lines illuminated by the injection of relativistic particles. Within the context of this picture, it has been suggested that a pervasive, strong (\\(\\sim 1\\) mG) magnetic field must permeate the entire GC region in order to confine the NTFs (e.g., Morris & Serabyn 1996 and references therein). This Letter reports the discovery of a previously unrecognized diffuse, nonthermal structure in the GC and shows that its properties are not consistent with a strong, space-filling magnetic field. ## 2 Observations and Results We have used the Very Large Array1 (VLA) in all four configurations to image the Galactic center region at 74 MHz. The resulting image presented in Figure 1\\(a\\) has a resolution of \\(125^{\\prime\\prime}\\), an rms noise of \\(\\sim 0.2\\) Jy beam\\({}^{-1}\\), and a dynamic range of \\(\\sim 200\\). The details of the 74 MHz data reduction are discussed more thoroughly in Brogan et al. (2003; C. L. Brogan et al. 2005, in preparation). This is the highest resolution and sensitivity image of the Galactic center region at frequencies below 300 MHz yet created. Even so, the data used to make Figure 1\\(a\\) were tapered to provide surface brightness sensitivity optimized to the large-scale emission which is the focus of this Letter. Future improvements in ionospheric calibration offer the opportunity to achieve the full resolving power of the VLA at 74 MHz and to produce a map more suited to studying smaller, discrete sources. Footnote 1: The Very Large Array and the Green Bank Telescope are facilities of the National Radio Astronomy Observatory operated under a cooperative agreement with the National Science Foundation. A wide range of discrete emission and thermal absorption features are evident in the 74 MHz image (Brogan et al. 
2003; 2005), as well as a large-scale region of diffuse emission surrounding Sgr A and extending \\(\\pm 3^{\\circ}\\) in longitude and \\(\\pm 1^{\\circ}\\) in latitude. At a distance of 8 kpc this angular scale corresponds to a region 840 by 280 pc. Interestingly, none of the NTFs are detected in the 74 MHz image, and away from known supernova remnants, the emission is quite smooth. The extent of the large-scale low-frequency structure is well matched in size to that of the central molecular zone (CMZ); the region surrounding the Galactic center where the molecular gas density, temperature, and velocity dispersion are high (Morris & Serabyn 1996; Brogan et al. 2003). We hereafter refer to the low'frequency structure as the Galacticcenter diffuse nonthermal source (DNS). The even larger scale, smooth Galactic background is resolved out of this image due to the spatial filtering afforded by the interferometer; only structures smaller than \\(\\sim 5.5^{\\circ}\\) (the angular scale corresponding to the shortest baseline) down to the resolution limit of \\(125^{\\prime\\prime}\\) are sampled. Although the size of the DNS is comparable to the largest angular scale to which our measurements are sensitive, two of its characteristics strengthen our confidence that it is a distinct source: (1) the strong thermal absorption in the vicinity of the GC itself (Fig. 1_a_) effectively segregates the DNS into two pieces, each of which is considerably smaller than \\(5.5^{\\circ}\\), and (2) the DNS is also detected in higher frequency (330 MHz) single-dish data (see below). To complement our 74 MHz data, we have imaged a \\(15^{\\circ}\\times 15^{\\circ}\\) region centered on the GC with the Green Bank Telescope (GBT) at 330 MHz. The GBT data were obtained with a 20 MHz bandwidth, 1024 spectral channels, and a scanning rate that more than Nyquist sampled the beam (\\(38.9^{\\prime}\\)). The GC region is bright enough at 330 MHz that attenuators are required to avoid saturating the detectors. The effect of this high attenuation setting was carefully calibrated out of the flux calibration scans on 3C 286 and the observations of the off-positions. In order to correct for the non zero temperature of the off-positions located at \\((l,b)=(\\pm 20^{\\circ},0^{\\circ})\\), estimates for the sky brightness at these two locations were estimated from the 408 MHz survey image (Haslam et al. 1982) assuming a spectral index of \\(-2.7\\) (Platania et al. 2003) and added back to the 330 MHz image. Finally, the image was converted to a flux density scale using the observed antenna temperature of 3C 286 compared to its expected brightness temperature (Ott et al. 1994) and the gain of the GBT (approximately 2 K/Jy at low frequencies). A sub image of the resulting GBT 330 MHz image with a resolution of \\(38.9^{\\prime}\\) is shown in Figure 1\\(b\\). This image includes contributions from all of the discrete sources apparent in the high-resolution VLA 330 MHz image by LaRosa et al. (2000), the DNS, and the smooth Galactic background synchrotron emission, with the latter contributing much of the total flux. Thus, the Galactic background must be removed and the discrete source contribution accounted for before the 330 MHz properties of the DNS can be assessed. 
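The two calibration steps just described, scaling the 408 MHz Haslam brightness temperatures of the off-positions to 330 MHz with a \\(-2.7\\) temperature spectral index and converting antenna temperature to flux density with the \\(\\sim\\)2 K Jy\\({}^{-1}\\) GBT gain, amount to simple power-law arithmetic. A minimal Python sketch, in which the 408 MHz temperatures are placeholders rather than the survey values, is given below.

```python
# Sketch of the two calibration steps described in the text.  The 408 MHz
# temperatures below are placeholders, NOT values from the Haslam survey.

def t_sky_330(t_408, alpha_T=-2.7):
    """Brightness temperature scaled from 408 to 330 MHz, T_b ~ nu^alpha_T."""
    return t_408 * (330.0 / 408.0) ** alpha_T

def antenna_temp_to_jy(t_ant, gain_k_per_jy=2.0):
    """Convert GBT antenna temperature [K] to flux density [Jy]."""
    return t_ant / gain_k_per_jy

for t408 in (20.0, 50.0):                  # placeholder off-position temperatures [K]
    print(f"T(408 MHz) = {t408:5.1f} K  ->  T(330 MHz) = {t_sky_330(t408):6.1f} K")

print("10 K of antenna temperature corresponds to",
      antenna_temp_to_jy(10.0), "Jy at the stated gain")
```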
For background subtraction we chose a constant longitude slice free of discrete sources (and well outside of the DNS) near \\(\\ell=354.5^{\\circ}\\) and subtracted it from every other constant longitude plane (median weight filtering was not used since it can introduce structures on size scales equal to the filter). The image resulting from this procedure is shown in Figure 1\\(c\\). Despite the resolution difference, there is excellent agreement between the diffuse structure visible in the background-subtracted GBT 330 MHz image (Fig. 1_c_) and the 74 MHz VLA image (Fig. 1_a_). Having established the reality of the DNS, we now estimate its integrated 74 and 330 MHz flux and spectral index. Due to the copious thermal absorption at 74 MHz apparent in Fig. 1\\(a\\), a simple integration of the total flux within the DNS region is impossible. Instead, we have used the average 74 MHz flux density at locations within the DNS that appear free of both discrete emission sources and thermal absorption, together with its apparent size (an ellipse of dimension \\(6^{\\circ}\\times 2^{\\circ}\\)) to calculate its total flux. Using this method we find an integrated 74 MHz flux density for the DNS of \\(16.2\\pm 1\\) kJy. This estimate is likely to be a lower limit, however, as we show below an underestimate of the total 74 MHz flux density of the DNS does not significantly affect our arguments regarding the weakness of the large-scale GC magnetic field. The integrated 330 MHz flux density within the boundaries of the DNS (Fig. 1_c_) is \\(\\sim 8000\\) Jy, while the total flux density contained in discrete sources from the VLA 330 MHz image (LaRosa et al., 2000) is \\(\\sim 1000\\) Jy.2 Thus, we estimate that the integrated 330 MHz flux density of the DNS is \\(\\sim 7000\\) Jy. Except for the immediate vicinity of the Galactic center itself (where the _thermal_ ionized gas density is very high), thermal absorption effects should not be a significant effect at 330 MHz (e.g., Nord et al., 2004) Combined with the lower limit to the 74 MHz DNS flux, we find that the spectral index of the DNS must be steeper than \\(-0.7\\) (assuming \\(S_{\ u}\\propto\ u^{\\alpha}\\)) or \\(-2.7\\) if brightness temperatures are considered. This value is comparable to the brightness temperature spectral indices measured for the Galactic plane synchrotron background emission (\\(-2.55\\) to \\(-2.7\\); see Platania et al., 2003). Footnote 2: The VLA 330 MHz image is not sensitive to structures larger than \\(\\sim 1^{\\circ}\\) and thus contains little or no contribution from the Galactic background or the DNS. ### Minimum Energy Analysis Using standard synchrotron theory (Moffat, 1975), the minimum energy and magnetic field are given by \\(U_{min}=0.5(\\phi AL)^{4/7}V^{3/7}\\) and \\(B(U_{min})=2.3(\\phi AL/V)^{2/7}\\). Here \\(A\\) is a function of the spectral index, \\(L\\) is the luminosity, \\(V\\) is the source volume, and \\(\\phi\\) is the ratio of energy in protons compared to electrons. Using a 74/330 MHz spectral index of \\(-0.7\\ the equipartition particle energy density is \\(1.2(\\phi/f)^{2/7}\\) eV cm\\({}^{-3}\\). A filling factor as low as 1% will only increase the derived field strength and particle density by a factor of \\(\\sim 4\\). A large proton to electron energy ratio of 100 could increase these estimates by another factor of \\(\\sim 4\\). 
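The sensitivity of these equipartition estimates to the assumed proton-to-electron energy ratio \\(\\phi\\) and filling factor \\(f\\) follows directly from the \\((\\phi/f)^{2/7}\\) scaling of \\(B(U_{min})\\). The short sketch below evaluates the quoted factors and applies them to the \\(\\sim 6(\\phi/f)^{2/7}\\)\\(\\mu\\)G equipartition field given in the abstract for the full DNS; this is a back-of-the-envelope check, not a new measurement.

```python
# Scaling of the equipartition field with the proton/electron energy ratio (phi)
# and the volume filling factor (f):  B ~ (phi/f)^(2/7).

B_eq_microgauss = 6.0        # equipartition field for phi = f = 1 (from the abstract)

def scale(phi=1.0, f=1.0):
    return (phi / f) ** (2.0 / 7.0)

print("f = 0.01      -> factor", round(scale(f=0.01), 2))        # ~ 3.7
print("phi = 100     -> factor", round(scale(phi=100.0), 2))     # ~ 3.7
print("both extremes -> B ~",
      round(B_eq_microgauss * scale(phi=100.0, f=0.01), 1), "microgauss")   # ~ 80
```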
Thus, even with these extreme parameters, the magnetic field on size scales larger than the 74 MHz beam (\\(125^{\\prime\\prime}\\)) must be \\(\\lesssim 100\\)\\(\\mu\\)G. The above calculation applies to the entire \\(6^{\\circ}\\times 2^{\\circ}\\) region spanned by the DNS. However, the nonthermal filaments are found only in the inner \\(1.5^{\\circ}\\times 0.5^{\\circ}\\). The integrated 330 MHz flux in this smaller region is \\(\\sim 1000\\) Jy and yields a minimum energy of \\((\\phi^{4/7}f^{3/7})\\times 10^{51}\\) ergs (electron energy density of \\(\\sim 7.2\\) eV cm\\({}^{3}\\)) and magnetic field of \\(11(\\phi/f)^{2/7}\\)\\(\\mu\\)G. Thus, the gradient in the DNS brightness does not translate into a significantly larger magnetic field in the innermost region compared to the region as a whole. ## 3 Discussion ### Energy Requirements For the minimum energy parameters the radiative lifetime of electrons generating the DNS is of order \\(5\\times 10^{7}\\) yr while electrons in a strong 1 mG magnetic field would have a much shorter synchrotron lifetime, only \\(10^{5}\\) yr. The energy contained in a typical supernova remnant in particles and magnetic field is \\(5\\times 10^{49}\\) ergs (Duric et al 1995), so that \\(\\sim 200\\) supernovae (SNe) within the last \\(5\\times 10^{7}\\) yr are required to power the DNS. The SN rate scales with the star formation rate, which is a factor of \\(\\approx 250\\) higher in the inner 50 pc than in the disk (Figer et al 2004). Given this enhanced star formation rate, the GC SN rate of about _one_ SN per \\(10^{5}\\) yr is sufficient to power the DNS and maintain it in equilibrium. Alternatively the DNS could be due to a single extreme event (with an energy of \\(\\sim 10^{52}\\) ergs) or was formed during a _recent_ starburst. Indeed, the presence of a large bipolar wind structure centered on the GC, as detected in dust and radio emission studies, suggests that the GC has undergone periods of more prolific star formation in the past (e.g., Sofue & Handa 1984; Bland-Hawthorn & Cohen 2003). However, Bland-Hawthorn & Cohen (2003) estimate that the GC \"Omega lobe\" was formed during a starburst event \\(10^{7}\\) yr ago, inconsistent with the 1 mG synchrotron lifetime. Additionally, the agreement between the derived DNS spectral index (\\(-0.7\\)) and that of the diffuse Galactic plane emission suggests that the acceleration mechanism is similar. This steep spectral index also suggests that the emission arises from a relatively old population of electrons. This scenario for the generation of the DNS is also consistent with the diffuse soft X-ray emission from the GC region. Muno et al. (2004) report the presence of diffuse hard (kT\\(\\sim 8\\)keV) and soft (kT\\(\\sim 0.8\\) keV) X-ray emission from the inner 20 pc. The soft component requires input of the energy equivalent of one SN every \\(10^{5}\\) yr, in good agreement with the rate required to explain the DNS. The hard component is more difficult to explain with this injection rate (requiring \\(\\sim 1\\) SN per 3000 yr), but Muno et al. suggest that a significant fraction of the hard emission may arise from unresolved cataclysmic variables and other compact objects, so that the high implied rate may be overestimated. ### Constraints on GC Cosmic Ray Energy Density By construction, the minimum energy procedure constrains the field and particle energies together but they can also be analyzed separately. 
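As an aside, the radiative lifetimes quoted in the energy-requirements discussion above follow, at the order-of-magnitude level, from the standard synchrotron cooling time \\(t_{syn}\\simeq 3m_{e}c^{2}/(4\\sigma_{T}c\\gamma U_{B})\\), with the Lorentz factor of electrons radiating near frequency \\(\\nu\\) estimated from \\(\\nu\\simeq\\gamma^{2}\\nu_{g}\\). The sketch below evaluates this at 330 MHz for a 10 \\(\\mu\\)G and a 1 mG field; it neglects pitch-angle averaging, so agreement with the quoted \\(5\\times 10^{7}\\) and \\(10^{5}\\) yr figures is only to within factors of a few.

```python
import math

# CGS constants
m_e   = 9.109e-28     # electron mass [g]
c     = 2.998e10      # speed of light [cm/s]
sigma = 6.652e-25     # Thomson cross section [cm^2]
e_esu = 4.803e-10     # electron charge [esu]
yr    = 3.156e7       # seconds per year

def t_syn_yr(B_gauss, nu_hz):
    """Rough synchrotron cooling time of electrons radiating near nu in field B."""
    nu_g  = e_esu * B_gauss / (2.0 * math.pi * m_e * c)   # gyrofrequency
    gamma = math.sqrt(nu_hz / nu_g)                       # nu ~ gamma^2 nu_g
    U_B   = B_gauss**2 / (8.0 * math.pi)                  # magnetic energy density
    return 3.0 * m_e * c**2 / (4.0 * sigma * c * gamma * U_B) / yr

nu = 330e6
print("B = 10 uG :  t_syn ~ %.1e yr" % t_syn_yr(10e-6, nu))   # ~ 7e7 yr (text: ~5e7)
print("B =  1 mG :  t_syn ~ %.1e yr" % t_syn_yr(1e-3, nu))    # ~ 7e4 yr (text: ~1e5)
```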
Consider the inner \\(1.5^{\\circ}\\times 0.5^{\\circ}\\) were the magnetic field if often assumed to be 1 mG. The integrated flux of the DNS in this region is \\(\\sim 1000\\) Jy. Assuming a 1 mG magnetic field, this flux requires a cosmic-ray (CR) electron energy density of 0.04 eV cm\\({}^{-3}\\). In contrast, the energy density of electrons in the local interstellar medium (ISM) is a factor of 5 larger (0.2 eV cm\\({}^{-3}\\); Webber 1998). Thus, unless the electron energy density in the GC region is significantly lower than that in other parts of the Galaxy, a 1 mG field is inconsistent with the 330 MHz data. The cosmic-ray energy density in a particular region is determined by the local balance between CR production and escape rates. In the disk, particles escape with a typical turbulent diffusion velocity of order 10-15 km s\\({}^{-1}\\) (e.g., Wentzel 1974). In contrast, it is likely that particles in the GC region escape much more rapidly due to strong winds of order a few thousand kilometers per second (e.g., Suchkov, Allen, & Heckman 1993; Koyama et al. 1996). However, since the production rate scales with the SN rate, which exceeds that of the disk by a factor of \\(\\sim 250\\) in the GC region, the higher escape rate in the GC could be balanced by the higher production rate. Thus, the cosmic-ray energy density in the Galactic center region may well be similar to that measured in the disk. Unfortunately, no direct _measurements_ of the GC cosmic-ray energy density currently exist. There are, however, a number of indirect indicators that can be used to constrain its properties. Diffuse \\(\\gamma\\)-ray emission in our Galaxy is produced by the interaction of high-energy cosmic rays with interstellar material, as well as a relatively uncertain contribution from unresolved compact sources (e.g. pulsars, X-ray binaries, etc.). Thus, \\(\\gamma\\)-ray emission can be expected to peak where the cosmic-ray density is high, the gas density is high, or there is a high concentration of compact objects. Recent reanalysis of all available EGRET data toward the GC region by Mayer-Hasslewander et al. (1998) suggest a number of pertinent conclusions: (1) the level of \\(\\gamma\\)-ray emission within the DNS region is entirely consistent with that predicted by models of the Galactic disk (Hunter et al. 1997) with the exception of a \\(\\lesssim 0.6^{\\circ}\\) radius region of enhanced emission centered on the GC itself. (2) Taking into account newer estimates of the GC gas mass, this finding disputes earlier claims that there is a _deficit_ of cosmic rays toward the GC region (e.g. Blitz 1985). (3) No excess is observed toward the Sgr B, Sgr C, or Sgr D molecular cloud complexes as might be expected if gas density played a large role in the GC \\(\\gamma\\)-ray emissivity. (4) The spectrum of the _excess_ GC \\(\\gamma\\)-ray emission is significantly harder than that of the Galactic disk and most likely arises from the GC radio arc and/or compact objects like pulsars. Galactic center molecular clouds have significantly higher temperatures, densities, and turbulent velocities than their disk counterparts (i.e. the CMZ; Morris & Serabyn 1996; Rodriguez-Fernandez et al. 2004). In the past, heating by a larger than average CR energy flux has been invoked to explain the high temperatures of the GC clouds (Suchkov et al. 1993). 
More recently, a number of studies have shown that the required heating can be produced by turbulent dissipation, shear due to the intense GC gravitational potential, and large-scale shocks (e.g., Rodriguez-Fernandez et al. 2004; Gusten, & Philipp 2004). Moreover, recent observations of the GC region in infrared transitions of H\\({}_{3}^{+}\\) by Goto et al. (2002) suggest that the GC CR ionization rate (\\(\\zeta\\)) is consistent with that of the disk \\(\\zeta\\sim 3\\times 10^{-17}\\) s\\({}^{-1}\\)cm\\({}^{-3}\\). It has also been suggested that an enhanced GC CR density might reveal itself through enhanced abundances of lithium and boron. These atoms can be formed through spallation reactions between cosmic rays and the ISM. Lubowich, Turner, & Hobbs (1998) have searched the GC region for hyperfine-structure lines of neutral Li and B with no success. From their detection upper limits, these authors suggest that the GC cosmic ray energy density cannot exceed the disk value by more than a factor of 13. Together these results ranging from \\(\\gamma\\)-ray emission to the non-detection of lithium and boron provide strong circumstantial evidence that the GC cosmic-ray energy density is similar to that of the Galactic disk. ## 4 Conclusions Utilizing new 74 and 330 MHz observations, we have discovered a new Galactic center feature, the diffuse nonthermal source. The DNS is extended along the Galactic plane with a size of \\(\\sim 6^{\\circ}\\times 2^{\\circ}\\) and a spectral index steeper than \\(-0.7\\). A minimum energy analysis of the DNS requires that any _pervasive_ magnetic field must be weak, of order 10 \\(\\mu\\)G (and almost certainly \\(\\lesssim 100\\)\\(\\mu\\)G). This field strength is 2 orders of magnitude less than the commonly cited value of 1 mG inferred from the assumption that the NTFs are tracing a globally organized magnetic field. We find that the minimum energy required to fuel the DNS can be supplied by the enhanced star formation, and hence supernova, rate within the Galacticcenter region. The low global GC region magnetic field derived from this work is supported by a number of other lines of evidence. If the global GC field strength is 1 mG, then the observed emission would imply a very low GC CR energy density. However, EGRET \\(\\gamma\\)-ray observations (Mayer-Hasslewander et al. 1998; Hunter et al. 1997) indicate a GC CR energy density consistent with that measured in the Galactic disk. Light element (Li and B) abundance upper limits, \\(H_{3}^{+}\\) detections, and consideration of molecular cloud heating are also all consistent with a normal Galactic plane cosmic-ray energy density in the GC. This weak field picture is substantially different from the canonical one that has emerged over the past two decades for the \"Galactic center magnetosphere\" (e.g. Morris & Serabyn 1996; Gusten, & Philipp 2004) wherein the entire Galactic center region (approximately bounded by the CMZ) has a globally organized strong magnetic field of order 1 mG. Together, the radio and \\(\\gamma\\)-ray data yield a GC cosmic-ray energy density and magnetic field strength that are comparable to their disk values. Any departures in the energetics from the disk values result from the enhanced level of star formation and associated activity at the Galactic center, an even more extreme version of which may be the nuclear starburst in M82 (Odegard and Seaquist 1991). 
It is important to note that our analysis constrains the average magnetic field strength on size scales larger than our 74 MHz resolution of 125\\({}^{\\prime\\prime}\\) and _does not_ preclude locally strong magnetic fields on smaller size scales. We thank Doug Finkbeiner for an especially useful comment from a critical reading of the manuscript. Basic research in radio astronomy at the NRL is supported by the Office of Naval Research. SNS thanks P. Caselli and D. Galli for discussions. TNL thanks the INFN/Pisa for travel support. ## References * (1) Bland-Hawthorne, J. & Cohen, M. 2003, ApJ, 582, 246 * (2) Blitz, L., Bloemen, J. B. G. M., Hermsen, W., & Bania, T. M. 1985, A&A, 142, 267 * (3) Brogan, C. L., Nord, M., Kassim, N., Lazio, J., & Anantharamaiah, K. 2003, Astronomische Nachrichten Supplement, 324, 17 * (4) Duric, N., Gordon, S. M., Goss, W. M., Viallefond, F., & Lacey, C. 1995, ApJ, 445, 173 * (5) Goto, M., et al. 2002, PASJ, 54, 951 * (7)Gusten, R. & Philipp, S. D., 2004, in Proc. Fourth Cologne-Bonn-Zermatt Symp., The Dense Interstellar Medium in Galaxies, ed. S. Pfalzner, et al. (Berlin: Springer) 253 * Figer et al. (2004) Figer, D. F., Rich, R. M. Kim, S. S, Morris, M., & Serabyn, E. 2004, ApJ, 601, 319 * Haslam et al. (1982) Haslam, C. G. T., Stoffel, H., Slater, C. J. & Wilson, W. E. 1982, A&AS, 47, 1 * Hunter et al. (1997) Hunter, S. D. et al. 1997, ApJ, 481, 205 * Koyama et al. (1996) Koyama, K., Maeda, Y., Sonobe, T., Takeshima, T., Tanaka, Y. & Yamauchi, S. 1996, PASJ, 48, 249 * Lang et al. (1999) Lang, C. C., Anantharamaiah, K., Kassim, N. E., & Lazio, T. J. W. 1999, ApJ, 521, L41 * LaRosa et al. (2000) LaRosa, T. N., Kassim, N. E., Lazio, T. J. W., & Hyman, S. D. 2000, AJ, 119, 207 * LaRosa et al. (2001) LaRosa, T. N., Lazio, T. J. W., & Kassim, N. E. 2001, ApJ, 563, 163 * LaRosa et al. (2004) LaRosa, T. N., Nord, M. E., Joseph, T., Lazio, W., & Kassim, N. E. 2004, ApJ, 607, 302 * Lubowich et al. (1998) Lubowich, D. A., Turner, B. E., & Hobbs, L. M. 1998, ApJ, 508, 729 * Mayer-Hasslewander et al. (1998) Mayer-Hasslewander, H. A. et al. 1998, A&A, 335, 161 * Moffat (1975) Moffat, A. T. 1975, in _Galaxies and the Universe: Stars and Stellar Systems Vol. 9_ Eds. Sandage, A., Sandage, M., Kristian, J. (Chicago: Univ. of Chicago Press) * Morris & Serabyn (1996) Morris, M., & Serabyn, E. 1996, ARA&A, 34, 645 * Muno et al. (2004) Muno, M. P., et al. 2004, ApJ, 613, 326 * Nord et al. (2004) Nord, M. E., Lazio, T. J. W., Kassim, N. E., Hyman, S. J., LaRosa, T. N., Brogan, C. L. & Duric, N. 2004, AJ, 128, 1646 * Odegard & Seaquist (1991) Odegard., N. & Seaquist, E. R. 1991, ApJ, 369, 320 * Ott et al. (1994) Ott, M., Witzel, A., Quirrenback, A., Krichbaum, T. P., Standke, K. J., Schalinski, C. J. & Hummel, C. A. 1994, A&AS, 284 331 * Platania et al. (2003) Platania, P., Burigana, C., Maino, D., Caserini, E., Bersanelli, M., Cappellini, B., & Mennella, A. 2003, A&A, 410, 847 * Rodriguez-Fernandez et al. (2004) Rodriguez-Fernandez, N. Sofue, Y. & Handa, T. 1984, Nature, 310, 568 * () Suchkov, A., Allen, R. J., & Heckman, T. M. 1993, ApJ, 413, 542 * () Webber, W. R. 1998, ApJ, 506, 329 * () Wentzel, D. G. 1974, Ann. Rev. Astro. Astrophys., 12, 71 * () Yusef-Zadeh, F., Hewitt, J. W.,
New low-frequency 74 and 330 MHz observations of the Galactic center (GC) region reveal the presence of a large-scale (\\(6^{\\circ}\\times 2^{\\circ}\\)) diffuse source of nonthermal synchrotron emission. A minimum energy analysis of this emission yields a total energy of \\(\\sim(\\phi^{4/7}f^{3/7})\\times 10^{52}\\) ergs and a magnetic field strength of \\(\\sim 6(\\phi/f)^{2/7}\\)\\(\\mu\\)G (where \\(\\phi\\) is the proton-to-electron energy ratio and \\(f\\) is the filling factor of the synchrotron-emitting gas). The equipartition particle energy density is \\(1.2(\\phi/f)^{2/7}\\) eV cm\\({}^{-3}\\), a value consistent with cosmic-ray data. However, the derived magnetic field is roughly two orders of magnitude below the 1 mG field commonly invoked for the GC. With this field strength the source can be maintained by the supernova rate inferred from GC star formation. Furthermore, a strong magnetic field would imply an abnormally low GC cosmic-ray energy density. We conclude that the mean magnetic field in the GC region must be weak, of order 10 \\(\\mu\\)G (at least on size scales \\(\\gtrsim 125^{\\prime\\prime}\\)). Subject headings: ISM: Galactic center -- radio continuum
# A large-deviations analysis of the \\(GI/GI/1\\) SRPT queue

Misja Nuyens\\({}^{*}\\) and Bert Zwart\\({}^{{\\dagger},{\\ddagger}}\\)

\\({}^{*}\\)Department of Mathematics, Vrije Universiteit Amsterdam, De Boelelaan 1081, 1081 HV Amsterdam, The Netherlands; [email protected], phone +31 20 5987834, fax +31 20 5987653

\\({}^{\\dagger}\\)CWI, P.O. Box 94079, 1090 GB Amsterdam, The Netherlands

\\({}^{\\ddagger}\\)Department of Mathematics & Computer Science, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands; [email protected], phone +31 40 2472813, fax +31 40 2465995

###### _2000 Mathematics Subject Classification:_ 60K25 (primary), 60F10, 90B22 (secondary). _Keywords & Phrases:_ busy period, large deviations, priority queue, shortest remaining processing time, sojourn time. _Short title:_ Large deviations for SRPT

## 1 Introduction

In queueing theory the shortest remaining processing time (SRPT) discipline is famous, since it is known to minimize the mean queue length and sojourn time over all work-conserving disciplines, see for example Schrage [22] and Baccelli & Bremaud [3]. Recent developments in communication networks have led to a renewed interest in queueing models with SRPT. For example, Harchol-Balter _et al._[13] propose the usage of SRPT in web servers. An important issue in such applications is the performance of SRPT for customers with a given service time. Bansal & Harchol-Balter [4] give some evidence against the opinion that SRPT does not work well for large jobs. They base their arguments on mean-value analysis. Some interesting results on the mean sojourn time in heavy traffic were recently obtained by Bansal [5] and Bansal & Gamarnik [6], who show that SRPT significantly outperforms FIFO if the system is in heavy traffic.

In the present paper we approach SRPT from a large-deviations point of view. We investigate the probability of a long sojourn time, assuming that service times are light-tailed. For heavy-tailed (more precisely, regularly varying) service-time distributions, Nunez-Queija [18] has shown that the tail of the sojourn-time distribution \\(\\mathbb{P}\\{V_{SRPT}>x\\}\\) and the tail of the service-time distribution \\(\\mathbb{P}\\{B>x\\}\\) coincide up to a constant. This appealing property is shared by several other preemptive service disciplines, for example by Last-In-First-Out (LIFO), Foreground-Background (FB) and Processor Sharing (PS); see [7] for a survey. Non-preemptive service disciplines, like FIFO, are known to behave worse: the tail of the sojourn time behaves like \\(x\\mathbb{P}\\{B>x\\}\\). This is the worst possible case, since it coincides with the tail behavior of a residual busy period; for details see again [7]. For light-tailed service times the situation is reversed. In a fundamental paper, Ramanan & Stolyar [20] showed that FIFO maximizes the decay rate (see Section 2 for a precise definition) of the sojourn-time distribution over all work-conserving service disciplines. Thus, from a large-deviations point of view, FIFO is optimal for light-tailed service-time distributions. Since for any work-conserving service discipline the sojourn time is bounded by a residual busy period, the decay rate of the residual busy period is again the worst possible. Recently, it has been shown that this worst-case decay-rate behavior of the sojourn time is exhibited under LIFO, FB [15], and, under an additional assumption, PS [17].
The present paper shows that a similar result holds for both non-preemptive and preemptive SRPT, under the assumption that the service-time distribution has no mass at its right endpoint. Thus, for many light-tailed service-time distributions, as for example phase-type service times, large sojourn times are much more likely under SRPT than under FIFO. The derivation of this result is based upon a simple probabilistic argument; see Section 4.1. The case where there is mass at the right endpoint of the service-time distribution may be considered to be a curiosity; however, from a theoretical point of view, it actually turns out to be the most interesting case. The associated analysis, carried out in Section 4.2, is based on a relation with a \\(GI/GI/1\\) priority queue. Since we could not find large-deviations results in the literature (an in-depth treatment of the \\(M/G/1\\) priority queue is provided by Abate & Whitt [1]), we analyze this \\(GI/GI/1\\) priority queue in Section 3. Another noteworthy feature of this case is that the resulting decay rate is strictly larger than the one under LIFO, but strictly smaller than under FIFO (with the exception of deterministic service times, for which the FIFO decay rate is attained). A similar result was recently shown in Egorova _et al._[10] for the \\(M/D/1\\) PS queue. However, in general examples of service disciplines that exhibit this \"in-between\" behavior are rare; see Section 5.1 of this paper for an overview. Our results on SRPT suggest that, from a large-deviations point of view, it is not advisable to switch from FIFO to SRPT. However, in Section 6 we show that this suggestion should be handled with care. Specifically, we investigate the decay rate of the _conditional_ sojourn time, i.e., the sojourn time of a customer with service time \\(y\\). We show that there exists a critical service time \\(y^{*}\\) such that SRPT is better than FIFO for service times below \\(y^{*}\\) and worse for service times larger than \\(y^{*}\\). A performance indicator is the fraction of customers with service time exceeding \\(y^{*}\\). We show that this fraction is close to zero for both low and high loads; numerical experiments suggest that this fraction is still very small for moderate values of the load. This paper is organized as follows. Section 2 introduces notation and states some preliminary results. In particular, the decay rates of the workload and busy period are derived in complete generality. Section 3 treats a two-class priority queue with renewal input and investigates the tail behavior of the low-priority waiting time. The results on SRPT are presented in Section 4. Section 5 treats various implications of the results in Sections 3 and 4. First, we compare our results with the decay rates for LIFO and FIFO, and show that the decay rate of the sojourn time under SRPT is strictly in between these two if the service-time distribution has mass at its right endpoint. We then treat the special case of Poisson arrivals; in particular we show that our results for the priority queue agree with those of Abate & Whitt [1]. In addition, we consider the behavior of the decay rates in heavy traffic. Conditional sojourn times are investigated in Section 6. We summarize our results and propose directions for further research in Section 7. ## 2 Preliminaries: workload and busy period In this section we introduce the notation and derive two preliminary results. 
We consider a stationary, work-conserving \\(GI/GI/1\\) queue, with the server working at unit speed. Generic inter-arrival and service times are denoted by \\(A\\) and \\(B\\). To avoid trivialities, we assume that \\(\\mathbb{P}\\{B>A\\}>0\\) (otherwise there would be no delays). Define the system load \\(\\rho=\\mathbb{E}\\{B\\}/\\mathbb{E}\\{A\\}<1\\). Since \\(\\rho<1\\), the workload process is positive recurrent and the busy period \\(P\\) has finite mean. The moment generating function of a random variable \\(X\\) is denoted by \\(\\Phi_{X}(s)=\\mathbb{E}\\{\\mathrm{e}^{sX}\\}\\). Throughout the paper we assume that \\(B\\) is light-tailed, i.e., that \\(\\Phi_{B}(s)\\) is finite in a neighborhood of \\(0\\). Let \\(W\\) be the workload seen by a customer upon arrival in steady state. This workload coincides with the FIFO waiting time. Furthermore, let \\(W^{y}\\) be the steady-state workload at arrival epochs in the \\(GI/GI/1\\) queue with service times \\(B^{y}=BI(B<y)\\). Let \\(P^{y}\\) denote the busy period in such a queue. Our first preliminary result concerns the logarithmic tail asymptotics for \\(W\\). **Proposition 2.1**: _As \\(x\\to\\infty\\), we have that \\(\\log\\mathbb{P}\\{W>x\\}\\sim-\\gamma_{w}x\\), with_ \\[\\gamma_{w}=\\sup\\{s:\\Phi_{A}(-s)\\Phi_{B}(s)\\leq 1\\}. \\tag{2.1}\\] We call \\(\\gamma_{w}\\) the _decay rate_ of \\(W\\). Generally, for any random variable \\(U\\), we call \\(\\gamma_{u}\\) the decay rate of \\(U\\) if for \\(x\\to\\infty\\), \\[\\log\\mathbb{P}\\{U>x\\}=-\\gamma_{u}x+\\mathrm{o}(x).\\] If \\(\\Phi_{A}(-\\gamma_{w})\\Phi_{B}(\\gamma_{w})=1\\), several proofs of Proposition 2.1 are available, see e.g. Asmussen [2], Ganesh _et al._[11] and Glynn & Whitt [12]. We believe that the result in its present generality is known as well, but could not find a reference. For completeness, a short proof is included here. **Proof of Proposition 2.1** The upper bound follows from a famous result of Kingman [14]: \\[\\log\\mathbb{P}\\{W>x\\}\\leq-\\gamma_{w}x.\\] For the lower bound we use a truncation argument. From Theorem XIII.5.3 of [2] (the condition of that theorem is easily seen to be satisfied for bounded service times), it follows that \\[\\log\\mathbb{P}\\{W^{y}>x\\}\\sim-\\gamma_{w}^{y}x,\\qquad x\\to\\infty,\\] where \\(\\gamma_{w}^{y}=\\sup\\{s:\\Phi_{A}(-s)\\Phi_{B^{y}}(s)\\leq 1\\}\\). Since \\(W\\geq_{st}W^{y}\\), this gives \\(\\liminf_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}\\{W>x\\}\\geq-\\gamma_{w}^{y}\\) for every \\(y\\). The decay rates \\(\\gamma_{w}^{y}\\) are decreasing in \\(y\\); denote their limit by \\(\\gamma_{w}^{*}\\). Since \\(\\Phi_{B^{y}}(s)\\) increases to \\(\\Phi_{B}(s)\\) as \\(y\\to\\infty\\), we have \\(\\Phi_{A}(-s)\\Phi_{B}(s)=\\lim_{y\\to\\infty}\\Phi_{A}(-s)\\Phi_{B^{y}}(s)\\leq 1\\) for every \\(s<\\gamma_{w}^{*}\\). This implies that \\(\\gamma_{w}^{*}\\leq\\gamma_{w}\\), so that \\(\\lim_{y\\to\\infty}\\gamma_{w}^{y}=\\gamma_{w}^{*}=\\gamma_{w}\\). This yields the desired lower limit. \\(\\Box\\) We continue by deriving an expression for the decay rate \\(\\gamma_{p}\\) of the busy period \\(P\\). Sufficient conditions for precise asymptotics of \\(\\mathbb{P}\\{P>x\\}\\), which are of the form \\(Cx^{-3/2}\\mathrm{e}^{-\\gamma_{p}x}\\), are given in Palmowski & Rolski [19]. These asymptotics follow from a detailed analysis, involving a change-of-measure argument. We show that logarithmic asymptotics (which are of course implied by precise asymptotics) can be given without any further assumptions. **Proposition 2.2**: _As \\(x\\to\\infty\\), we have \\(\\log\\mathbb{P}\\{P>x\\}\\sim-\\gamma_{p}x,\\) with_ \\[\\gamma_{p}=\\sup_{s\\geq 0}\\{s-\\Psi(s)\\}, \\tag{2.2}\\] _and \\(\\Psi(s)=-\\Phi_{A}^{-1}\\left(\\frac{1}{\\Phi_{B}(s)}\\right)\\)._ **Proof** We first derive an upper bound. Let \\(X(t)\\) be the amount of work offered to the queue in the interval \\([0,t]\\). In Lemma 2.1 of Mandjes & Zwart [17] it is shown that for each \\(s\\geq 0\\), \\[\\Psi(s)=\\lim_{t\\to\\infty}\\frac{1}{t}\\log\\mathbb{E}\\{\\mathrm{e}^{sX(t)}\\}.
\\tag{2.3}\\] Using the Chernoff bound, we have for all \\(s\\geq 0\\), \\[\\mathbb{P}\\{P>t\\}\\leq\\mathbb{P}\\{X(t)>t\\}\\leq\\mathrm{e}^{-st+\\log\\mathbb{E}\\{ \\exp\\{sX(t)\\}\\}}.\\] Consequently, \\[\\limsup_{t\\to\\infty}\\frac{1}{t}\\log\\mathbb{P}\\{P>t\\}\\leq-s+\\limsup_{t\\to\\infty }\\frac{1}{t}\\log\\mathbb{E}\\{\\exp\\{sX(t)\\}\\}=-(s-\\Psi(s)).\\] Minimizing over \\(s\\) yields the upper bound for \\(\\mathbb{P}\\{P>t\\}\\). We now turn to the lower bound, for which we again use a truncation argument. First, note that \\[\\mathbb{P}\\{P>x\\}\\geq\\mathbb{P}\\{P^{y}>x\\}.\\] For truncated service times, the assumptions in [19] for the exact asymptotics (cf. Equation (33) in [19]) are satisfied, and we have, with obvious notation, \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}\\{P>x\\}\\geq\\lim_{x\\to\\infty} \\frac{1}{x}\\log\\mathbb{P}\\{P^{y}>x\\}=-\\sup_{s\\geq 0}\\{s-\\Psi^{y}(s)\\}=- \\gamma_{p}^{y}.\\] So to prove the theorem, it suffices to show that \\(\\gamma_{p}^{y}\\to\\gamma_{p}\\) for \\(y\\to\\infty\\). Define \\(f^{y}(s)=s-\\Psi^{y}(s)\\). It is clear that \\(f^{y}(s)\\to f(s)=s-\\Psi(s)\\) pointwise as \\(y\\to\\infty\\) and that \\(f^{y}(s)\\) is decreasing in \\(y\\). Consequently, we have that the limit of \\(\\gamma_{p}^{y}\\) for \\(y\\to\\infty\\) exists and that \\[\\gamma_{p}^{*}=\\lim_{y\\to\\infty}\\gamma_{p}^{y}=\\lim_{y\\to\\infty}\\sup_{s\\geq 0 }f^{y}(s)\\geq\\sup_{s\\geq 0}f(s)=\\gamma_{p}.\\]It remains to show that the reverse inequality holds. For this, we use an argument similar to one in the proof of Cramers theorem (cf. Dembo & Zeitouni [9], p. 33). Take \\(y_{0}\\) such that \\(\\mathbb{P}\\{B^{y}>A\\}>0\\) for \\(y>y_{0}\\). Then there exist \\(\\delta,\\eta>0\\) such that \\(\\mathbb{P}\\{B^{y}-A\\geq\\delta\\}\\geq\\eta>0\\) for \\(y\\geq y_{0}\\). Hence, for \\(y\\geq y_{0}\\), \\[\\Phi_{B^{y}}(s)\\Phi_{A}(-s)=\\mathbb{E}\\{\\mathrm{e}^{sB^{y}}\\}\\mathbb{E}\\{ \\mathrm{e}^{-sA}\\}=\\mathbb{E}\\{\\mathrm{e}^{s(B^{y}-A)}\\}\\geq\\eta e^{s\\delta}.\\] For \\(s\\) large enough, we now have \\[\\Phi_{A}(-s)\\geq\\frac{1}{\\Phi_{B^{y}}(s)}.\\] Since \\(\\Phi_{A}^{-1}(s)\\) is increasing in \\(s\\), we find that for \\(s\\) and \\(y\\) large enough, \\[s+\\Phi_{A}^{-1}\\Big{(}\\frac{1}{\\Phi_{B^{y}}(s)}\\Big{)}\\leq s+\\Phi_{A}^{-1} \\big{(}\\Phi_{A}(-s)\\big{)}=0.\\] Since \\(\\Phi_{A}^{-1}\\left(1/\\Phi_{B^{y}}(s)\\right)\\) is decreasing in \\(y\\) and is continuous in \\(s\\), we see that for \\(y\\geq y_{0}\\) the level sets \\(L_{y}=\\{s:f^{y}(s)\\geq\\gamma_{p}^{*}\\}\\) are compact. Moreover, since \\(f^{y}(s)\\) is decreasing in \\(y\\), the level sets are nested with respect to \\(y\\). Consequently, the intersection of the level sets \\(L_{y}\\) contains at least one element, say \\(s_{0}\\). By the definition of \\(s_{0}\\), we have \\(f^{y}(s_{0})\\geq\\gamma_{p}^{*}\\) for every \\(y\\). Thus, since \\(f^{y}\\) converges pointwise, \\[\\gamma_{p}=\\sup_{s\\geq 0}f(s)\\geq f(s_{0})=\\lim_{y\\to\\infty}f^{y}(s_{0})\\geq \\gamma_{p}^{*}.\\] We conclude that \\(\\gamma_{p}^{y}\\to\\gamma_{p}\\) as \\(y\\to\\infty\\), which completes the proof. \\(\\Box\\) ## 3 The \\(Gi/gi/1\\) priority queue In this section, we consider the following \\(GI/GI/1\\) two-class priority queue. Customers arrive according to a renewal process with generic inter-arrival time \\(A\\). An arriving customer is of class 1 with probability \\(p\\), in which case he has service time \\(B_{1}\\). Customers of class 2 have service time \\(B_{2}\\). 
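The decay rates \\(\\gamma_{w}\\) and \\(\\gamma_{p}\\) of Propositions 2.1 and 2.2, which drive the analysis of this priority queue, are easy to evaluate numerically: \\(\\gamma_{w}\\) by root finding and \\(\\gamma_{p}\\) by maximizing \\(s-\\Psi(s)\\). The sketch below does this for one assumed example, Erlang-2 inter-arrival times and exponential service times; the distributional choices and parameter values are illustrative and are not taken from the paper.

```python
# Decay rates of Propositions 2.1 and 2.2, evaluated numerically for an assumed example:
# Erlang-2 inter-arrival times (rate nu per stage) and exponential(mu) service times.
import numpy as np
from scipy.optimize import brentq

nu, mu = 4.0, 3.0                        # E[A] = 2/nu = 0.5, E[B] = 1/3, so rho = 2/3
phi_A = lambda s: (nu / (nu - s)) ** 2   # E[e^{sA}], defined for s < nu
phi_B = lambda s: mu / (mu - s)          # E[e^{sB}], defined for s < mu

# gamma_w = sup{ s : phi_A(-s) * phi_B(s) <= 1 }           (Proposition 2.1)
gamma_w = brentq(lambda s: phi_A(-s) * phi_B(s) - 1.0, 1e-9, mu - 1e-6)

# Psi(s) = -phi_A^{-1}( 1 / phi_B(s) ), with phi_A inverted numerically
def psi(s):
    u = 1.0 / phi_B(s)
    return -brentq(lambda t: phi_A(t) - u, -1e4, 0.0)

# gamma_p = sup_{s >= 0} { s - Psi(s) }                    (Proposition 2.2)
grid = np.linspace(1e-6, 0.999 * mu, 4000)
gamma_p = max(s - psi(s) for s in grid)

print("gamma_w =", round(gamma_w, 4), " gamma_p =", round(gamma_p, 4))   # gamma_p < gamma_w
```

For these parameters the script returns roughly \\(\\gamma_{p}\\approx 0.13\\) and \\(\\gamma_{w}\\approx 1.28\\), illustrating the gap \\(\\gamma_{p}<\\gamma_{w}\\) that reappears in the comparison of service disciplines in Section 5.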
Class-1 customers have priority over class-2 customers. We assume that \\(0<p<1\\), and that \\(p\\mathbb{E}\\{B_{1}\\}+(1-p)\\mathbb{E}\\{B_{2}\\}<\\mathbb{E}\\{A\\}\\), which ensures that the priority queue is stable. We are interested in the steady-state waiting time \\(W_{2}\\) of a class-2 customer, that is, the time a class-2 customer has to wait before he enters service for the first time. Note that \\(W_{2}\\) is independent of whether the priority mechanism is preemptive or not. Let \\(N_{1}(t)\\) be the renewal process generated by the arrivals of the class-1 customers, i.e., \\(N_{1}(t)=\\max\\{n:A_{1,1}+\\cdots+A_{1,n}\\leq t\\}\\). Here \\(A_{1,i}\\) is the time between the arrival of the \\((i-1)\\)-st and \\(i\\)-th customer. A generic class-1 inter-arrival time is denoted by \\(A_{1}\\). Note that \\(A_{1}\\) is a geometric sum of "original" inter-arrival times \\(A\\): \\[\\Phi_{A_{1}}(s)=\\mathbb{E}\\{\\mathrm{e}^{sA_{1}}\\}=\\sum_{n=0}^{\\infty}p(1-p)^{n}\\Phi_{A}(s)^{n+1}=\\frac{p\\Phi_{A}(s)}{1-(1-p)\\Phi_{A}(s)}.\\] Define \\[X_{1}(t)=\\sum_{i=1}^{N_{1}(t)}B_{1,i}.\\] Hence, \\(X_{1}(t)\\) is the amount of work of type 1 that has arrived in the system by time \\(t\\). Let \\(P_{1}\\) be a generic busy period of class 1 customers. Finally, let \\(P_{1}(x)\\) be a busy period of class-1 customers with an initial customer of size \\(x\\), so \\[P_{1}(x)\\stackrel{{ d}}{{=}}\\inf\\{t\\geq 0:x+X_{1}(t)\\leq t\\}.\\] Denoting the total workload in the queue at arrivals again by \\(W\\) (cf. Section 2), we have the following fundamental identity: \\[W_{2}\\stackrel{{ d}}{{=}}P_{1}(W), \\tag{3.1}\\] where \\(W\\) and \\(\\{P_{1}(x),x\\geq 0\\}\\) are independent. This identity holds since, using a discrete-time version of PASTA, \\(W\\) is also the workload as seen by an arriving customer of class 2. Set \\[\\Psi_{1}(s)=-\\Phi_{A_{1}}^{-1}\\left(\\frac{1}{\\Phi_{B_{1}}(s)}\\right). \\tag{3.2}\\] The main result of this section is the following. **Theorem 3.1**: _As \\(x\\to\\infty\\), we have \\(\\log\\mathbb{P}\\{W_{2}>x\\}\\sim-\\gamma_{w_{2}}x,\\) with_ \\[\\gamma_{w_{2}}=\\sup_{s\\in[0,\\gamma_{w}]}\\{s-\\Psi_{1}(s)\\}. \\tag{3.3}\\] Before we give a proof of this theorem, we first describe some heuristics, starting from \\(W_{2}\\stackrel{{ d}}{{=}}P_{1}(W)\\). The most likely way for \\(W_{2}\\) to become large (i.e., \\(W_{2}>x\\)) involves a combination of two events: (i) \\(W\\) is of the order \\(ax\\) for some constant \\(a\\geq 0\\); (ii) \\(P_{1}(ax)\\) is of the order \\(x\\). Clearly, there is a trade-off: as \\(a\\) becomes larger, scenario (i) becomes less likely, while scenario (ii) becomes more likely. Thus, we need to find the optimal value of \\(a\\). For this we need to know the large-deviations decay rates associated with events (i) and (ii). The decay rate of event (i) is simply \\(a\\gamma_{w}\\). To obtain the decay rate of event (ii), note that \\[\\mathbb{P}\\{P_{1}(ax)>x\\}\\approx\\mathbb{P}\\{X_{1}(x)>(1-a)x\\}.\\] One can show that the RHS probability has decay rate \\(\\sup_{s\\geq 0}\\{(1-a)s-\\Psi_{1}(s)\\}\\). Thus, the optimal value of \\(a\\), and the decay rate \\(\\gamma_{w_{2}}\\), can be found by optimizing the expression \\[\\inf_{a\\geq 0}\\{a\\gamma_{w}+\\sup_{s\\geq 0}[(1-a)s-\\Psi_{1}(s)]\\}.\\] It is possible to show that the value of this program coincides with \\(\\sup_{s\\in[0,\\gamma_{w}]}\\{s-\\Psi_{1}(s)\\}\\).
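That claim is easy to check numerically. The sketch below evaluates both expressions for one assumed parameter set (Poisson arrivals, exponential class-1 service, deterministic class-2 service); the model and numbers are chosen purely for illustration and are not taken from the paper.

```python
# Check numerically that  inf_{a>=0} { a*gamma_w + sup_{s>=0} [(1-a)s - Psi_1(s)] }
# equals  sup_{s in [0, gamma_w]} { s - Psi_1(s) }  in an assumed example.
import numpy as np
from scipy.optimize import brentq

lam, p, mu1, b2 = 1.0, 0.6, 4.0, 1.2              # assumed rates; the resulting load is 0.63
lam1 = p * lam                                     # class-1 arrival rate

phi_A  = lambda s: lam / (lam - s)                 # Poisson(lam) arrivals, s < lam
phi_B1 = lambda s: mu1 / (mu1 - s)                 # exponential class-1 service, s < mu1
phi_B  = lambda s: p * phi_B1(s) + (1 - p) * np.exp(s * b2)   # service time of an arbitrary arrival

gamma_w = brentq(lambda s: phi_A(-s) * phi_B(s) - 1.0, 1e-9, mu1 - 1e-6)   # Proposition 2.1

# For Poisson arrivals, Psi_1(s) = -Phi_{A_1}^{-1}(1/Phi_{B_1}(s)) reduces to lam1*(Phi_{B_1}(s)-1)
psi1 = lambda s: lam1 * (phi_B1(s) - 1.0)

s_small = np.linspace(0.0, gamma_w, 20001)
closed_form = np.max(s_small - psi1(s_small))      # sup over [0, gamma_w] of s - Psi_1(s)

s_big = np.linspace(0.0, 0.999 * mu1, 20001)
def objective(a):                                  # a*gamma_w + sup_{s>=0} [(1-a)s - Psi_1(s)]
    return a * gamma_w + np.max((1.0 - a) * s_big - psi1(s_big))

programme = min(objective(a) for a in np.linspace(0.0, 1.0, 2001))   # infimum is attained on [0, 1]

print(round(closed_form, 4), round(programme, 4))  # the two numbers agree up to the grid resolution
```

For these parameters both evaluations give roughly \\(0.66\\), and the minimizing \\(a\\) is strictly positive, which corresponds to the second of the two scenarios distinguished next.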
Moreover, the optimal value of \\(a\\) is \\(0\\) if the optimizing argument of \\(s-\\Psi_{1}(s)\\) is strictly less than \\(\\gamma_{w}\\), and it is \\(1-\\Psi_{1}^{\\prime}(\\gamma_{w})\\) if \\(\\sup_{s\\in[0,\\gamma_{w}]}\\{s-\\Psi_{1}(s)\\}=\\gamma_{w}-\\Psi_{1}(\\gamma_{w})\\). In the proof below, we only use these heuristics to \"guess\" the correct value of \\(a\\). Note that the two cases \\(a>0\\) and \\(a=0\\) correspond to two qualitatively different scenarios leading to a large value of \\(W_{2}\\). If \\(a=0\\), then the customer sees a \"normal\" amount of work upon arrival, while \\(a>0\\) results in a workload of the order \\(ax\\) at time \\(0\\). This distinction between two different scenarios is typical in priority queueing, see Abate & Whitt [1] and Mandjes & Van Uitert [16] for more discussion. **Proof** We start with the upper bound. Using the Chernoff bound, we find that for \\(s\\geq 0\\), \\[\\mathbb{P}\\{W_{2}>x\\}=\\mathbb{P}\\{P_{1}(W)>x\\}\\leq\\mathbb{P}\\{W+X_{1}(x)-x>0 \\}\\leq\\mathbb{E}\\{{\\rm e}^{sW}\\}{\\rm e}^{-xs}\\mathbb{E}\\{{\\rm e}^{sX_{1}(x)}\\}.\\] Using (2.3) with \\(X(t)\\) replaced by \\(X_{1}(t)\\), we see that for all \\(s\\in[0,\\gamma_{w})\\), \\[\\limsup_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}\\{W_{2}>x\\}\\leq-[s-\\Psi_{1}(s)].\\] The proof of the upper bound is completed by minimizing over \\(s\\), and noting that \\(\\sup_{s\\in[0,\\gamma_{w})}\\{s-\\Psi_{1}(s)\\}=\\sup_{s\\in[0,\\gamma_{w}]}\\{s-\\Psi_{ 1}(s)\\}\\). We now turn to the lower bound. From the proof of Proposition 2.2, we see that \\(P_{1}\\) has decay rate \\(\\gamma_{p_{1}}=\\sup_{s\\geq 0}\\{s-\\Psi_{1}(s)\\}\\). Let \\(s_{1}\\) be the unique optimizing argument. In addition, let \\(r\\) be the probability that at the arrival of a class-2 customer to the steady state queue at least one customer of type 1 is waiting. It is obvious that \\(r>0\\). Since \\(P_{1}(W)\\geq_{st}P_{1}\\) on this event, we see that \\[\\mathbb{P}\\{P_{1}(W)>x\\}\\geq r\\mathbb{P}\\{P_{1}>x\\},\\] which by (3.1) implies that \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}\\{W_{2}>x\\}\\geq-\\gamma_{p_{1}}.\\] Thus, if \\(s_{1}\\leq\\gamma_{w}\\), we can conclude from this and the upper bound that \\[\\lim_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}\\{W_{2}>x\\}=-\\gamma_{p_{1}}.\\]What remains is to consider the case \\(s_{1}>\\gamma_{w}\\). Since the concave function \\(s-\\Psi_{1}(s)\\) is increasing between \\(0\\) and \\(s_{1}\\), we see that \\(\\sup_{s\\in[0,\\gamma_{w}]}\\{s-\\Psi_{1}(s)\\}=\\gamma_{w}-\\Psi_{1}(\\gamma_{w})\\). Thus, to complete the proof of the theorem, it suffices to show that \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}\\{W_{2}>x\\}\\geq-[\\gamma_{w}-\\Psi _{1}(\\gamma_{w})]. \\tag{3.4}\\] Note that for any \\(a>0\\), \\[\\mathbb{P}\\{W_{2}>x\\}\\geq\\mathbb{P}\\{W>ax\\}\\mathbb{P}\\{P_{1}(ax)>x\\}=\\mathrm{e }^{-a\\gamma_{w}x+\\mathrm{o}(x)}\\mathbb{P}\\{P_{1}(ax)>x\\}. \\tag{3.5}\\] Combining (3.5) and Lemma 3.2 below, we see that by taking \\(a=1-\\Psi_{1}^{\\prime}(\\gamma_{w})\\), \\[\\log\\mathbb{P}\\{W_{2}>x\\}\\geq-a\\gamma_{w}x+\\mathrm{o}(x)+\\log\\mathbb{P}\\{P_{1 }(ax)>x\\}=-x(\\gamma_{w}-\\Psi_{1}(\\gamma_{w}))+\\mathrm{o}(x),\\] which coincides with (3.4), as was required. \\(\\Box\\) We now provide the result that was quoted in the proof above. **Lemma 3.2**: _Set \\(a=1-\\Psi_{1}^{\\prime}(\\gamma_{w})\\). 
If \\(\\gamma_{w}<s_{1}\\), then_ \\[\\log\\mathbb{P}\\{P_{1}(ax)>x\\}\\geq-x(\\gamma_{w}(1-a)-\\Psi_{1}(\\gamma_{w}))+ \\mathrm{o}(x).\\] **Proof** To prove the lemma we use a change-of-measure argument. Define a probability measure \\(\\mathbb{P}_{\ u}\\{\\cdot\\}\\) for \\(\ u\\geq 0\\) such that \\[\\mathbb{P}_{\ u}\\{A_{1,i}\\in\\mathrm{d}x\\} = \\mathrm{e}^{-\\Psi_{1}(\ u)x}\\mathbb{P}\\{A_{1,i}\\in\\mathrm{d}x\\}/ \\Phi_{A_{1}}(-\\Psi_{1}(\ u)),\\hskip 28.452756pti\\geq 1,\\] \\[\\mathbb{P}_{\ u}\\{B_{1,i}\\in\\mathrm{d}x\\} = \\mathrm{e}^{\ u x}\\mathbb{P}\\{B_{1,i}\\in\\mathrm{d}x\\}/\\Phi_{B_{1} }(\ u),\\hskip 28.452756pti\\geq 1.\\] Choose \\(\ u=\ u_{\\varepsilon}\\) such that \\[\\Psi_{1}^{\\prime}(\ u_{\\varepsilon})=\\frac{\\mathbb{E}_{\ u}\\{B_{i}\\}}{ \\mathbb{E}_{\ u}\\{X_{i}\\}}=\\frac{\\Phi_{B_{1}}^{\\prime}(\ u_{\\varepsilon})}{ \\Phi_{B_{1}}(\ u_{\\varepsilon})}\\bigg{/}\\frac{\\Phi_{A_{1}}^{\\prime}(-\\Psi( \ u_{\\varepsilon}))}{\\Phi_{A_{1}}(-\\Psi(\ u_{\\varepsilon}))}=1-a+\\varepsilon, \\hskip 28.452756pt\\varepsilon<a.\\] We denote this probability measure by \\(\\mathbb{P}_{\ u_{\\varepsilon}}\\{\\cdot\\}\\). The drift under this new measure is \\(1-a+\\epsilon\\), making the event \\(\\{P_{1}(ax)>x\\}\\) extremely likely for large \\(x\\). Note that \\(\ u_{0}=\\gamma_{w}\\), by the definition of \\(a\\), and since \\(\\Psi_{1}^{\\prime}(s)\\) is strictly increasing. Let \\(\\mathcal{F}_{n}\\) be the Borel \\(\\sigma\\)-algebra generated by \\(A_{1,1},\\ldots,A_{1,n},B_{1,1},\\ldots,B_{1,n}\\). Define \\(S_{n}^{A_{1}}=A_{1,1}+\\ldots+A_{1,n}\\) and \\(S_{n}^{B_{1}}=B_{1,1}+\\ldots+B_{1,n}\\). Note that \\(\\bar{N}_{1}(x):=N_{1}(x)+1\\) is a stopping time w.r.t. the filtration \\((\\mathcal{F}_{n}).\\) Furthermore, note that the event \\(\\{P_{1}(ax)>x\\}\\) is \\(\\mathcal{F}_{\\bar{N}(x)}\\)-measurable. Finally, note that for every \\(\\varepsilon>0\\) small enough, the process \\(1/M_{n}^{\\varepsilon},n\\geq 1\\), with \\[M_{n}^{\\varepsilon}=\\exp\\{\\Psi_{1}(\ u_{\\varepsilon})S_{n}^{A_{1}}-\ u_{ \\varepsilon}S_{n}^{B_{1}}\\},\\]is a martingale w.r.t. \\({\\cal F}_{n}\\) under \\({\\mathbb{P}}\\{\\cdot\\}\\), since the definition of \\(\\Psi_{1}\\) ensures that \\(\\Phi_{A_{1}}(-\\Psi_{1}(\ u_{\\varepsilon}))\\Phi_{B_{1}}(\ u_{\\varepsilon})=1\\). Thus, we have the following fundamental identity (see for example Theorem XIII.3.2 in [2]): \\[{\\mathbb{P}}\\{P_{1}(ax)>x\\}={\\mathbb{E}}_{\ u_{\\varepsilon}}\\{M^{\\varepsilon}_ {\\bar{N}_{1}(x)}I(P_{1}(ax)>x)\\}.\\] Furthermore, we have for any event \\({\\cal S}\\subseteq{\\cal F}_{\\bar{N}_{1}(x)}\\), \\[{\\mathbb{P}}\\{P_{1}(ax)>x\\}\\geq{\\mathbb{E}}_{\ u_{\\varepsilon}}\\{M^{ \\varepsilon}_{\\bar{N}_{1}(x)}I(P_{1}(ax)>x)I({\\cal S})\\}. 
\\tag{3.6}\\] Take here \\[{\\cal S}\\equiv{\\cal S}_{\\varepsilon}:=\\left\\{S^{B_{1}}_{N_{1}(x)}\\leq(1-a+ \\varepsilon)x\\right\\}.\\] Note that \\(S^{A_{1}}_{N_{1}(x)+1}>x\\) by definition and apply the definition of \\({\\cal S}_{\\varepsilon}\\) to obtain from (3.6) the following lower bound for \\({\\mathbb{P}}\\{P_{1}(ax)>x\\}\\): \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log{\\mathbb{P}}\\{P_{1}(ax)>x\\}\\geq-\ u_{ \\varepsilon}(1-a+\\varepsilon)+\\Psi_{1}(\ u_{\\varepsilon})+\\liminf_{x\\to \\infty}\\frac{1}{x}\\log{\\mathbb{P}}_{\ u_{\\varepsilon}}\\{P_{1}(ax)>x,{\\cal S}_ {\\varepsilon}\\}.\\] By the law of large numbers, we have that \\({\\mathbb{P}}_{\ u_{\\varepsilon}}\\{P_{1}(ax)>x;{\\cal S}_{\\varepsilon}\\}\\) is bounded away from zero, uniformly in \\(x\\) for every \\(\\varepsilon>0\\). Consequently, \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log{\\mathbb{P}}\\{P_{1}(ax)>x\\}\\geq-\ u_{ \\varepsilon}(1-a+\\varepsilon)+\\Psi_{1}(\ u_{\\varepsilon}).\\] Now let \\(\\varepsilon\\downarrow 0\\). This yields \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log{\\mathbb{P}}\\{P_{1}(ax)>x\\}\\geq-\\gamma_{w} (1-a)+\\Psi_{1}(\\gamma_{w}),\\] and the statement of the lemma follows. \\(\\Box\\) From Theorem 3.1 we can deduce the decay rate of the sojourn time \\(V_{2}\\) of class-2 customers. This turns out to be the same for both preemptive and non-preemptive service. **Theorem 3.3**: _As \\(x\\to\\infty\\), we have \\(\\log{\\mathbb{P}}\\{V_{2}>x\\}\\sim-\\gamma_{w_{2}}x\\), where \\(\\gamma_{w_{2}}\\) is as in (3.3)._ **Proof** For the non-preemptive case, we have \\(V_{2}=W_{2}+B_{2}\\), where \\(W_{2}\\) and \\(B_{2}\\) are independent. Since the decay rate of \\(B_{2}\\) is larger than \\(\\gamma_{w_{2}}\\), and since the decay rate of a sum of independent random variables is equal to the smallest decay rate (see for example [15] for a short proof), the result for this case follows immediately. In the preemptive case, we use that \\[V_{2}\\stackrel{{ d}}{{=}}P_{1}(W+B_{2})\\geq_{st}W_{2},\\] which gives us the lower bound. The upper bound follows the same lines of the proof of Theorem 3.1 and noting that \\({\\mathbb{E}}\\{{\\rm e}^{\\gamma_{w}B_{2}}\\}<\\infty\\). \\(\\Box\\)Shortest Remaining Processing Time In this section we present our results on the sojourn time under the SRPT discipline. Define \\(V_{SRPT}\\) as the steady-state sojourn time of a customer under the preemptive SRPT discipline. Further, define the right endpoint \\(x_{B}\\) by \\(x_{B}=\\sup\\{x:\\mathbb{P}\\{B>x\\}>0\\}\\). When it comes to determining the decay rate of \\(V_{SRPT}\\), it turns out to be crucial whether \\[\\mathbb{P}\\{B=x_{B}\\}=0, \\tag{4.1}\\] or not. In the first subsection, we show that if (4.1) holds, then the decay rate of \\(V_{SRPT}\\) is equal to \\(\\gamma_{p}\\), the decay rate of the busy period \\(P\\). If (4.1) does not hold, the situation is more complicated. In that case we use the results of the previous section to show that the decay rate of \\(V_{SRPT}\\) is equal to \\(\\gamma_{w_{2}}\\), where \\(W_{2}\\) is the waiting time in a certain auxiliary priority queue. This is the subject of the second subsection. We also show that for the non-preemptive SRPT discipline the same results hold. ### No mass at the right endpoint In this section we prove the following theorem. **Theorem 4.1**: _Suppose that \\(\\mathbb{P}\\{B=x_{B}\\}=0\\). 
Then \\(\\log\\mathbb{P}\\{V_{SRPT}>x\\}\\sim-\\gamma_{p}x\\) for \\(x\\to\\infty\\), with \\(\\gamma_{p}\\) as in (2.2)._ **Proof** Let \\(V_{SRPT}\\) be the sojourn time of a tagged customer with service time \\(B\\). Since \\(V_{SRPT}\\leq P^{*}\\), where \\(P^{*}\\) is the residual busy period \\(P(W)\\), and since for light tails the decay rate of \\(P^{*}\\) coincides with that of \\(P\\) (this follows from Lemma 3.2 in [1]), we see that \\[\\limsup_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}\\{V_{SRPT}>x\\}\\leq-\\gamma_{p}.\\] Thus, it suffices to show that the corresponding result holds for the lower limit. For this, we construct a lower bound for \\(\\mathbb{P}\\{V_{SRPT}>x\\}\\). Assume first that \\(x_{B}=\\infty\\). Let \\(A\\) be the last inter-arrival time before the tagged customer arrives, \\(B_{0}\\) be the service time of that customer, and \\(a\\) be such that \\(\\mathbb{P}\\{A<a\\}>0\\). Then, for all \\(y\\), \\[\\mathbb{P}\\{V_{SRPT}>x\\} \\geq \\mathbb{P}\\{V_{SRPT}>x;B>y,A<a,B_{0}\\leq y\\}\\] \\[\\geq \\mathbb{P}\\{A<a\\}\\mathbb{P}\\{B>y\\}\\mathbb{P}\\{B_{0}\\leq y\\} \\mathbb{P}\\{P^{y-a}>x\\}.\\] The last inequality holds since conditional on \\(A<a,B>y\\) and \\(B_{0}\\leq y\\), the tagged customer has to wait at least for the sub-busy period generated by the customer that arrived before him,and this sub-busy period is stochastically larger than \\(P^{y-a}\\). Since \\(\\mathbb{P}\\{A<a\\}\\mathbb{P}\\{B>y\\}>0\\), and \\(\\mathbb{P}\\{B_{0}\\leq y\\}>0\\) for \\(y\\) large enough, we have that \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}\\{V_{SRPT}>x\\}\\geq-\\gamma_{p}^{y-a}\\] for \\(y\\) large enough. Letting \\(y\\to x_{B}=\\infty\\), we obtain \\(\\gamma_{p}^{y-a}\\to\\gamma_{p}\\), as in the proof of Proposition 2.2. If \\(x_{B}<\\infty\\), the above proof can be modified in a straightforward way if \\(\\mathbb{P}\\{A<a\\}>0\\) for all \\(a>0\\). However, this may not be the case in general and therefore we have to make a more involved construction. By definition of \\(x_{B}\\), there exists a decreasing sequence \\((\\varepsilon_{n})\\) such that \\(\\mathbb{P}\\{x_{B}-\\varepsilon_{n}<B<x_{B}-\\varepsilon_{n}/2\\}>0\\) for all \\(n\\), and \\(\\varepsilon_{n}\\to 0\\) as \\(n\\to\\infty\\). Since \\(\\mathbb{P}\\{B>A\\}>0\\), we can assume that \\(\\varepsilon_{1}\\) is such that \\(\\mathbb{P}\\{A<x_{B}-2\\varepsilon_{1}\\}>0\\). Let \\(R_{n}\\) be the event that the last \\(\\lfloor x_{B}/\\varepsilon_{n}\\rfloor\\) customers that arrived before the tagged customer had a service time in the interval \\([x_{B}-\\varepsilon_{n},x_{B}-\\varepsilon_{n}/2]\\), and that the last \\(\\lfloor x_{B}/\\varepsilon_{n}\\rfloor\\) inter-arrival times were smaller than \\(x_{B}-2\\varepsilon_{n}\\). By definition of \\(\\varepsilon_{n}\\), we have \\(\\mathbb{P}\\{R_{n}\\}>0\\) for all \\(n\\). Furthermore, by the SRPT priority rule, we see by induction that on the event \\(R_{n}\\), after the \\(k\\)th of the last \\(n\\) inter-arrival times, there is a customer with remaining service time larger than \\(k\\varepsilon_{n}\\). Hence, at the arrival of the tagged customer, there is a customer in the system with remaining service time in the interval \\([x_{B}-\\varepsilon_{n},x_{B}-\\varepsilon_{n}/2]\\). If the tagged customer has service time \\(B>x_{B}-\\varepsilon_{n}/2\\), his sojourn time satisfies \\(V_{SRPT}\\geq P^{x_{B}-\\varepsilon_{n}}\\) on \\(R_{n}\\). 
Consequently, for all \\(n\\in\\mathbb{N}\\), \\[\\mathbb{P}\\{V_{SRPT}>x\\}\\geq\\mathbb{P}\\{R_{n}\\}\\mathbb{P}\\{B>x_{B}-\\varepsilon _{n}/2\\}\\mathbb{P}\\{P^{x_{B}-\\varepsilon_{n}}>x\\}.\\] This implies \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}\\{V_{SRPT}>x\\}\\geq-\\gamma_{p}^{x _{B}-\\varepsilon_{n}}.\\] Letting \\(n\\to\\infty\\), and hence \\(\\varepsilon_{n}\\downarrow 0\\), we get \\(\\gamma_{p}^{x_{B}-\\varepsilon_{n}}\\to\\gamma_{p}\\), as before. This completes the proof. \\(\\Box\\) The property that the decay rate of the sojourn time is equal to that of the busy period is shared by a number of disciplines, see Section 5.1. Further, we remark that for light tails, \\(\\gamma_{p}\\) is the smallest possible decay rate for the sojourn time in the class of all work-conserving disciplines: the sojourn time is bounded above by the residual busy period \\(P^{*}\\), and for light-tailed service times \\(P^{*}\\) has decay rate \\(\\gamma_{p}\\) (cf. Lemma 3.2 in [1]). ### Mass at right endpoint If there is mass at the right endpoint \\(x_{B}\\) of the service-time distribution, then the tail behavior of \\(V_{SRPT}\\) is more complicated. To obtain the decay rate of \\(V_{SRPT}\\) for this case, we identify the SRPTqueue with the following two-class priority queue. Let the customers of class 1 be the customers with service time strictly less than \\(x_{B}\\). Then \\(B_{2}=x_{B}\\) and \\(B_{1}\\) is such that \\[\\mathbb{P}\\{B_{1}\\leq x\\}=\\mathbb{P}\\{B\\leq x\\mid B<x_{B}\\},\\qquad x\\geq 0. \\tag{4.2}\\] **Theorem 4.2**: _Suppose that \\(\\mathbb{P}\\{B=x_{B}\\}>0\\). Then \\(\\log\\mathbb{P}\\{V_{SRPT}>x\\}\\sim-\\gamma_{v}x\\) for \\(x\\to\\infty\\), with_ \\[\\gamma_{v}=\\sup_{s\\in[0,\\gamma_{w}]}\\{s-\\Psi_{1}(s)\\}, \\tag{4.3}\\] _where \\(\\Psi_{1}\\) is as in (3.2), and \\(B_{1}\\) is as in (4.2)._ **Proof** First, note that if \\(q=\\mathbb{P}\\{B=x_{B}\\}=1\\), we have a \\(G/D/1\\) SRPT queue, which has the same dynamics as a FIFO queue. Indeed we obtain \\(\\Psi_{1}\\equiv 0\\), implying \\(\\gamma_{v}=\\gamma_{w}\\), cf. Proposition 2.1. Assume therefore that \\(0<q<1\\), let \\(V_{SRPT}\\) be the sojourn time of a tagged customer with service time \\(B\\), and write \\[\\mathbb{P}\\{V_{SRPT}>x\\}=q\\mathbb{P}\\{V_{SRPT}>x\\mid B=x_{B}\\}+(1-q)\\mathbb{P }\\{V_{SRPT}>x\\mid B<x_{B}\\}.\\] From the nature of the SRPT discipline, or a simple coupling argument, it is obvious that \\[\\mathbb{P}\\{V_{SRPT}>x\\mid B<x_{B}\\}\\leq\\mathbb{P}\\{V_{SRPT}>x\\mid B=x_{B}\\}.\\] Therefore, it suffices to consider the tail behavior of \\(\\bar{V}_{SRPT}\\), where \\[\\mathbb{P}\\{\\bar{V}_{SRPT}\\leq x\\}=\\mathbb{P}\\{V_{SRPT}\\leq x\\mid B=x_{B}\\}.\\] First, we note that \\(\\bar{V}_{SRPT}\\) is bounded from below by the time it takes until our tagged customer receives service for the first time. A crucial observation is that this period coincides with the low priority waiting-time \\(W_{2}\\) defined in Section 3. Second, note that \\(\\bar{V}_{SRPT}\\) is upper bounded by the sojourn time \\(V_{2}\\) of a class-2 customer in the above priority queue. Hence, we have \\(W_{2}\\leq_{st}\\bar{V}_{SRPT}\\leq_{st}V_{2}\\). Further, \\(V_{2}\\) satisfies \\(V_{2}\\stackrel{{ d}}{{=}}P_{1}(W+x_{B})\\) for preemptive service; for non-preemptive service, we have \\(V_{2}\\stackrel{{ d}}{{=}}P_{1}(W)+x_{B}\\). 
Since the logarithmic asymptotics of \\(W+x_{B}\\) coincide with those of \\(W\\), we can mimic the proof of Theorem 3.1 to see that in both cases the decay rate of \\(V_{2}\\) coincides with that of \\(W_{2}\\). Hence the decay rate of \\(\\bar{V}_{SRPT}\\) is given by (3.3), and the proof is completed. \\(\\Box\\) The intuition of how \\(V_{SRPT}\\) becomes large is the same as that of \\(W_{2}\\) in Section 3. In Section 5.1 below, we show that if there is mass in the endpoint \\(x_{B}\\), then the decay rate of the sojourn time lies strictly between the maximal value (obtained for FIFO) and the minimal value (LIFO). Complements In the previous two sections we have derived expressions for the decay rates \\(\\gamma_{w_{2}}\\) and \\(\\gamma_{v}\\). In this section we derive some properties of these decay rates. Specifically, in Section 5.1 we compare \\(\\gamma_{w_{2}}\\) and \\(\\gamma_{v}\\) with \\(\\gamma_{w}\\) and \\(\\gamma_{p}\\). We show that for \\(q=\\mathbb{P}\\{B=x_{B}\\}\\in(0,1)\\), we always have \\(\\gamma_{p}<\\gamma_{w_{2}}<\\gamma_{w}\\). Consequently, if \\(q\\in(0,1)\\), we also find that \\(\\gamma_{v}=\\gamma_{v}(q)\\in(\\gamma_{p},\\gamma_{w})\\). As explained in the introduction, this is a non-standard result. We also indicate that \\(\\gamma_{v}\\) can take any value between \\(\\gamma_{p}\\) and \\(\\gamma_{w}\\), depending on the value of \\(q\\). Further, in Section 5.2 we specialize our expression of \\(\\gamma_{w_{2}}\\) to the case of Poisson arrivals. For priority queues, a quite involved expression for the decay rate was given in [1]. We show that this expression can be simplified, and that it coincides with our expression of \\(\\gamma_{w_{2}}\\). Finally, in Section 5.3, we derive heavy-traffic approximations for \\(\\gamma_{w_{2}}\\). ### Comparison with other service disciplines In this subsection, we compare the decay rates \\(\\gamma_{w_{2}}\\) and \\(\\gamma_{v}\\) with the decay rate of the sojourn time under FIFO and LIFO, which respectively equal \\(\\gamma_{w}\\) and \\(\\gamma_{p}\\). We first show that for the priority queue described in Section 3, the decay rate of \\(W_{2}\\) is different from those of \\(P\\) and \\(W\\). **Proposition 5.1**: _Assume \\(0<p<1\\). Then \\(\\gamma_{p}<\\gamma_{w_{2}}<\\gamma_{w}\\)._ **Proof** Since \\(\\Psi_{1}(s)>0\\) for \\(s>0\\), we have by Theorem 3.1 that \\[\\gamma_{w_{2}}=\\sup_{s\\in[0,\\gamma_{w}]}\\{s-\\Psi_{1}(s)\\}<\\sup_{s\\in[0,\\gamma_ {w}]}s=\\gamma_{w}.\\] To prove the inequality \\(\\gamma_{w_{2}}>\\gamma_{p}\\), we provide a different construction of the function \\(\\Psi_{1}(s)\\). Let \\(B_{p}\\) be a service time which is equal to \\(B_{1}\\) with probability \\(p\\) and \\(0\\) with probability \\(1-p\\). It is clear that \\(\\Phi_{B_{p}}(s)<\\Phi_{B}(s)\\). The amount of work \\(X_{1}(t)\\) generated by class-1 customers between time \\(0\\) and \\(t\\) is the same in distribution as the amount of work generated by the arrival process with inter-arrival times \\(A\\) and service times \\(B_{p}\\). We thus get that \\(\\Psi_{1}(s)=-\\Phi_{A}^{-1}(1/\\Phi_{B_{p}}(s))\\). Since \\(\\Phi_{A}(s)\\) is strictly increasing in \\(s\\), so is its inverse \\(\\Phi_{A}^{-1}(s)\\). Combining this with \\(\\Phi_{B_{p}}(s)<\\Phi_{B}(s)\\) leads to the conclusion that \\(\\Psi_{1}(s)<\\Psi(s)\\). 
Recall that the residual busy period \\(P^{*}\\) satisfies \\(P^{*}\\stackrel{{ d}}{{=}}P(W)\\); its decay rate is given by \\(\\sup_{s\\in[0,\\gamma_{w}]}\\{s-\\Psi(s)\\}\\), as can be seen by mimicking the proof of Theorem 3.1. Hence, \\[\\gamma_{w_{2}}=\\sup_{s\\in[0,\\gamma_{w}]}\\{s-\\Psi_{1}(s)\\}>\\sup_{s\\in[0,\\gamma_ {w}]}\\{s-\\Psi(s)\\}=\\gamma_{p^{*}}. \\tag{5.1}\\]The proof is completed by recalling that for light-tailed service times, \\(\\gamma_{p}=\\gamma_{p^{*}}\\). \\(\\Box\\) From Theorem 4.2 and Proposition 5.1 we conclude a similar result for the SRPT discipline. **Corollary 5.2**: _If \\(0<\\mathbb{P}\\{B=x_{B}\\}<1\\), then \\(\\gamma_{p}<\\gamma_{v}<\\gamma_{w}\\)._ Hence, if there is mass in the endpoint \\(x_{B}\\), then the decay rate of the sojourn time under SRPT lies strictly between those under LIFO and FIFO. The following consequence of Theorem 4.2 indicates that in some sense, all values between those of LIFO and FIFO are assumed. Let \\(F_{q}\\) be the mixture of a distribution bounded by \\(c\\), and a distribution with all mass in \\(c\\), such that \\(\\mathbb{P}\\{B=c\\}=q\\). Assume that \\(c<\\mathbb{E}\\{A\\}\\), and let \\(\\gamma_{v}(q)\\) denote the decay rate of the sojourn time in a queue with service-time distribution \\(F_{q}\\). **Proposition 5.3**: _The decay rate \\(\\gamma_{v}(q)\\) is continuous in \\(q\\). In particular, it increases from \\(\\gamma_{p}(0)\\) to \\(\\gamma_{w}(1)\\), and assumes all values in between._ **Proof** By Theorem 4.2, it is enough to show that \\(\\gamma_{w}(q)\\) and \\(\\Psi_{1}(s)\\) are continuous in \\(q\\). Since \\(\\Phi_{B_{1}}\\) is constant in \\(q\\), and \\(\\Phi_{A_{1}}\\) is continuous in \\(q\\), also \\(\\Psi_{1}(s)\\) is continuous in \\(q\\). Furthermore, since \\(\\Phi_{A}\\) is constant in \\(q\\), \\(\\Phi_{B}\\) is continuous in \\(q\\), and \\(\\gamma_{w}(q)\\) is finite for all \\(q\\), Proposition 2.2 implies that \\(\\gamma_{w}(p)\\) is continuous in \\(q\\), and the proof is completed. \\(\\Box\\) Table 1 below shows the decay rates of the sojourn time that are known in the literature. It turns out that the property in Corollary 5.2 is quite special, since almost all other known decay rates are either minimal or maximal. ### Poisson arrivals Consider the priority queue of Section 3, with the additional assumption that \\(A\\) has an exponential distribution with rate \\(\\lambda\\). Letting \\(\\lambda_{1}=p\\lambda\\) and \\(\\lambda_{2}=(1-p)\\lambda\\), we get that \\(\\Psi_{1}(s)=\\lambda_{1}(\\Phi_{B_{1}}(s)-1)\\). Thus, we have \\[\\gamma_{w_{2}}=\\sup_{s\\in[0,\\gamma_{w}]}\\{s-\\lambda_{1}(\\Phi_{B_{1}}(s)-1)\\}. \\tag{5.2}\\] Suppose that \\(1-\\lambda_{1}\\Phi^{\\prime}_{B_{1}}(\\gamma_{w})>0\\). Then the maximum value is attained in \\(\\gamma_{w}\\) and we have \\[\\gamma_{w_{2}}=\\gamma_{w}-\\lambda_{1}(\\Phi_{B_{1}}(\\gamma_{w})-1). \\tag{5.3}\\]This expression is rather explicit, as \\(\\gamma_{w}\\) is the positive solution of the equation \\[\\gamma_{w}=\\lambda(\\Phi_{B}(\\gamma_{w})-1). \\tag{5.4}\\] The goal of this subsection is to show that in the case of Poisson arrivals, our expression (5.3) coincides with the expression of \\(\\gamma_{w_{2}}\\) given by Abate & Whitt [1]. Assuming that \\(\\mathbb{E}\\{B_{1}\\}=1\\), it is shown in [1], p. 
18, that \\(-\\gamma_{w_{2}}\\) is the solution of \\(\\hat{f}(s)=1/\\rho\\), with \\[\\hat{f}(s)=\\frac{\\rho_{1}}{\\rho_{1}+\\rho_{2}}\\hat{h}_{0}^{(1)}(s)+\\frac{\\rho_ {2}}{\\rho_{1}+\\rho_{2}}\\hat{g}_{2e}(z_{1}(s)),\\qquad z_{1}(s)=s+\\lambda_{1}- \\lambda_{1}\\hat{b}_{1}(s),\\] \\[\\hat{h}_{0}^{(1)}(s)=\\frac{1-\\hat{b}_{1}(s)}{s+\\rho_{1}-\\rho_{1}\\hat{b}_{1}(s) }=\\frac{1-\\hat{b}_{1}(s)}{z_{1}(s)},\\qquad\\hat{g}_{2e}(s)=\\frac{1-\\hat{g}_{2}( s)}{sg_{21}},\\] where \\(\\hat{b}_{1}(s)\\) is the LST of the \\(M/G/1\\) busy period, \\(\\hat{g}_{2}(s)=\\Phi_{B_{2}}(-s)\\) and \\(g_{21}=\\mathbb{E}\\{B_{2}\\}\\). Our expression of \\(\\gamma_{w_{2}}\\) seems preferable, although we hasten to add that the form provided by [1] is more convenient when considering the more complicated task of obtaining precise asymptotics, as is done in [1]. We now simplify the description of \\(\\gamma_{w_{2}}\\) in [1]. Since \\(\\rho=\\rho_{1}+\\rho_{2}\\), we have \\[\\frac{1}{\\rho_{1}+\\rho_{2}}=\\frac{1}{\\rho}=\\hat{f}(s)=\\frac{\\rho_{1}}{\\rho_{1} +\\rho_{2}}\\frac{1-\\hat{b}_{1}(s)}{z_{1}(s)}+\\frac{\\rho_{2}}{\\rho_{1}+\\rho_{2}} \\frac{1-\\hat{g}_{2}(z_{1}(s))}{z_{1}(s)\\mathbb{E}\\{B_{2}\\}}.\\] Hence, \\[z_{1}(s)=\\rho_{1}[1-\\hat{b}_{1}(s)]+\\frac{\\rho_{2}}{\\mathbb{E}\\{B_{2}\\}}[1- \\hat{g}_{2}(z_{1}(s))].\\] Consequently, \\[s=\\lambda_{2}[1-\\hat{g}_{2}(z_{1}(s))]=\\lambda_{2}(1-\\Phi_{B_{2}}(-z_{1}(s)).\\] \\begin{table} \\begin{tabular}{|l|l|l|l|} \\hline decay rate & discipline & condition & queue \\\\ \\hline \\hline \\(\\gamma_{p}\\) (minimal), (2.1) & LCFS [19] & & \\(GI/GI/1\\) \\\\ & FB [15] & & \\(M/GI/1\\) \\\\ & PS [17] & \\(\\forall c>0:\\log\\mathbb{P}\\{B>c\\log x\\}=o(x)\\) & \\(GI/GI/1\\) \\\\ & ROS [17] & & \\(M/M/1\\) \\\\ & SRPT here & \\(\\mathbb{P}\\{B=x_{B}\\}=0\\) & \\(GI/GI/1\\) \\\\ \\hline \\(\\gamma\\) (in between) & PS [10] & & \\(M/D/1\\) \\\\ \\(\\gamma_{v}\\) (in between), (4.3) & SRPT here & \\(0<\\mathbb{P}\\{B=x_{B}\\}<1\\) & \\(GI/GI/1\\) \\\\ \\hline \\(\\gamma_{w}\\) (maximal), (2.2) & FCFS [20] & & \\(GI/GI/1\\) \\\\ & SRPT & \\(\\mathbb{P}\\{B=x_{B}\\}=1\\) & \\(GI/D/1\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: _The decay rate of the sojourn time under several disciplines for light-tailed service times._Since \\(\\lambda\\Phi_{B}(s)=\\lambda_{1}\\Phi_{B_{1}}(s)+\\lambda_{2}\\Phi_{B_{2}}(s)\\), we can rewrite this into \\[s+\\lambda_{1}[1-\\Phi_{B_{1}}(-z_{1}(s))]=\\lambda[1-\\Phi_{B}(-z_{1}(s))]. \\tag{5.5}\\] Since the LST of the busy period satisfies the fixed point equation \\[\\hat{b}_{1}(s)=\\Phi_{B_{1}}(-z_{1}(s)), \\tag{5.6}\\] we can rewrite (5.5) as \\[s+\\lambda_{1}(1-\\hat{b}_{1}(s))=\\lambda[1-\\Phi_{B}(-z_{1}(s))],\\] and thus, using the definition of \\(z_{1}(s)\\), \\[z_{1}(s)=\\lambda[1-\\Phi_{B}(-z_{1}(s))].\\] Using the definition of \\(\\gamma_{w}\\) in (5.4), we see that \\(\\gamma_{w_{2}}\\) is the solution of \\(\\gamma_{w}=-z_{1}(s).\\) We now give an alternative expression for \\(-z_{1}(s)\\). An alternative expression for the busy period transform was found by Rosenkrantz [21]: defining \\(\\phi(s)=\\lambda_{1}(1-\\Phi_{B_{1}}(-s))-s\\), it holds that \\[\\hat{b}_{1}(s)=\\Phi_{B_{1}}(-\\phi^{-1}(s)). \\tag{5.7}\\] Since \\(\\Phi_{B_{1}}(s)\\) is strictly increasing, it follows from (5.6) and (5.7) that \\(z_{1}(s)=\\phi^{-1}(s)\\). 
Hence, \\(\\gamma_{w_{2}}\\) is the solution of \\(\\gamma_{w}=-\\phi^{-1}(s)\\), and thus we obtain \\[\\gamma_{w_{2}}=\\phi(-\\gamma_{w})=\\gamma_{w}-\\lambda_{1}(\\Phi_{B_{1}}(\\gamma_{w })-1),\\] which is indeed equal to our expression (5.3). To conclude this section, we remark that for the M/G/1 queue, the decay rate \\(\\gamma_{v}\\) can take on a simple form. Suppose that \\(1-\\lambda_{1}\\Phi_{B_{1}}^{\\prime}(\\gamma_{w})>0\\), and that \\(0<\\mathbb{P}\\{B=x_{B}\\}<1\\). Then by Theorem 4.2 and the expression for \\(\\gamma_{w}\\) given in (5.4), we have \\[\\gamma_{v} =\\gamma_{w}-\\lambda_{1}(\\Phi_{B_{1}}(\\gamma_{w})-1)=\\lambda(\\Phi _{B}(\\gamma_{w})-1)-\\lambda_{1}(\\Phi_{B_{1}}(\\gamma_{w})-1)\\] \\[=\\lambda_{2}(\\Phi_{B_{2}}(\\gamma_{w})-1)=\\lambda\\mathbb{P}\\{B=x_ {B}\\}(e^{x_{B}\\gamma_{w}}-1).\\] ### Heavy traffic We now examine the behavior of the decay rate \\(\\gamma_{v}\\) of the SRPT sojourn time in heavy traffic. The aim of this section is to show that the behavior of this decay rate critically depends upon whether \\(\\mathbb{P}\\{B=x_{B}\\}>0\\) or not. If \\(\\mathbb{P}\\{B=x_{B}\\}=0\\), then \\(\\gamma_{v}=\\gamma_{p}\\) by Theorem 4.1. The results in Section4.2 of [17] then imply that \\(\\gamma_{v}\\sim C(1-\\rho)^{2}\\) for some constant \\(C\\). We now show that a fundamentally different behavior applies if \\(\\mathbb{P}\\{B=x_{B}\\}>0\\). Since, in this case, we have a relationship with the \\(GI/GI/1\\) priority queue, we consider first the setting of Section 3. We let the service time \\(B_{2}\\) increase in such a way that \\(\\rho\\to 1\\). Specifically, we consider a sequence of systems indexed by \\(r\\), such that \\(p\\), \\(A_{1}\\), \\(A_{2}\\) and \\(B_{1}\\) are all fixed, and that \\(B_{2}=B_{2}(r)\\) is such that the traffic load satisfies \\(\\rho_{r}=1-1/r\\). Let \\(\\gamma_{w}(\\rho_{r})\\) denote the decay rate of the workload in such a queue. If we let \\(\\sigma_{A}^{2}<\\infty\\) be the variance of \\(A\\) and assume that the variance of \\(B(r)=B_{1}+B_{2}(r)\\) converges to \\(\\sigma_{B}^{2}\\), then it holds that for \\(r\\to\\infty\\) (cf. Corollary 3 of [12]), \\[\\gamma_{w}(\\rho_{r})\\sim K(1-\\rho_{r}), \\tag{5.8}\\] with \\(K=2/(\\sigma_{A}^{2}+\\sigma_{B}^{2})\\). In particular, \\(\\gamma_{w}(\\rho_{r})\\downarrow 0\\). Consequently, if \\(r\\) is large enough, we always have \\(\\gamma_{w_{2}}(\\rho_{r})=\\gamma_{w}(\\rho_{r})-\\Psi_{1}(\\gamma_{w}(\\rho_{r}))\\) by Theorem 3.1. Since \\(\\Psi_{1}(s)\\sim\\rho(1)s\\) as \\(s\\downarrow 0\\), where \\(\\rho(1)\\) is the load in the high priority queue, we obtain the following heavy-traffic result for \\(\\gamma_{w_{2}}\\). **Proposition 5.4**: _For \\(\\rho\\to 1\\) as described above, we have_ \\[\\gamma_{w_{2}}\\sim K(1-\\rho(1))(1-\\rho).\\] Thus, also \\(\\gamma_{v}\\) is of the order \\((1-\\rho)\\) if \\(\\mathbb{P}\\{B=x_{B}\\}>0\\). This behavior is notably different from the \\((1-\\rho)^{2}\\) behavior of \\(\\gamma_{p}\\). ## 6 Conditional sojourn times Our results in Section 4 and 5 show that the decay rate \\(\\gamma_{v}\\) for SRPT is smaller than \\(\\gamma_{w}\\), which is the decay rate of the waiting (and sojourn) time under FIFO. Thus, one could say that according to this performance measure, SRPT is worse than FIFO. The reason that the sojourn-time decay rate under SRPT is small is apparent when taking a closer look at the proof in Section 4.1: the sojourn time of a customer with a (very) large service time looks like a residual busy period. 
However, smaller customers may have a much shorter sojourn time. In fact, for the _conditional_ sojourn time \\(V_{SRPT}(y)=[V_{SRPT}\\mid B=y]\\) under the preemptive SRPT discipline, the following proposition holds. **Proposition 6.1**: _If \\(\\mathbb{P}\\{B=y\\}=0\\), then \\(\\log\\mathbb{P}\\{V_{SRPT}(y)>x\\}\\sim-\\gamma_{p}^{y}x\\) as \\(x\\to\\infty\\)._ **Proof** For the lower bound, we remark that \\(V_{SRPT}(y)\\) is stochastically larger than the residual busy period \\(P^{*y}\\) in the queue with service time \\(B^{y}\\). This residual busy period has decay rate \\(\\gamma_{p}^{y}\\). For the upper bound, we consider an alternative queue with generic service time \\(B^{y}\\), stationary workload at arrival instants \\(W^{y}\\) and busy period \\(P^{y}\\). Now observe that in the original queue, at any point in time, at most one customer with original service time larger than \\(y\\) has remaining service time smaller than \\(y\\). Hence, we can bound \\[V_{SRPT}(y)\\leq_{st}P^{y}(W^{y}+y+y),\\] where \\(P^{y}(x)\\) is a busy period in the alternative queue starting with an exceptional customer of length \\(x\\). Applying the Chernoff bound, and arguing like in the proof of Proposition 2.2, we find \\[\\limsup_{t\\to\\infty}\\frac{1}{t}\\log\\mathbb{P}\\{V_{SRPT}(y)>t\\}\\leq-\\sup_{s\\in[0,\\gamma_{w}^{y}]}\\{s-\\Psi^{y}(s)\\}=-\\gamma_{p*}^{y},\\] where the last equality follows from (5.1). The upper bound follows from noting that \\(P^{y}\\) and \\(P^{*y}\\) have the same decay rate, and the proof is completed. \\(\\Box\\) Suppose that \\(B\\) has a density, so that \\(\\gamma_{p}^{y}\\) is continuous in \\(y\\). Then the function \\(\\gamma_{p}^{y}\\) strictly decreases in \\(y\\), and converges to \\(\\gamma_{p}<\\gamma_{w}\\) as \\(y\\to\\infty\\). Further, \\(\\gamma_{p}^{y}\\to\\infty\\) as \\(y\\to 0\\), since \\(\\Psi^{y}(s)\\to 0\\) as \\(y\\to 0\\). Hence, there exists a critical value \\(y^{*}\\) for which \\(\\gamma_{p}^{y^{*}}=\\gamma_{w}\\). Thus, when the decay rate is used as a performance measure, one could say that FIFO is a better discipline than SRPT for customers of size larger than \\(y^{*}\\); the fraction of customers that suffer from a change from FIFO to SRPT is \\(\\mathbb{P}\\{B>y^{*}\\}\\). We now describe the behavior of \\(y^{*}\\) as a function of \\(\\rho\\) for \\(\\rho\\to 1\\) and \\(\\rho\\to 0\\). **Proposition 6.2**: _Let \\(y^{*}=\\sup\\{y:\\gamma_{p}^{y}\\geq\\gamma_{w}\\}.\\) If \\(\\rho\\to 1\\), then \\(y^{*}\\to x_{B}\\)._ **Proof** Let \\(y<x_{B}\\) be fixed, and let \\(\\gamma_{p}^{y}(\\rho)\\) be the decay rate of \\(P^{y}\\) as a function of \\(\\rho\\), and define \\(\\gamma_{w}(\\rho)\\) similarly. Since \\(P^{y}\\) is a busy period in a stable queue, even when \\(\\rho=1\\) in the original queue, we have \\(\\gamma_{p}^{y}(\\rho)\\geq\\gamma_{p}^{y}(1)>0\\) for all \\(\\rho<1\\). By (5.8), we have for \\(\\rho\\) large enough, \\[\\gamma_{w}(\\rho)<\\gamma_{p}^{y}(1)\\leq\\gamma_{p}^{y}(\\rho).\\] Hence, for \\(\\rho\\) large enough, \\(y^{*}\\geq y\\). Since \\(y<x_{B}\\) was arbitrary, the proof is completed. \\(\\Box\\) **Proposition 6.3**: _If the service time \\(B\\) has decay rate \\(\\gamma_{b}\\in(0,\\infty)\\), then \\(y^{*}\\to\\infty\\) for \\(\\rho\\to 0\\)._ **Proof** Let \\(\\rho\\to 0\\) by setting the generic inter-arrival time equal to \\(rA\\) and letting \\(r\\to\\infty\\). Since \\(\\Phi_{rA}(x)\\to 0\\) for all \\(x<0\\), we have \\(\\Phi_{rA}^{-1}(x)\\to 0\\) for all \\(0<x<1\\).
Hence, for all \\(y\\), \\[\\gamma_{p}^{y}(\\rho)=\\sup_{s\\geq 0}\\Big{\\{}s+\\Phi_{rA}^{-1}\\Big{(}\\frac{1}{ \\Phi_{B^{y}}(s)}\\Big{)}\\Big{\\}}\\to\\infty,\\qquad\\rho\\to 0. \\tag{6.9}\\]The workload does not depend on the discipline as long as the discipline is work-conserving. Further, conditioned on it being positive, the workload under FIFO is stochastically larger than a residual service time, which for light-tailed distributions has the same decay rate as \\(B\\). Hence, we have \\(\\gamma_{w}(\\rho)\\leq\\gamma_{b}<\\infty\\) for all \\(\\rho\\). It then follows from (6.9) that \\(\\gamma_{w}^{y}(\\rho)>\\gamma_{w}(\\rho)\\) eventually as \\(\\rho\\to 0\\) for all \\(y\\), and we can conclude that \\(y^{*}\\to\\infty\\) as \\(\\rho\\to 0\\). \\(\\Box\\) ### Numerical example As an illustration, we compute \\(y^{*}\\) and \\(\\mathbb{P}\\{B>y^{*}\\}\\) for the \\(M/M/1\\) queue with \\(\\mathbb{E}\\{B\\}=1\\) and arrival rate \\(\\lambda\\) (so that \\(\\rho=\\lambda\\)). Figure 1 shows the probabilities \\(\\mathbb{P}\\{B>y^{*}\\}\\) for various values of \\(\\rho\\). From the figure, it is clear that \\(y^{*}\\) becomes very large under low and high loads. But even for moderate values of \\(\\rho\\) it is clear that about 85 percent of the customers would prefer (from a large-deviations point of view) SRPT over FIFO. ## 7 Conclusions To conclude the paper, we summarize our results. For the \\(GI/GI/1\\) queue with light-tailed service times, we obtained expressions for the logarithmic decay rate of the tail of the workload, the busy period, the waiting time and sojourn time of low-priority customers in a priority queue, and the sojourn time under the (preemptive and non-preemptive) SRPT discipline. For the sojourn time under SRPT, it turns out that there are three different regimes, namely for service times with no mass, with some mass and with all mass in the endpoint of the service-time distribution. In the first case the decay rate is minimal among all work-conserving disciplines, in the last case it is maximal, but if there is some mass in the endpoint, then the decay rate lies strictly in between these two. The large-deviations results for the unconditional sojourn times suggest that a switch from FIFO to SRPT is not advisable. The results in Section 6 show that this suggestion is only valid for very large service times: in the \\(M/M/1\\) queue, at least about 85 percent of the customers would benefit from a change from FIFO to SRPT. There are several topics that are interesting for further research. First of all, large deviations for the queue length under SRPT are not well understood. A second problem is to obtain precise asymptotics for the tail behavior of the low-priority waiting time, or perhaps even the sojourn time. Finally, it would be interesting to compare conditional sojourn times of FIFO and PS from a large-deviations point of view. It is not clear to us which discipline performs better, and what the influence of the job size might be. **Acknowledgments** We would like to thank Marko Boon for helping us out with the numerics in Section 6.1, and Ton Dieker and Michel Mandjes for several useful comments. ## References * [1] Abate, J., Whitt, W. (1997). Asymptotics for \\(M/G/1\\) low-priority waiting-time tail probabilities. _Queueing Systems_**25**, 173-233. * [2] Asmussen, S. (2003). _Applied Probability and Queues_. Second edition. Springer. * [3] Baccelli, F., Bremaud, P. (2003). _Elements of Queueing Theory._ Third edition. Springer. 
* [4] Bansal, N., Harchol-Balter, M. (2001). Analysis of SRPT scheduling: investigating unfairness. _Proceedings of ACM Sigmetrics_, 279-290. * [5] Bansal, N. (2004). On the average sojourn time under \\(M/M/1\\) SRPT. _Operations Research Letters_**33**, 195-200. * [6] Bansal, N., Gamarnik, D. (2005). Handling load with less stress. Submitted for publication. * [7] Borst, S.C., Boxma, O.J., Nunez-Queija, R., Zwart, A.P. (2003) The impact of the service discipline on delay asymptotics. _Performance Evaluation_**54**, 177-206. * [8] Cox, D., Smith, W. (1961). _Queues._ Methuen. * [9] Dembo, A., Zeitouni, O. (1998). _Large Deviations Techniques and Applications._ Springer. * [10] Egorova, R., Zwart, B., Boxma, O.J. (2005). Sojourn time tails in the \\(M/D/1\\) Processor Sharing queue. Report PNA-R05xx, CWI, Amsterdam. Submitted for publication. * [11] Ganesh, A., O'Connell, N., Wischik, D. (2003). _Big Queues._ Springer. * [12] Glynn, P., Whitt, W. (1994). Logarithmic asymptotics for steady-state tail probabilities in a single-server queue. _Journal of Applied Probability_**31A**, 131-156. * [13] Harchol-Balter, M., Schroeder, B., Bansal, N., Agrawal, M. (2003). Sizebased scheduling to improve web performance. _ACM Transactions on Computer Systems_**21**, 207-233. * [14] Kingman, J.F.C. (1964). A martingale inequality in the theory of queues. _Proceedings of the Cambridge Philosophical Society_**59**, 359-361. * [15] Mandjes, M., Nuyens, M. (2005). Sojourn times in the \\(M/G/1\\) FB queue with light-tailed service times. _Probability in the Engineering and Informational Sciences_**19**, 351-361. * [16] Mandjes, M., Van Uitert, M. (2005). Sample path large deviations for tandem and priority queues with Gaussian input. _Annals of Applied Probability_**15**, 1193-1226. * [17] Mandjes, M., Zwart, B. (2004). Large deviations for sojourn times in processor sharing queues. _Queueing Systems_, under revision. * [18] Nunez-Queija, R. (2000). _Processor-Sharing Models for Integrated-Service Networks._ PhD thesis, Eindhoven University of Technology. * [19] Palmowksi, Z., Rolski, T. (2004). On busy period asymptotics in the \\(GI/GI/1\\) queue. Submitted for publication, available at [http://www.math.uni.wroc.pl/~zpalma/publication.html](http://www.math.uni.wroc.pl/~zpalma/publication.html) * [20] Ramanan, K., Stolyar, A. (2001). Largest weighted delay first scheduling: large deviations and optimality. _Annals of Applied Probability_**11**, 1-48. * [21] Rosenkrantz, W. (1983). Calculation of the Laplace transform of the length of the busy period for the \\(M/G/1\\) queue via martingales. _Annals of Probability_**11**, 817-818. * [22] Schrage, L. (1968). A proof of the optimality of the shortest remaining service time discipline. _Operations Research_**16**, 670-690.
We consider a \\(GI/GI/1\\) queue with the shortest remaining processing time discipline (SRPT) and light-tailed service times. Our interest is focused on the tail behavior of the sojourn-time distribution. We obtain a general expression for its large-deviations decay rate. The value of this decay rate critically depends on whether there is mass in the endpoint of the service-time distribution or not. An auxiliary priority queue, for which we obtain some new results, plays an important role in our analysis. We apply our SRPT results to compare SRPT with FIFO from a large-deviations point of view.
# Asymptotic analysis of the GI/M/1/n loss system as n increases to infinity

Vyacheslav M. Abramov ([email protected]) 24/6 Balfour st., Petach Tiqva 49350, Israel

## 1 Introduction

Consider the \\(GI/M/1/n\\) queueing system, denoting by \\(A(x)\\) the probability distribution function of the interarrival time and by \\(\\lambda\\) the reciprocal of the expected interarrival time; let \\(\\alpha(s)=\\int_{0}^{\\infty}\\mathrm{e}^{-sx}\\mathrm{d}A(x)\\). The parameter of the service time distribution is denoted by \\(\\mu\\), and the load of the system is \\(\\rho=\\lambda/\\mu\\). The buffer size \\(n\\) includes the position for the server. Denote also \\(\\rho_{m}=\\mu^{m}\\int_{0}^{\\infty}x^{m}\\mathrm{d}A(x)\\), \\(m=1,2,\\ldots\\) \\((\\rho_{1}=\\rho^{-1})\\). The explicit representation for the loss probability in terms of a generating function was obtained by Miyazawa [12]. Namely, he showed that, whatever the value of the load \\(\\rho\\), the loss probability \\(p_{n}\\) always exists and has the representation \\[p_{n}=\\frac{1}{\\sum_{j=0}^{n}\\pi_{j}}, \\tag{1.1}\\] where the generating function \\(\\Pi(z)\\) of \\(\\pi_{j}\\), \\(j=0,1,\\ldots\\), is given by \\[\\Pi(z)=\\sum_{j=0}^{\\infty}\\pi_{j}z^{j}=\\frac{(1-z)\\alpha(\\mu-\\mu z)}{\\alpha(\\mu-\\mu z)-z},\\ \\ |z|<\\sigma, \\tag{1.2}\\] and \\(\\sigma\\) is the minimum nonnegative solution of the functional equation \\(z=\\alpha(\\mu-\\mu z)\\). This solution belongs to the open interval (0,1) if \\(\\lambda<\\mu\\), and it is equal to 1 otherwise. In the recent papers, Choi and Kim [8] and Choi et al. [9] study questions related to the asymptotic behavior of the sequence \\(\\{\\pi_{j}\\}\\) as \\(j\\to\\infty\\). Namely, they study the asymptotic behavior of the loss probability \\(p_{n}\\) as \\(n\\to\\infty\\), as well as obtain the convergence rate of the stationary distributions of the \\(GI/M/1/n\\) queueing system to those of the \\(GI/M/1\\) queueing system as \\(n\\to\\infty\\). The analysis of [8] and [9] is based on the theory of analytic functions. The approach of this paper is based on Tauberian theorems with remainder, which permit us to simplify the proofs of the results of the abovementioned paper of Choi et al. [9] as well as to obtain some new results on the asymptotic behavior of the loss probability. For the asymptotic behavior of the loss probability in the \\(M/GI/1/n\\) queue see Abramov [1], [2], Asmussen [6], Takagi [16], Tomko [17], Willmot [18], etc. For the asymptotic analysis of queueing systems more general than \\(M/GI/1/n\\) see Abramov [3], Baiocchi [7], etc. The study of the loss probability and its asymptotic analysis is motivated by the growing development of communication systems. The results of our study can be applied to problems of flow control, performance evaluation and redundancy. For applications of the loss probability to such kinds of problems see Ait-Hellal et al. [4], Altman and Jean-Marie [5], Cidon et al. [10], Gurewitz et al. [11].

## 2 Auxiliary results. Tauberian theorems

In this section we present the asymptotic results of Takacs [15, p. 22-23] (see Lemma 2.1 below) and the Tauberian theorems of Postnikov [13, Section 25] (see Lemmas 2.2 and 2.3 below). Let \\(Q_{j}\\), \\(j=0,1,\\ldots\\), be a sequence of real numbers satisfying the recurrence relation \\[Q_{n}=\\sum_{j=0}^{n}r_{j}Q_{n-j+1}, \\tag{2.1}\\] where \\(r_{j}\\), \\(j=0,1,\\ldots\\), are nonnegative numbers, \\(r_{0}>0\\), \\(r_{0}+r_{1}+\\cdots=1\\), and \\(Q_{0}>0\\) is an arbitrary real number.
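The representation (1.1)-(1.2) is easy to evaluate numerically: since \\(\\alpha(\\mu)>0\\), the coefficients \\(\\pi_{j}\\) can be obtained by matching Taylor coefficients in \\(\\Pi(z)\\big(\\alpha(\\mu-\\mu z)-z\\big)=(1-z)\\alpha(\\mu-\\mu z)\\), a coefficient-matching closely related to recurrences of the form (2.1). The sketch below is only an illustration and is not part of the paper: the language (Python), the helper name `pi_coefficients`, and the choice of exponential interarrival times with \\(\\lambda=0.8\\), \\(\\mu=1\\) (so that the classical blocking formula for a single-server Markovian queue with total capacity \\(n\\) is available as a cross-check) are assumptions of the sketch.

```python
import numpy as np

def pi_coefficients(a, N):
    """Taylor coefficients pi_0..pi_N of Pi(z) = (1-z)a(z)/(a(z)-z),
    where a(z) = alpha(mu - mu*z) and a[k] is its k-th Taylor coefficient.
    Obtained by matching coefficients in Pi(z)*(a(z)-z) = (1-z)*a(z)."""
    a = np.asarray(a, dtype=float)
    d = a.copy(); d[1] -= 1.0        # d(z) = a(z) - z
    g = a.copy(); g[1:] -= a[:-1]    # g(z) = (1-z)*a(z)
    pi = np.zeros(N + 1)
    for k in range(N + 1):
        conv = sum(pi[i] * d[k - i] for i in range(k))
        pi[k] = (g[k] - conv) / d[0]  # d[0] = alpha(mu) > 0
    return pi

# Illustrative special case (an assumption): exponential interarrival times,
# alpha(s) = lam/(lam+s), so a(z) = lam/(lam+mu-mu*z) is a geometric series.
lam, mu, N = 0.8, 1.0, 60
q = mu / (lam + mu)
a = (lam / (lam + mu)) * q ** np.arange(N + 2)

pi = pi_coefficients(a, N)
rho = lam / mu
for n in (5, 10, 20):
    p_n = 1.0 / pi[: n + 1].sum()                    # loss probability from (1.1)
    # blocking probability of the M/M/1 queue with total capacity n
    # (the buffer size n includes the position for the server)
    exact = (1 - rho) * rho**n / (1 - rho**(n + 1))
    print(n, p_n, exact)
```

In this special case the two computed values agree, which is consistent with (1.1)-(1.2); for a general interarrival distribution only the Taylor coefficients of \\(\\alpha(\\mu-\\mu z)\\) need to be supplied.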
Denote \\(r(z)=\\sum_{j=0}^{\\infty}r_{j}z^{j}\\), \\(|z|\\leq 1\\), \\(\\gamma_{m}=r^{(m)}(1-0)=\\lim_{z\\uparrow 1}r^{(m)}(z)\\), where \\(r^{(m)}(z)\\) is the \\(m\\)th derivative of \\(r(z)\\). Then for \\(Q(z)=\\sum_{j=0}^{\\infty}Q_{j}z^{j}\\), the generating function of \\(Q_{j}\\), \\(j=0,1, \\), we have the following representation \\[Q(z)=\\frac{Q_{0}r(z)}{r(z)-z}. \\tag{2.2}\\] The statements below are known theorems on asymptotic behavior of the sequence \\(\\{Q_{j}\\}\\) as \\(j\\to\\infty\\). Lemma 2.1 below joins two results by Takacs [15]: Theorem 5 on p. 22 and relation (35) on p. 23. **Lemma 2.1** (Takacs [15]). _If \\(\\gamma_{1}<1\\) then_ \\[\\lim_{n\\to\\infty}Q_{n}=\\frac{Q_{0}}{1-\\gamma_{1}}.\\] _If \\(\\gamma_{1}=1\\) and \\(\\gamma_{2}<\\infty\\) then_ \\[\\lim_{n\\to\\infty}\\frac{Q_{n}}{n}=\\frac{2Q_{0}}{\\gamma_{2}}.\\] _If \\(\\gamma_{1}>1\\) then_ \\[\\lim_{n\\to\\infty}\\left(Q_{n}-\\frac{Q_{0}}{\\delta^{n}[1-r^{\\prime}(\\delta)]} \\right)=\\frac{Q_{0}}{1-\\gamma_{1}},\\] _where \\(\\delta\\) is the least (absolute) root of equation \\(z=r(z)\\)._ **Lemma 2.2** (Postnikov [13]). _Let \\(\\gamma_{1}=1\\) and \\(\\gamma_{3}<\\infty\\). Then as \\(n\\to\\infty\\)_ \\[Q_{n}=\\frac{2Q_{0}}{\\gamma_{2}}n+O(\\log n).\\] **Lemma 2.3** (Postnikov [13]). _Let \\(\\gamma_{1}=1\\), \\(\\gamma_{2}<\\infty\\) and \\(r_{0}+r_{1}<1\\). Then as \\(n\\to\\infty\\)_ \\[Q_{n+1}-Q_{n}=\\frac{2Q_{0}}{\\gamma_{2}}+o(1).\\] ## 3 The main results on asymptotic behavior of the loss probability Let us study (1.1) and (1.2) more carefully. Represent (1.2) as the difference of two terms \\[\\Pi(z)=\\frac{(1-z)\\alpha(\\mu-\\mu z)}{\\alpha(\\mu-\\mu z)-z}=\\frac{\\alpha(\\mu- \\mu z)}{\\alpha(\\mu-\\mu z)-z}-z\\frac{\\alpha(\\mu-\\mu z)}{\\alpha(\\mu-\\mu z)-z}\\] \\[=\\widetilde{\\Pi}(z)-z\\widetilde{\\Pi}(z), \\tag{3.1}\\] where \\[\\widetilde{\\Pi}(z)=\\sum_{j=0}^{\\infty}\\widetilde{\\pi}_{j}z^{j}=\\frac{\\alpha( \\mu-\\mu z)}{\\alpha(\\mu-\\mu z)-z}. \\tag{3.2}\\] Note also that \\[\\pi_{0}=\\widetilde{\\pi}_{0}=1,\\]\\[\\pi_{j+1}=\\widetilde{\\pi}_{j+1}-\\widetilde{\\pi}_{j},\\quad j\\geq 0. \\tag{3.3}\\] Therefore, \\[\\sum_{j=0}^{n}\\pi_{j}=\\widetilde{\\pi}_{n},\\] and \\[p_{n}=\\frac{1}{\\widetilde{\\pi}_{n}}. \\tag{3.4}\\] Now the application of Lemma 2.1 yields the following **Theorem 3.1.**_In the case where \\(\\rho<1\\) as \\(n\\to\\infty\\) we have_ \\[p_{n}=\\frac{(1-\\rho)[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n}}{1-\\rho- \\rho[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n}}+o(\\sigma^{2n}). \\tag{3.5}\\] _In the case where \\(\\rho_{2}<\\infty\\) and \\(\\rho=1\\) we have_ \\[\\lim_{n\\to\\infty}np_{n}=\\frac{\\rho_{2}}{2}. \\tag{3.6}\\] _In the case where \\(\\rho>1\\) we have_ \\[\\lim_{n\\to\\infty}p_{n}=\\frac{\\rho-1}{\\rho}. \\tag{3.7}\\] **Proof.** Indeed, it follows from (3.1), (3.2) and (3.3) that \\(\\widetilde{\\pi}_{0}=1\\), and \\[\\widetilde{\\pi}_{k}=\\sum_{i=0}^{k}\\frac{(-\\mu)^{i}}{i!}\\alpha^{(i)}(\\mu) \\widetilde{\\pi}_{k-i+1}, \\tag{3.8}\\] where \\(\\alpha^{(i)}(\\mu)\\) denotes the \\(i\\)th derivative of \\(\\alpha(\\mu)\\). Note also that \\(\\alpha(\\mu)>0\\), the terms \\((-\\mu)^{i}\\alpha^{(i)}(\\mu)/i!\\) are nonnegative for all \\(i\\geq 1\\), and \\[\\sum_{i=0}^{\\infty}\\frac{(-\\mu)^{i}}{i!}\\alpha^{(i)}(\\mu)=\\sum_{i=0}^{\\infty} \\int_{0}^{\\infty}\\mathrm{e}^{-\\mu x}\\frac{(\\mu x)^{i}}{i!}\\mathrm{d}A(x)\\] \\[=\\int_{0}^{\\infty}\\sum_{i=0}^{\\infty}\\mathrm{e}^{-\\mu x}\\frac{(\\mu x)^{i}}{i! 
}\\mathrm{d}A(x)=1. \\tag{3.9}\\] Therefore one can apply Lemma 2.1. Then in the case of \\(\\rho<1\\) one can write \\[\\lim_{n\\to\\infty}\\Big{(}\\widetilde{\\pi}_{n}-\\frac{1}{\\sigma^{n}[1+\\mu\\alpha^{ \\prime}(\\mu-\\mu\\sigma)]}\\Big{)}=\\frac{\\rho}{\\rho-1}, \\tag{3.10}\\] and for large \\(n\\) relation (3.10) can be rewritten in the form of the estimation \\[\\widetilde{\\pi}_{n}=\\Big{[}\\frac{1}{\\sigma^{n}[1+\\mu\\alpha^{\\prime}(\\mu-\\mu \\sigma)]}+\\frac{\\rho}{\\rho-1}\\Big{]}[1+o(\\sigma^{n})]. \\tag{3.11}\\]In turn, from (3.11) for large \\(n\\) we obtain \\[p_{n} = \\frac{1}{\\bar{\\pi}_{n}}=\\frac{(1-\\rho)[1+\\mu\\alpha^{\\prime}(\\mu-\\mu \\sigma)]\\sigma^{n}}{1-\\rho-\\rho[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n} }[1+o(\\sigma^{n})]\\] \\[= \\frac{(1-\\rho)[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n}}{1- \\rho-\\rho[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n}}+o(\\sigma^{2n}).\\] Thus (3.5) is proved. The limiting relations (3.6) and (3.7) follow immediately by application of Lemma 2.1. Theorem 3.1 is proved. The following two theorems improve limiting relation (3.6). From Lemma 2.2 we have the following **Theorem 3.2.**_Assume that \\(\\rho=1\\) and \\(\\rho_{3}<\\infty\\). Then as \\(n\\to\\infty\\)_ \\[p_{n}=\\frac{\\rho_{2}}{2n}+O\\Big{(}\\frac{\\log n}{n^{2}}\\Big{)}. \\tag{3.12}\\] **Proof.** The result follows immediately by application of Lemma 2.2. Subsequently, from Lemma 2.3 we have **Theorem 3.3.**_Assume that \\(\\rho=1\\) and \\(\\rho_{2}<\\infty\\). Then as \\(n\\to\\infty\\)_ \\[\\frac{1}{p_{n+1}}-\\frac{1}{p_{n}}=\\frac{2}{\\rho_{2}}+o(1). \\tag{3.13}\\] **Proof**. The theorem will be proved if we show that for all \\(\\mu>0\\) \\[\\alpha(\\mu)-\\mu\\alpha^{\\prime}(\\mu)<1. \\tag{3.14}\\] Taking into account (3.9) and the fact that \\((-\\mu)^{i}\\alpha^{(i)}(\\mu)/i!\\geq 0\\) for all \\(i\\geq 0\\), one can write \\[\\alpha(\\mu)-\\mu\\alpha^{\\prime}(\\mu)\\leq 1. \\tag{3.15}\\] Thus, we have to show that for some \\(\\mu_{0}>0\\) the equality \\[\\alpha(\\mu_{0})-\\mu_{0}\\alpha^{\\prime}(\\mu_{0})=1 \\tag{3.16}\\] is not a case. Indeed, since \\(\\alpha(\\mu)-\\mu\\alpha^{\\prime}(\\mu)\\) is an analytic function then, according to the theorem on maximum absolute value of analytic function, the equality \\(\\alpha(\\mu)-\\mu\\alpha^{\\prime}(\\mu)=1\\) is valid for all \\(\\mu>0\\). This means that (3.16) is valid if and only if \\(\\alpha^{(i)}(\\mu)=0\\) for all \\(i\\geq 2\\) and for all \\(\\mu>0\\), and therefore \\(\\alpha(\\mu)\\) is a linear function, i.e. \\(\\alpha(\\mu)=c_{0}+c_{1}\\mu\\), where \\(c_{0}\\) and \\(c_{1}\\) are some constants. However, since \\(|\\alpha(\\mu)|\\leq 1\\) we obtain \\(c_{0}=1\\), \\(c_{1}=0\\). This is a trivial case where the probability distribution function \\(A(x)\\) is concentrated in point \\(0\\). Therefore (3.16) is not a case, and hence (3.14) holds. Theorem 3.3 is proved. We have also the following **Theorem 3.4**. _Let \\(\\rho=1-\\epsilon\\), where \\(\\epsilon>0\\), and \\(\\epsilon n\\to C>0\\) as \\(n\\to\\infty\\) and \\(\\epsilon\\to 0\\). Assume that \\(\\rho_{3}=\\rho_{3}(n)\\) is a bounded function and there exists \\(\\widetilde{\\rho}_{2}=\\lim_{n\\to\\infty}\\rho_{2}(n)\\). Then,_ \\[p_{n}=\\frac{\\epsilon e^{-2C/\\widetilde{\\rho}_{2}}}{1-e^{-2C/\\widetilde{\\rho} _{2}}}[1+o(1)]. \\tag{3.17}\\] **Proof.** It was shown in Subhankulov [14, p. 
326] that if \\(\\rho^{-1}=1+\\epsilon\\), \\(\\epsilon>0\\) and \\(\\epsilon\\to 0\\), \\(\\rho_{3}(n)\\) is a bounded function, and there exists \\(\\widetilde{\\rho}_{2}=\\lim_{n\\to\\infty}\\rho_{2}(n)\\) then \\[\\sigma=1-\\frac{2\\epsilon}{\\widetilde{\\rho}_{2}}+O(\\epsilon^{2}), \\tag{3.18}\\] where \\(\\sigma=\\sigma(n)\\) is the minimum root of the functional equation \\(z-\\alpha(\\mu-\\mu z)=0\\), \\(|z|\\leq 1\\), and where the parameter \\(\\mu\\) and the function \\(\\alpha(z)\\), both or one of them, are assumed to depend on \\(n\\). Therefore, (3.18) is also valid under the assumptions of the theorem. Then after some algebra one can obtain \\[[1+\\mu\\alpha^{\\prime}(\\mu-\\mu\\sigma)]\\sigma^{n}=\\epsilon\\mathrm{e}^{-2C/ \\widetilde{\\rho}_{2}}[1+o(1)],\\] and the result easily follows from estimation (3.11). **Theorem 3.5**. _Let \\(\\rho=1-\\epsilon\\), where \\(\\epsilon>0\\), and \\(\\epsilon n\\to 0\\) as \\(n\\to\\infty\\) and \\(\\epsilon\\to 0\\). Assume that \\(\\rho_{3}=\\rho_{3}(n)\\) is a bounded function and there exists \\(\\widetilde{\\rho}_{2}=\\lim_{n\\to\\infty}\\rho_{2}(n)\\). Then_ \\[p_{n}=\\frac{\\widetilde{\\rho}_{2}}{2n}+o\\Big{(}\\frac{1}{n}\\Big{)}. \\tag{3.19}\\] **Proof.** The proof follows by expanding of the main term of asymptotic relation (3.17) for small \\(C\\). ## 4 Discussion We obtained a number of asymptotic results related to the loss probability for the \\(GI/M/1/n\\) queueing system by using Tauberian theoremswith remainder. Asymptotic relations (3.6) and (3.7) of Theorem 3.1 are the same as correspondent asymptotic relations of Theorem 3 of [9]. Asymptotic relation (3.5) of Theorem 3.1 improves correspondent asymptotic relation of Theorem 3 of [9], however it can be deduced from Theorem 3.1 of [8] and the second equation on p. 1016 of [8]. Under additional condition \\(\\rho_{3}<\\infty\\) the statement (3.12) of Theorem 3.2 is new. It improves the result of [9] under \\(\\rho=1\\): the remainder obtained in Theorem 3.2 is \\(O(\\log n/n^{2})\\) whereas under condition \\(\\rho_{2}<\\infty\\) the remainder obtained in Theorem 3 of [9] is \\(o(n^{-1})\\). Asymptotic relation (3.13) of Theorem 3.3 coincides with intermediate asymptotic relation on p. 441 of [9]. Theorems 3.4 and 3.5 are new. They provide asymptotic results where the load \\(\\rho\\) is close to 1. **Acknowledgement** The author thanks the anonymous referees for a number of valuable comments. **References** [1] V.M.Abramov, _Investigation of a Queueing System with Service Depending on Queue-Length_, (Donish, Dushanbe, 1991) (in Russian). [2] V.M.Abramov, On a property of a refusals stream, J. Appl. Probab. 34 (1997) 800-805. [3] V.M.Abramov, Asymptotic behavior of the number of lost packets, submitted for publication. [4] O.Ait-Hellal, E.Altman, A.Jean-Marie and I.A.Kurkova, On loss probabilities in presence of redundant packets and several traffic sources, Perform. Eval. 36-37 (1999) 485-518. [5] E.Altman and A.Jean-Marie, Loss probabilities for messages with redundant packets feeding a finite buffer, IEEE J. Select. Areas Commun. 16 (1998) 778-787. [6] S.Asmussen, Equilibrium properties of the \\(M/G/1\\) queue, Z. Wahrscheinlichkeitstheorie 58 (1981) 267-281. [7] A.Baiocchi, Analysis of the loss probability of the \\(MAP/G/1/K\\) queue, part I: Asymptotic theory, Stochastic Models 10 (1994) 867-893. [8] B.D.Choi and B.Kim, Sharp results on convergence rates for the distribution of \\(GI/M/1/K\\) queues as \\(K\\) tends to infinity, J. Appl. Probab. 
37 (2000) 1010-1019. [9] B.D.Choi, B.Kim and I.-S.Wee, Asymptotic behavior of loss probability in \\(GI/M/1/K\\) queue as \\(K\\) tends to infinity, Queueing Systems 36 (2000) 437-442. [10] I.Cidon, A.Khamisy and M.Sidi, Analysis of packet loss processes in high-speed networks, IEEE Trans. Inform. Theory 39 (1993) 98-108. [11] O.Gurewitz, M.Sidi and I.Cidon, The ballot theorem strikes again: Packet loss process distribution, IEEE Trans. Inform. Theory 46 (2000) 2588-2595. [12] M.Miyazawa, Complementary generating functions for the \\(M^{X}/GI/1/k\\) and \\(GI/M^{Y}/1/k\\) queues and their application to the comparison for loss probabilities, J. Appl. Probab. 27 (1990) 684-692. [13] A.G.Postnikov, Tauberian theory and its application. Proc. Steklov Math. Inst. 144 (1979) 1-148. [14] M.A.Subhankulov, _Tauberian Theorems with Remainder_. (Nauka, Moscow, 1976) (in Russian). [15] L.Takacs, _Combinatorial Methods in the Theory of Stochastic Processes_. (Wiley, New York, 1967). [16] H.Takagi, _Queueing Analysis_, Vol. 2. (Elsevier Science, Amsterdam, 1993). [17] J.Tomko, One limit theorem in queueing problem as input rate increases infinitely. Studia Sci. Math. Hungarica 2 (1967) 447-454 (in Russian). [18] G.E.Willmot, A note on the equilibrium \\(M/G/1\\) queue-length, J. Appl. Probab. 25 (1988) 228-231.
This paper provides the asymptotic analysis of the loss probability in the \\(GI/M/1/n\\) queueing system as \\(n\\) increases to infinity. The approach of this paper is an alternative to that of the recent papers of Choi and Kim [8] and Choi et al. [9] and is based on the application of modern Tauberian theorems with remainder. This enables us to simplify the proofs of the results on the asymptotic behavior of the loss probability of the abovementioned paper of Choi et al. [9] as well as to obtain some new results.

Keywords: Loss system, \\(GI/M/1/n\\) queue, asymptotic analysis, Tauberian theorems with remainder
# Losses in \\(M/GI/m/n\\) queues Vyacheslav M. Abramov Department of Mathematics, University of Wisconsin, Madison, WI 53706, USA [email protected] ## 1. Introduction Analysis of loss queueing systems is very important from both the theoretical and practical points of view. While the multiserver loss queueing system \\(M/GI/m/0\\) and its network extensions have been intensively studied (see the review paper of Kelly [20], the book of Ross [27] and references in these sources), the information about \\(M/GI/m/n\\) queueing systems (\\(n\\geq 1\\)) is very scanty, because explicit results for characteristics of these queueing systems are unknown. (In the present paper, for multiserver queueing systems the notation \\(M/GI/m/n\\) is used, where \\(m\\) denotes the number of servers and \\(n\\) denotes the number of waiting places. Another notation which is also acceptable in the literature is \\(M/GI/m/m+n\\).) From the practical point of view, \\(M/GI/m/n\\) queueing systems serve as a model for telephone systems, where \\(n\\) is the maximally possible number of calls that can wait in the line before their service start. The loss probability is one of the most significant performance characteristics. In the present paper, we study the expected number of losses during a busy period (the characteristic closely related to the stationary loss probability) under the assumption that the arrival rate (\\(\\lambda\\)) is equal to the maximum service capacity (\\(m\\mu\\)), which seems to be the most interesting from the theoretical point of view. There are two main reasons for studying this case. The first reason is that the case \\(\\lambda=m\\mu\\) is a critical case for queueing systems with \\(m\\) identical servers, i.e. the case associated with critically loaded systems. The theoretical and practical interest in studying heavily loaded loss systems is very high, and there are many results in the literature related to the analysis of the loss probability in heavily loaded systems. The asymptotic results for losses in heavily loaded single server systems (\\(n\\to\\infty\\)) such as \\(M/GI/1/n\\) and \\(GI/M/1/n\\)and for associated models of telecommunication systems and dams have been studied in [4], [8], [9], [11], [14] and [33]. Heavy-traffic analysis of losses in heavily loaded multiserver systems have been provided in [12], [33], [34] and [35]. The mathematical foundation of heavy traffic theory can be found in the textbook of Whitt [32]. Although the case \\(\\lambda=m\\mu\\) is idealistic, it enables us to understand the possible behaviour of the system in certain cases when the values \\(\\lambda\\) and \\(m\\mu\\) are close and approach one another as \\(n\\) increases to infinity. (Obtaining nontrivial results in the cases \\(\\lambda<m\\mu\\) and \\(\\lambda>m\\mu\\) is a hard problem, so the analytic investigation of the aforementioned asymptotic behaviour as \\(n\\) increases to infinity is difficult.) The second reason is that \\(\\lambda=m\\mu\\) is an interesting theoretical case associated with an extension of the following non-trivial property of the symmetric random walk. Let \\(X_{1}\\), \\(X_{2}\\), , \\(X_{i}\\), , be a sequence of independent and identically distributed random variables taking the values \\(\\pm 1\\) with the equal probability \\(\\frac{1}{2}\\). Let \\(S_{0}=0\\), and \\(S_{i+1}=S_{i}+X_{i+1}\\), \\(i\\geq 0\\), be a symmetric random walk, and let \\(t=\\tau\\) be the first time instant after \\(t=0\\) when this random walk returns to zero, i.e. 
\\(S_{\\tau}=0\\). It is known that the expected number of level-crossings through any level \\(n\\geq 1\\) (or \\(n\\leq-1\\)) is equal to \\(\\frac{1}{2}\\) independently of that level. The mentioning of this fact (but in a slightly different formulation) can be found in Szekely [30], and its proof is given in Wolff [37], p.411. The reformulation of this fact in terms of queueing theory is as follows. Consider \\(M/M/1/n\\) queueing system with equal arrival and service rates. For this system, the expected number of losses during a busy period is equal to \\(1\\) for all \\(n\\geq 0\\). It has been recently noticed that this property holds true for \\(M/GI/1/n\\) queueing systems. Namely, it was shown in several recent papers (see Abramov [1], [2], [4], Righter [26], Wolff [38]), that under mutually equal expectations of interarrival and service time, the expected number of losses during a busy period is equal to \\(1\\) for all \\(n\\geq 0\\). Further extension of this property to queueing systems with batch arrivals have been given in Abramov [5], Wolff [38] and Pekoz, Righter and Xia [25]. Applications of the aforementioned property of losses can be found in [9] for analysis of lost messages in telecommunication systems and in [11] for optimal control of large dams. Further relevant results associated with the properties of losses have been obtained in the paper by Pekoz, Righter and Xia [25]. They solved a characterization problem associated with the properties of losses in \\(GI/M/1/n\\) queues and established similar properties for \\(M/M/m/n\\) and \\(M^{X}/M/m/n\\) queueing systems. Recently, a similar property related to consecutive losses in busy periods of \\(M/GI/1/n\\) queueing systems has been discussed in [15]. It follows from the results obtained in this paper that for \\(M/GI/1/n\\) queueing systems with mutually equal expectations of interarrival and service times, the expected number of losses in series containing at least \\(k>1\\) consecutive losses during a busy period generally depends on \\(n\\). However, for \\(M/M/1/n\\) queueing systems with equal arrival and service rates that expected number of consecutive losses during a busy period is the same constant (depending on the value \\(k\\)) for all \\(n\\geq 0\\). The aim of the present paper is further theoretical contribution to this theory of losses, now to the theory of multiserver loss queueing systems. On the basis of the aforementioned results on losses in \\(M/GI/1/n\\) and \\(M/M/m/n\\) queueing systems we address the following open question. _Does the result on losses in \\(M/M/m/n\\) queueing systems remain true for those \\(M/GI/m/n\\) too?_The answer on this question is not elementary. On one hand, under the assumption \\(\\lambda=m\\mu\\) the expected numbers of losses in \\(M/GI/m/0\\) and \\(M/GI/m/n\\) queueing systems (\\(m\\geq 2\\) and \\(n\\geq 1\\)) during their busy periods are different. A simple example for this confirmation can be built for \\(M/GI/2/1\\) queueing systems having the service time distribution \\(G(x)=1-p\\mathrm{e}^{-\\mu_{1}x}-q\\mathrm{e}^{-\\mu_{2}x}\\), \\(p+q=1\\). The analysis of the stationary characteristics for these systems, resulting in an analysis of losses during a busy period, can be provided explicitly. 
Specifically, the structure of the \\(9\\times 9\\) Markov chain intensity matrix for the states of the Markov chain associated with an \\(M/GI/2/1\\) queueing system shows a clear difference between the structure of the stationary probabilities in \\(M/GI/2/1\\) queues and that in \\(M/GI/2/0\\) queues given by the Erlang-Sevastyanov formulae. So, the parameters \\(p\\), \\(q\\), \\(\\mu_{1}\\) and \\(\\mu_{2}\\) can be chosen such that the expected number of losses during busy periods in these two queueing systems will be different. On the other hand, the property of losses, which is similar to the aforementioned one, indeed holds. The correctness of this similar property for multiserver \\(M/GI/m/n\\) queueing systems is proved in the present paper. Namely, we establish the following results. Let \\(L_{m,n}\\) denote the number of losses during a busy period of the \\(M/GI/m/n\\) queueing system, let \\(\\lambda\\), \\(\\mu\\) be the arrival rate and, respectively, the reciprocal of the expected service time, and let \\(m\\), \\(n\\) denote the number of servers and, respectively, the number of waiting places. We will prove that, under the assumption \\(\\lambda=m\\mu\\), the expected number of losses during a busy period of the \\(M/GI/m/n\\) queueing system, \\(\\mathrm{E}L_{m,n}\\), is the same for all \\(n\\geq 1\\), which is _not_ generally the same as that for the \\(M/GI/m/0\\) loss queueing system (when \\(n=0\\)). In addition, if the probability distribution function of the service time belongs to the class NBU (New Better than Used), then \\(\\mathrm{E}L_{m,n}=\\frac{cm^{m}}{m!}\\), where a constant \\(c\\geq 1\\) is independent of \\(n\\geq 1\\). In the opposite case of the NWU (New Worse than Used) service time distribution we correspondingly have \\(\\mathrm{E}L_{m,n}=\\frac{cm^{m}}{m!}\\) with a constant \\(c\\leq 1\\) independent of \\(n\\geq 1\\) as well. (The constant \\(c\\) becomes equal to \\(1\\) in the case of exponentially distributed service times.) Recall that a probability distribution function \\(\\Xi(x)\\) of a nonnegative random variable is said to belong to the class NBU if for all \\(x\\geq 0\\) and \\(y\\geq 0\\) we have \\(\\overline{\\Xi}(x+y)\\leq\\overline{\\Xi}(x)\\overline{\\Xi}(y)\\), where \\(\\overline{\\Xi}(x)=1-\\Xi(x)\\). If the opposite inequality holds, i.e. \\(\\overline{\\Xi}(x+y)\\geq\\overline{\\Xi}(x)\\overline{\\Xi}(y)\\), then \\(\\Xi(x)\\) is said to belong to the class NWU. The proof of the main results of this paper is based on an application of the level-crossing approach to the special type stationary processes. The construction of the level-crossings approach used in this paper is a substantially extended version of that used in the earlier papers by the author (e.g. [1], [3], [6], [7], [10] and [13]) and by Pechinkin [24]. It uses modern geometric methods of analysis and involves an algebraically close system of processes and a nontrivial construction of deleting intervals and merging the ends together with nontrivial applications of the PASTA property. Throughout the paper, it is assumed that \\(m\\geq 2\\). (This is not the loss of generality since the case \\(m=1\\) is known, see [4], [26] and [38].) The paper is organized as follows. In Section 2, which is the first part of the paper, \\(M/M/m/n\\) queueing systems are studied. The results for \\(M/M/m/n\\) queueing systems are then used in Section 3, which is the second part of the paper, in order to study \\(M/GI/m/n\\) queueing systems. 
The study in both of Sections 2 and 3 is based on the level-crossing approach. The construction of level-crossingsfor \\(M/M/m/n\\) queueing systems is then developed for \\(M/GI/m/n\\) queueing systems as follows. The stationary processes associated with these queueing systems is considered, and the stochastic relations between the times spent in state \\(m-1\\) associated with \\(m-1\\) busy servers during a busy period of \\(M/GI/m/n\\) (\\(n\\geq 1\\)) and \\(M/GI/m-1/0\\) queueing systems are established. To prove these stochastic relations, some ideas from the paper of Pechinkin [24] are involved to adapt and develop the level-crossing method for the problems of the present paper. The obtained stochastic relations are crucial, and they are then used to prove the main results of the paper in Section 4. In Section 5, possible development of the results for \\(M^{X}/GI/m/n\\) queueing systems with batch arrivals is discussed. ## 2. The \\(M/m/n\\) queueing system In this section, the Markovian \\(M/M/m/n\\) loss queueing system is studied with the aid of the level-crossings approach, in order to establish some relevant properties of this queueing system. Those properties are then developed for \\(M/GI/m/n\\) queueing systems in the following sections. Let \\(f(j)\\), \\(1\\leq j\\leq n+m+1\\), denote the number of customers arriving during a busy period who, upon their arrival, meet \\(j-1\\) customers in the system. It is clear that \\(f(1)=1\\) with probability \\(1\\). Let \\(t_{j,1}\\), \\(t_{j,2}\\), , \\(t_{j,f(j)}\\) be the instants of arrival of these \\(f(j)\\) customers, and let \\(s_{j,1}\\), \\(s_{j,2}\\), , \\(s_{j,f(j)}\\) be the instants of the service completions when there remain only \\(j-1\\) customers in the system. Notice, that \\(t_{n+m+1,k}=s_{n+m+1,k}\\) for all \\(k=1,2,\\ldots,f(n+m+1)\\). For \\(1\\leq j\\leq n+m\\) let us consider the intervals \\[(t_{j,1},s_{j,1}],(t_{j,2},s_{j,2}],\\ldots,(t_{j,f(j)},s_{j,f(j)}]. \\tag{2.1}\\] Then, by incrementing index \\(j\\) we have the following intervals \\[(t_{j+1,1},s_{j+1,1}],(t_{j+1,2},s_{j+1,2}],\\ldots,(t_{j+1,f(j+1)},s_{j+1,f(j+ 1)}]. \\tag{2.2}\\] Delete the intervals of (2.2) from those of (2.1) and merge the ends, that is each point \\(t_{j+1,k}\\) with the corresponding point \\(s_{j+1,k}\\), \\(k=1,2, ,f(j+1)\\) (see Figure 1). Then \\(f(j+1)\\) has the following properties. According to the property of the lack of memory of the exponential distribution, the residual service time for a service completion, after the procedure of deleting the interval and merging the ends as it is indicated above, remains exponentially distributed with parameter \\(\\mu\\min(j,m)\\). Therefore, the number of points generated by merging the ends within the given interval \\((t_{j,1},s_{j,1}]\\) coincides in distribution with the number of arrivals of the Poisson process with rate \\(\\lambda\\) during an exponentially distributed service time with parameter \\(\\mu\\min(j,m)\\). Namely, for \\(1\\leq j\\leq m-1\\) we obtain \\[\\mathrm{E}\\{f(j+1)|f(j)=1\\}=\\sum_{u=1}^{\\infty}u\\int_{0}^{\\infty}\\mathrm{e}^{ -\\lambda x}\\frac{(\\lambda x)^{u}}{u!}j\\mu\\mathrm{e}^{-j\\mu x}\\mathrm{d}x= \\frac{\\lambda}{j\\mu}.\\] Considering now a random number \\(f(j)\\) of intervals (2.1) we have \\[\\mathrm{E}\\{f(j+1)|f(j)\\}=\\frac{\\lambda}{j\\mu}f(j). \\tag{2.3}\\]Figure 1. 
Figure 1. Level crossings during a busy period in a Markovian system.

Analogously, denoting the load of the system by \\(\\rho=\\frac{\\lambda}{m\\mu}\\), for \\(m\\leq j\\leq m+n\\) we have \\[\\mathrm{E}\\{f(j+1)|f(j)\\}=\\frac{\\lambda}{m\\mu}f(j)=\\rho f(j). \\tag{2.4}\\] The properties (2.3) and (2.4) mean that the stochastic sequence \\[\\left\\{f(j+1)\\left(\\frac{\\mu}{\\lambda}\\right)^{j}\\prod_{i=1}^{j}\\min(i,m),\\mathcal{F}_{j+1}\\right\\},\\ \\ \\mathcal{F}_{j}=\\sigma\\{f(1),f(2),\\ldots,f(j)\\}, \\tag{2.5}\\] forms a martingale. It follows from (2.5) that for \\(0\\leq j\\leq m-1\\) \\[\\mathrm{E}f(j+1)=\\frac{\\lambda^{j}}{j!\\mu^{j}}, \\tag{2.6}\\] and for \\(m\\leq j\\leq m+n\\) \\[\\mathrm{E}f(j+1)=\\frac{\\lambda^{m}}{m!\\mu^{m}}\\ \\rho^{j-m}. \\tag{2.7}\\] For example, when \\(\\rho=1\\), from (2.7) we obtain the particular case of the result of Pekoz, Righter and Xia [25]: \\(\\mathrm{E}L_{m,n}=\\mathrm{E}f(n+m+1)=\\frac{m^{m}}{m!}\\) for all \\(n\\geq 0\\), where \\(L_{m,n}\\) denotes the number of losses during a busy period of the \\(M/M/m/n\\) queueing system. Next, let \\(B(j)\\) be the period of time during a busy cycle of the \\(M/M/m/n\\) queueing system when there are exactly \\(j\\) customers in the system. For \\(0\\leq j\\leq m+n\\) we have: \\[\\lambda\\mathrm{E}B(j)=\\mathrm{E}f(j+1)=\\begin{cases}\\frac{\\lambda^{j}}{j!\\mu^{j}},&\\text{for }0\\leq j\\leq m-1,\\\\ \\frac{\\lambda^{m}}{m!\\mu^{m}}\\rho^{j-m},&\\text{for }m\\leq j\\leq m+n.\\end{cases} \\tag{2.8}\\] Now, introduce the following notation. Let \\(T_{m,n}\\) denote the length of a busy period of the \\(M/M/m/n\\) queueing system, let \\(T_{m,0}\\) denote the length of a busy period of the \\(M/M/m/0\\) queueing system with the same arrival and service rates as in the initial \\(M/M/m/n\\) queueing system, and let \\(\\zeta_{n}\\) denote the length of a busy period of the \\(M/M/1/n\\) queueing system with arrival rate \\(\\lambda\\) and service rate \\(\\mu m\\). From (2.6)-(2.8), for the expectation of a busy period of the \\(M/M/m/n\\) queueing system we have \\[\\mathrm{E}T_{m,n}=\\sum_{j=1}^{n+m}\\mathrm{E}B(j)=\\sum_{j=1}^{m-1}\\frac{\\lambda^{j-1}}{j!\\mu^{j}}+\\frac{\\lambda^{m-1}}{m!\\mu^{m}}\\sum_{j=0}^{n}\\rho^{j}. \\tag{2.9}\\] In turn, for the expectation of a busy period of the \\(M/M/m/0\\) queueing system we have \\[\\mathrm{E}T_{m,0}=\\sum_{j=1}^{m}\\mathrm{E}B(j)=\\sum_{j=1}^{m}\\frac{\\lambda^{j-1}}{j!\\mu^{j}}, \\tag{2.10}\\] where (2.10) is the particular case of (2.9) with \\(n=0\\). It is clear that \\(T_{m,n}\\) contains one busy period \\(T_{m-1,0}\\), where the subscript \\(m-1\\) underlines that there are \\(m-1\\) servers, and a random number of independent busy periods, which will be called _orbital busy periods_. Denote an orbital busy period by \\(\\zeta_{n}\\). (It is assumed that an orbital busy period \\(\\zeta_{n}\\) starts at the instant when an arriving customer finds \\(m-1\\) servers busy and occupies the \\(m\\)th server, and it finishes at the instant when, after a service completion, only \\(m-1\\) busy servers remain for the first time.) Therefore, denoting the independent sequence of identically distributed orbital busy periods by \\(\\zeta_{n}^{(1)},\\zeta_{n}^{(2)},\\ldots\\), we have \\[T_{m,n}{\\buildrel d\\over{=}}T_{m-1,0}+\\sum_{i=1}^{\\kappa}\\zeta_{n}^{(i)}, \\tag{2.11}\\] where \\(\\kappa\\) is the random number of the aforementioned orbital busy periods and \\({\\buildrel d\\over{=}}\\) means equality in distribution.
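As a quick check on the formulas above, one can estimate \\(\\mathrm{E}L_{m,n}=\\mathrm{E}f(n+m+1)\\) and \\(\\mathrm{E}T_{m,n}\\) by simulating busy periods of the Markovian \\(M/M/m/n\\) system and comparing the Monte Carlo averages with (2.7) and (2.9). The sketch below is only such an illustration and is not part of the paper: the language (Python), the function names, and the parameter choices (\\(m=3\\), \\(\\mu=1\\), \\(\\lambda=m\\mu\\), \\(10^{5}\\) busy periods) are assumptions of the sketch.

```python
import math
import random

def busy_period(lam, mu, m, n, rng):
    """Simulate one busy period of the M/M/m/n queue (m servers, n waiting
    places).  Returns the number of losses and the length of the busy period."""
    k, losses, t = 1, 0, 0.0                  # the opening arrival is already in service
    while k > 0:
        srv = mu * min(k, m)                  # total service rate in state k
        t += rng.expovariate(lam + srv)       # time to the next event
        if rng.random() < lam / (lam + srv):  # next event is an arrival
            if k == m + n:
                losses += 1                   # system full: the arriving customer is lost
            else:
                k += 1
        else:                                 # next event is a service completion
            k -= 1
    return losses, t

def monte_carlo(lam, mu, m, n, reps, rng):
    tot_l = tot_t = 0.0
    for _ in range(reps):
        l, t = busy_period(lam, mu, m, n, rng)
        tot_l += l
        tot_t += t
    return tot_l / reps, tot_t / reps

if __name__ == "__main__":
    m, mu = 3, 1.0
    lam = m * mu                              # the critical case lambda = m*mu, i.e. rho = 1
    rho = lam / (m * mu)
    rng = random.Random(12345)
    for n in (1, 2, 5):
        EL, ET = monte_carlo(lam, mu, m, n, 100_000, rng)
        EL_th = lam**m / (math.factorial(m) * mu**m) * rho**n          # E f(n+m+1), from (2.7)
        ET_th = (sum(lam**(j - 1) / (math.factorial(j) * mu**j) for j in range(1, m))
                 + lam**(m - 1) / (math.factorial(m) * mu**m) * sum(rho**j for j in range(n + 1)))  # (2.9)
        print(f"n={n}: E[L] ~ {EL:.3f} (formula {EL_th:.3f}), "
              f"E[T] ~ {ET:.3f} (formula {ET_th:.3f})")
```

At \\(\\rho=1\\) the value \\(\\mathrm{E}L_{m,n}=m^{m}/m!\\) (equal to \\(4.5\\) for \\(m=3\\)) does not depend on \\(n\\), which is the property of Pekoz, Righter and Xia [25] quoted above.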
It follows from (2.9), (2.10) and (2.11) \\[\\mathrm{E}\\sum_{i=1}^{\\kappa}\\zeta_{n}^{(i)}=\\frac{\\lambda^{m-1}}{m!\\mu^{m}} \\sum_{j=0}^{n}\\rho^{j}. \\tag{2.12}\\] On the other hand, the expectation of an orbital busy period \\(\\zeta_{n}\\) is \\[\\mathrm{E}\\zeta_{n}=\\frac{1}{m\\mu}\\sum_{j=0}^{n}\\rho^{j}\\] (this can be easily checked, for example, by the level-crossings method [1], [6] and an application of Wald's identity [16], p. 384), and we obtain \\[\\mathrm{E}\\kappa=\\frac{\\lambda^{m-1}}{(m-1)!\\mu^{m-1}}. \\tag{2.13}\\] Thus, \\(\\mathrm{E}\\kappa\\) coincides with the expectation of the number of losses during a busy period in the \\(M/M/m/0\\) queueing system. In the case \\(\\rho=1\\) we have \\(\\mathrm{E}\\kappa=\\frac{m^{m}}{m!}\\). ## 3. \\(M/GI/m/n\\) queueing systems In this section, the inequalities between the times spent in the state \\(m-1\\) in the \\(M/GI/m/n\\) (\\(n\\geq 1\\)) and \\(M/GI/m/0\\) queueing systems during their busy periods are derived. Consider two queueing systems: \\(M/GI/m/n\\) (\\(n\\geq 1\\)) and \\(M/GI/m/0\\) both having the same arrival rate \\(\\lambda\\) and probability distribution function of a service time \\(G(x)\\), \\(\\frac{1}{\\mu}=\\int_{0}^{\\infty}x\\mathrm{d}G(x)<\\infty\\). Let \\(T_{m,n}(m-1)\\) denote the time spent in the state \\(m-1\\) during its busy period (i.e. the total time during a busy period when \\(m-1\\) servers are occupied) of the \\(M/GI/m/n\\) queueing system, and let \\(T_{m,0}(m-1)\\) have the same meaning for the \\(M/GI/m/0\\) queueing system. We prove the following lemma. **Lemma 3.1**.: _Under the assumption that the service time distribution \\(G(x)\\) belongs to the class NBU (NWU),_ \\[T_{m,n}(m-1)\\geq_{st}(\\text{resp. }\\leq_{st})\\ T_{m,0}(m-1), \\tag{3.1}\\] Proof.:. The proof of the lemma is relatively long. In order to make it transparent and easily readable we strongly indicate the steps of this proof given by several propositions (properties). There are also six figures (Figures 2-7) illustrating the constructions in the proof. Each of these figures contain two graphs. The first (upper) of them indicates the initial (or intermediate) possible path of the process (sometimes two-dimensional), while the second (lower) one indicates the part of the path of one or two-dimensional process after a time scaling or specific transformation (e.g. in Figure 5). Arc braces in the graphs indicate the intervals that should be deleted and their ends merged. Two-dimensional processes are shown as parallel graphs. For example, there are two parallel processes in Figure 3 which are shown in the upper graph, and there are two parallel processes which are shown in the lower graph. The same is in Figures 4, 6 and 7. For the purpose of the present paper we use strictly stationary processes of order 1 or _strictly 1-stationary processes_. Recall the definition of a strictly stationary process of order \\(n\\) (see [23], p.206). **Definition 3.2**.: The process \\(\\xi(t)\\) is said to be _strictly stationary of order \\(n\\)_ or _strictly \\(n\\)-stationary_, if for a given positive \\(n<\\infty\\), any \\(h\\) and \\(t_{1}\\), \\(t_{2}\\), , \\(t_{n}\\) the random vectors \\[\\Big{(}\\xi(t_{1}),\\xi(t_{2}),\\ldots,\\xi(t_{n})\\Big{)}\\text{ and }\\Big{(}\\xi(t_{ 1}+h),\\xi(t_{2}+h),\\ldots,\\xi(t_{n}+h)\\Big{)}\\] have identical joint distributions. 
If \\(n=1\\) then we have strictly 1-stationary processes satisfying the property: \\[\\mathrm{P}\\{\\xi(t)\\leq x\\}=\\mathrm{P}\\{\\xi(t+h)\\leq x\\}.\\] The probability distribution function \\(\\mathrm{P}\\{\\xi(t)\\leq x\\}\\) in this case will be called _limiting stationary distribution_. The class of strictly 1-stationary processes is wider than the class of strictly stationary processes, where it is required that for all finite dimensional distributions \\[\\mathrm{P}\\{(\\xi(t_{1}),\\xi(t_{2}),\\ldots,\\xi(t_{k}))\\in B_{k}\\}=\\mathrm{P}\\{ (\\xi(t_{1}+h),\\xi(t_{2}+h),\\ldots,\\xi(t_{k}+h))\\in B_{k}\\},\\] for any \\(h\\) and any Borel set \\(B_{k}\\subset\\mathbb{R}^{k}\\). The reason of using strictly 1-stationary processes rather than strictly stationary processes themselves is that, the operation of deleting intervals and merging the ends is algebraically close with respect to strictly 1-stationary processes, and it is _not_ closed with respect to strictly stationary processes. The last means that if \\(\\xi(t)\\) is a strictly 1-stationary process, then for any \\(h>0\\) and arbitrary \\(t_{0}\\) the new process \\[\\xi_{1}(t)=\\begin{cases}\\xi(t),&\\text{if }t\\leq t_{0},\\\\ \\xi(t+h),&\\text{if }t>t_{0}\\end{cases}\\] is also strictly 1-stationary and has the same one-dimensional distribution as the original process \\(\\xi(t)\\). The similar property is not longer valid for strictly stationary processes. If \\(\\xi(t)\\) is a strictly stationary process, then generally \\(\\xi_{1}(t)\\) is not strictly stationary. In the following the prefix'strictly' will be omitted, so strictly stationary and strictly 1-stationary processes will be correspondingly called stationary and 1-stationary processes. Let us introduce \\(m\\) independent and identically distributed stationary renewal processes (denoted below \\(\\mathbf{x}_{m}(t)\\)) with a renewal period having the probability distribution function \\(G(x)\\). On the basis of these renewal processes we build the stationary \\(m\\)-dimensional Markov process \\(\\mathbf{x}_{m}(t)=\\{\\xi_{1}(t),\\xi_{2}(t),\\ldots,\\xi_{m}(t)\\}\\), the coordinates \\(\\xi_{k}(t)\\), \\(k=\\)1,2, , \\(m\\) of which are the residual times to the next renewal times in time moment \\(t\\), following in an ascending order. Let us now consider the two \\((m+1)\\)-dimensional Markov processes corresponding to the \\(M/GI/m/n\\) (\\(n\\geq 1\\)) and \\(M/GI/m/0\\) queueing systems, which are denoted by \\(\\mathbf{y}_{m,n}(t)\\) and \\(\\mathbf{y}_{m,0}(t)\\). Let \\(Q_{m,n}(t)\\) denote the stationary queue-lengthprocess (the number of customers in the system) of the \\(M/GI/m/n\\) queueing system, and let \\(Q_{m,0}(t)\\) denote the stationary queue-length process corresponding to the \\(M/GI/m/0\\) queueing system. We have: \\[\\mathbf{y}_{m,n}(t)=\\left\\{\\eta_{1}^{(m,n)}(t),\\eta_{2}^{(m,n)}(t),\\ldots,\\eta_ {m}^{(m,n)}(t),Q_{m,n}(t)\\right\\},\\] where \\[\\left\\{\\eta_{m-\ u_{m,n}(t)+1}^{(m,n)}(t),\\eta_{m-\ u_{m,n}(t)+2}^{(m,n)}(t), \\ldots,\\eta_{m}^{(m,n)}(t)\\right\\}\\] are the ordered residual service times corresponding to \\(\ u_{m,n}(t)=\\min\\{m,Q_{m,n}(t)\\}\\) customers in service in time \\(t\\), and \\[\\left\\{\\eta_{1}^{(m,n)}(t),\\eta_{2}^{(m,n)}(t),\\ldots,\\eta_{m-\ u_{m,n}(t)}^{ (m,n)}(t)\\right\\}=\\{0,0,\\ldots,0\\}\\] all are zeros. 
Analogously, \\[\\mathbf{y}_{m,0}(t)=\\left\\{\\eta_{1}^{(m,0)}(t),\\eta_{2}^{(m,0)}(t),\\ldots, \\eta_{m}^{(m,0)}(t),Q_{m,0}(t)\\right\\},\\] only replacing the index \\(n\\) with \\(0\\). Let us delete all time intervals of the process \\(\\mathbf{y}_{m,n}(t)\\) related to the \\(M/GI/m/n\\) queueing system (\\(n\\geq 1\\)) where there are more than \\(m-1\\) or less than \\(m-1\\) customers and merge the ends. Remove the last component of the obtained process which is trivially equal to \\(m-1\\). Then we get the new \\((m-1)\\)-dimensional Markov process (in the following the prefix 'Markov' will be omitted and only used in the places where it is meaningful): \\[\\widehat{\\mathbf{y}}_{m-1,n}(t)=\\left\\{\\widehat{\\eta}_{1}^{(m,n)}(t),\\widehat {\\eta}_{2}^{(m,n)}(t),\\ldots,\\widehat{\\eta}_{m-1}^{(m,n)}(t)\\right\\},\\] the components of which are now denoted by hat. All of the components of this vector are 1-stationary, which is a consequence of the existence of the limiting stationary probabilities of the processes \\(\\eta_{j}^{(m,n)}(t)\\), \\(j=1,2,\\ldots,m\\) (e.g. Takacs [31]) and consequently those of the processes \\(\\widehat{\\eta}_{j}^{(m,n)}(t)\\), \\(j=1,2,\\ldots,m\\). The joint limiting stationary distribution of the process \\(\\widehat{\\mathbf{y}}_{m-1,n}(t)\\) can be obtained by conditioning of that of the processes \\(\\mathbf{y}_{m,n}(t)\\) given \\(Q_{m,n}(t)=m-1\\). The similar operation of deleting intervals and merging the ends, where there are less than \\(m-1\\) customers in the system, for the process \\(\\mathbf{y}_{m-1,0}(t)\\) is used. We correspondingly have \\[\\widehat{\\mathbf{y}}_{m-1,0}(t)=\\left\\{\\widehat{\\eta}_{1}^{(m,0)}(t),\\widehat {\\eta}_{2}^{(m,0)}(t),\\ldots,\\widehat{\\eta}_{m-1}^{(m,0)}(t)\\right\\}.\\] We establish the following elementary property related to the \\(M/GI/1/n\\) queueing systems, \\(n\\)=0,1, **Property 3.3**.: (3.2) \\[\\mathrm{P}\\left\\{\\eta_{1}^{(1,n)}(t)\\in B_{1}\\ |\\ Q_{1,n}(t)\\geq 1\\right\\}= \\mathrm{P}\\{\\mathbf{x}_{1}(t)\\in B_{1}\\},\\] _for any Borel set \\(B_{1}\\subset\\mathbb{R}^{1}\\)._ Proof.: Delete all of the intervals where the server is free and merge the corresponding ends (see Figure 2). Then in the new time scale, the processes all are structured as a stationary renewal process with the length of a period having the probability distribution function \\(G(x)\\). Therefore (3.2) follows. ### Residual service times in the original M/GI/I/\\(n\\) queueing system Figure 2. Residual service times for the original and scaled processes of the \\(M/GI/1/n\\) queueing system. In order to establish similar properties in the case \\(m=2\\) let us first study the properties of 1-stationary processes and explain the construction of _tagged server station_ which is substantially used in our construction throughout the paper. _Properties of 1-stationary processes._ Recall (see Definition 3.2) that if \\(\\xi(t)\\) is a 1-stationary process, then for any \\(h\\) and \\(t_{0}\\) the probability distributions of \\(\\xi(t_{0})\\) and \\(\\xi(t_{0}+h)\\) are the same. The result remains correct (due to the total probability formula) if \\(h\\) is replaced by random variable \\(\\vartheta\\) with some given probability distribution, which is assumed to be independent of the process \\(\\xi\\). 
Namely, we have: \\[\\begin{split}\\mathrm{P}\\{\\xi(t_{0}+\\vartheta)\\leq x\\}& =\\int_{-\\infty}^{\\infty}\\mathrm{P}\\{\\xi(t_{0}+h)\\leq x\\}\\mathrm{dP} \\{\\vartheta\\leq h\\}\\\\ &=\\mathrm{P}\\{\\xi(t_{0})\\leq x\\}\\int_{-\\infty}^{\\infty}\\mathrm{dP }\\{\\vartheta\\leq h\\}\\\\ &=\\mathrm{P}\\{\\xi(t_{0})\\leq x\\}.\\end{split} \\tag{3.3}\\] That is, \\(\\xi(t_{0})\\) and \\(\\xi(t_{0}+\\vartheta)\\) have the same distribution. The above property will be used for the following _construction of the sequence of 1-stationary processes_\\(\\xi^{(1)}(t)\\), \\(\\xi^{(2)}(t)\\), , having identical one-dimensional distributions. Let \\(\\xi^{(0)}(t)=\\xi(t)\\) be a 1-stationary process, let \\(t_{1}\\) be an arbitrary point, and let \\(\\vartheta_{1}\\) be a random variable with some given probability distribution, which is independent of the process \\(\\xi^{(0)}(t)\\). Let us build a new process \\(\\xi^{(1)}(t)\\) as follows. Put \\[\\xi^{(1)}(t)=\\begin{cases}\\xi^{(0)}(t),&\\text{for all }t<t_{1},\\\\ \\xi^{(0)}(t+\\vartheta_{1}),&\\text{for all }t\\geq t_{1}.\\end{cases} \\tag{3.4}\\] Since the probability distributions of \\(\\xi(t)\\) and \\(\\xi(t+\\vartheta_{1})\\) are the same for all \\(t\\geq t_{1}\\), then the processes \\(\\xi(t)\\) and \\(\\xi^{(1)}(t)\\) have the same one-dimensional distributions, and \\(\\xi^{(1)}(t)\\) is a 1-stationary process as well. With a new point \\(t_{2}\\) and a random variable \\(\\vartheta_{2}\\), which is assumed to be independent of the process \\(\\xi^{(0)}(t)\\) and random variable \\(\\vartheta_{1}\\) (therefore, it is also independent of the process \\(\\xi^{(1)}(t)\\)) by the same manner one can build the new 1-stationary process \\(\\xi^{(2)}(t)\\). Specifically, \\[\\xi^{(2)}(t)=\\begin{cases}\\xi^{(1)}(t),&\\text{for all }t<t_{2},\\\\ \\xi^{(1)}(t+\\vartheta_{2}),&\\text{for all }t\\geq t_{2}.\\end{cases} \\tag{3.5}\\] The new process \\(\\xi^{(2)}(t)\\) has the same one-dimensional distribution as the processes \\(\\xi^{(0)}(t)\\) and \\(\\xi^{(1)}(t)\\). The procedure can be infinitely continued, and one can obtain the infinite family of 1-stationary processes, having the same one-dimensional distribution. The points \\(t_{1}\\), \\(t_{2}\\), in the above construction are assumed to be some fixed (non-random) points. However, the construction also remains correct in the case of random points \\(t_{0}\\), \\(t_{1}\\), of Poisson process, since according to the PASTA property [36] the limiting stationary distribution of a 1-stationary process in a point of a Poisson arrival coincides with the limiting stationary distribution of the same 1-stationary process in an arbitrary non-random point. Furthermore, the aforementioned property of process remains correct when the random points \\(t_{0}\\), \\(t_{1}\\), are the points of the process which is not necessarily Poisson but belongs to the special class of processes that contains Poisson. In this case the property is called ASTA (e.g. [22]). _1-stationary Poisson process._ Consider an important particular case when the process \\(\\xi(t)\\) is Poisson. Let \\(\\xi^{(0)}(t)=\\xi(t)\\). Then the process \\(\\xi^{(1)}(t)\\) that obtained by (3.4) is no longer Poisson. Its limiting stationary distribution is the same as that of the original process \\(\\xi(t)\\), but the joint distributions of this process given in different points \\(s\\) and \\(t\\) distinguish from those of the original process \\(\\xi(t)\\). 
The process \\(\\xi^{(1)}(t)\\) will be called _1-stationary Poisson process_ or simply _1-Poisson_. Clearly, that the further processes such as \\(\\xi^{(2)}(t)\\), \\(\\xi^{(3)}(t)\\), that obtained similarly to the procedure in (3.4), (3.5) all are 1-Poisson with the same limiting stationary distribution. According to the above construction, a 1-Poisson process is obtained by deleting intervals and merging the ends of an original Poisson process. Therefore, a sequence of 1-Poisson arrival time instants is a scaled subsequence of those instants of the ordinary Poisson arrivals. Hence, for 1-Poisson process the ASTA property is satisfied, i.e. 1-Poisson arrivals see time averages exactly as those Poisson arrivals. _Tagged server station._ Consider a stationary queueing system \\(M/GI/m/n\\), which is referred to as _main server station_, and in addition to this queueing system introduce another one containing a server station in order to register specific arrivals, for example losses or, say, customers waiting their service in the main system. This server station is called _tagged server station_. The main idea of introducing tagged server stations is to decompose the main system as follows. Assume that along with a Poisson stream of arrivals of customers occupying servers in the main system, there is another stream of arrivals of customers in the tagged server system. For instance, the losses in the main system can be supposed to occupy the tagged server station. Although the stream of these losses is not Poisson (see e.g. [21], p. 83 or [20], p. 320), it is shown later that it is 1-Poisson. Therefore, the original system is decomposed into smaller systems with the same (1-Poisson) type of input stream. It is worth noting that only one dimensional distributions of 1-Poisson process are the same for all of them that generated similarly to the procedure in (3.4), (3.5). However, the two-dimensional distributions are distinct in general. In fact, applications of a tagged server station is wider than that, and its aim is a proper decomposition of the original system into the main and tagged systems for further study of the properties of losses. Another idea of using tagged server stations is a proper application of the ASTA property as follows. At the moment of arrival of a customer in the tagged server station, the stationary characteristics in the main server station remain the same. Specifically, the distributions of residual service times in servers of the main station at the moment of arrival of a customer in the tagged station coincides with the usual stationary distributions of these residual service times. Let us now formulate and prove a property similar to Property 3.3 for \\(m=2\\). We have the following. **Property 3.4**.: _For the \\(M/GI/2/0\\) queueing system we have:_ \\[\\mathrm{P}\\{\\widehat{\\mathbf{y}}_{1,0}(t)\\in B_{1}\\}=\\mathrm{P}\\{\\mathbf{x}_{ 1}(t)\\in B_{1}\\}, \\tag{3.6}\\]Proof.: In order to simplify the explanation in this case, let us consider two auxiliary stationary one-dimensional processes \\(\\zeta_{1,0}(t)\\) and \\(\\zeta_{2,0}(t)\\). The first process describes a residual service time in the first server, and the second one describes a residual service time in the second server. If the \\(i\\)th server (\\(i=1,2\\)) is free in time \\(t\\), then we set \\(\\zeta_{i,0}(t):=0\\). Our further convention is that the first server is a _tagged_ server. 
We assume that if at the moment of arrival of a customer both of the servers are free, he/she occupies the first server. Clearly that this assumption is not a loss of generality. For instance, if we assume that both of the servers are equivalent and can be occupied with the equal probability \\(\\frac{1}{2}\\), then an occupied server (let it be the first) can be called tagged. In another busy period start an arriving customer can occupy the second server. It this case, nothing is changed if the servers will be renumbered, and the occupied server will be numbered as first and called tagged. Our main idea is a decomposition of the stationary \\(M/GI/2/0\\) queueing system into two systems and study the properties of stationary (1-stationary) processes \\(\\zeta_{1,0}(t)\\) and \\(\\zeta_{2,0}(t)\\). The arrival stream to the tagged system is Poisson, so the first system is \\(M/GI/1/0\\), while the second one is denoted \\(\\bullet/GI/1/0\\), where \\(\\bullet\\) in the first place of the notation stands for the input process in the second system, which is the output (loss) stream in the first one. Clearly, that an arriving customer is arranged to the second queueing system if and only if at the moment of his/her arrival the tagged system is occupied. Therefore, let us delete all the intervals when the tagged system is empty and merge the ends. In this case, the tagged system becomes an ordinary renewal process, and the stream of arrivals to the second queueing system becomes 1-Poisson rather then Poisson (because after deleting intervals and merging the ends in the new time scale the original Poisson process is transformed into 1-Poisson). Therefore the second system now can be re-denoted by \\(\\widetilde{M}/GI/1/0\\), where \\(\\widetilde{M}\\) in the first place of the notation stands for 1-Poisson input and replaces the initially written symbol \\(\\bullet\\). Thus, the \\(M/GI/2/0\\) queueing system is decomposed into the \\(M/GI/1/0\\) and \\(\\widetilde{M}/GI/1/0\\) queueing systems. Clearly, that without loss of generality one can assume that the original arrival stream is 1-Poisson rather than Poisson, i.e. the original queueing system is \\(\\widetilde{M}/GI/2/0\\), and it is decomposed into two \\(\\widetilde{M}/GI/1/0\\) queueing systems. The last note is important for the further extension of the result for the systems \\(M/GI/m/0\\) (or generally \\(\\widetilde{M}/GI/m/0\\)) having \\(m>2\\) servers. Let \\(\\tau\\) be the time moment when an arriving customer occupies the tagged server station. According to the ASTA property, \\[\\mathrm{P}\\{\\zeta_{2,0}(\\tau)\\leq x\\}=\\mathrm{P}\\{\\zeta_{2,0}(t)\\leq x\\}, \\tag{3.7}\\] where \\(t\\) is an arbitrary fixed point, and the probability distribution function of \\(\\zeta_{2,0}(t)\\) in this point coincides with the distribution of residual service time in specific \\(\\widetilde{M}/GI/1/0\\) system with some specific value of parameter of 1-Poisson process, which is not important here. On the other hand, the process \\(\\zeta_{2,0}(t)\\) is stationary and Markov. Therefore from (3.7) for any \\(h>0\\) we have \\[\\mathrm{P}\\{\\zeta_{2,0}(\\tau+h)\\leq x\\}=\\mathrm{P}\\{\\zeta_{2,0}(t+h)\\leq x\\}= \\mathrm{P}\\{\\zeta_{2,0}(t)\\leq x\\}. \\tag{3.8}\\] Let \\(\\chi\\) denotes the service time of the customer, who arrives at the time moment \\(\\tau\\) occupying the tagged server station. 
Our challenge is to prove that \\[\\mathrm{P}\\{\\zeta_{2,0}(\\tau+\\chi)\\leq x\\}=\\mathrm{P}\\{\\zeta_{2,0}(t)\\leq x| \\zeta_{1,0}(t)>0\\}. \\tag{3.9}\\]Instead of the original processes \\(\\zeta_{i,0}(t)\\), \\(i=1,2\\), consider another processes \\(\\widetilde{\\zeta}_{i,0}(t)\\), which are obtained by deleting the intervals where the tagged server is free, and merging the ends. Then, \\(\\widetilde{\\zeta}_{1,0}(t)\\) is a renewal process, and the 1-stationary process \\(\\widetilde{\\zeta}_{2,0}(t)\\) and the random variable \\(\\chi\\) (the length of a service time in the tagged server that starts at moment \\(\\tau\\)) are independent. Hence, for any event \\(\\{\\chi=h\\}\\) according to the properties of 1-stationary processes we have \\[\\mathrm{P}\\{\\widetilde{\\zeta}_{2,0}(\\tau+\\chi)\\leq x|\\chi=h\\}=\\mathrm{P}\\{ \\widetilde{\\zeta}_{2,0}(\\tau)\\leq x\\}, \\tag{3.10}\\] and, due to the total probability formula from (3.10) we have \\[\\mathrm{P}\\{\\widetilde{\\zeta}_{2,0}(\\tau+\\chi)\\leq x\\}=\\mathrm{P}\\{ \\widetilde{\\zeta}_{2,0}(\\tau)\\leq x\\}. \\tag{3.11}\\] The only difference between (3.11) and the basic property (3.3) is that the time moment \\(\\tau\\) is random, while \\(t_{0}\\) is not. However keeping in mind (3.8), this modified equation (3.11) follows by the same derivation as in (3.3). Hence, from (3.11), \\[\\mathrm{P}\\{\\widetilde{\\zeta}_{2,0}(\\tau+\\chi)\\leq x\\}=\\mathrm{P}\\{ \\widetilde{\\zeta}_{2,0}(\\tau)\\leq x\\}=\\mathrm{P}\\{\\zeta_{2,0}(t)\\leq x|\\zeta_ {1,0}(t)>0\\}, \\tag{3.12}\\] and since \\(\\mathrm{P}\\{\\widetilde{\\zeta}_{2,0}(\\tau+\\chi)\\leq x\\}=\\mathrm{P}\\{\\zeta_{2, 0}(\\tau+\\chi)\\leq x\\}\\) we finally arrive at (3.9). As well, noticing that \\[\\mathrm{P}\\{\\widetilde{\\zeta}_{2,0}(\\tau)\\leq x\\}=\\mathrm{P}\\{\\zeta_{2,0}( \\tau)\\leq x\\},\\] from (3.11) and (3.7) we also have \\[\\mathrm{P}\\{\\zeta_{2,0}(\\tau+\\chi)\\leq x\\}=\\mathrm{P}\\{\\zeta_{2,0}(\\tau)\\leq x \\}=\\mathrm{P}\\{\\zeta_{2,0}(t)\\leq x\\}. \\tag{3.13}\\] Similarly to (3.13) one can prove \\[\\mathrm{P}\\{\\zeta_{1,0}(\\tau+\\chi)\\leq x\\}=\\mathrm{P}\\{\\zeta_{1,0}(\\tau)\\leq x \\}=\\mathrm{P}\\{\\zeta_{1,0}(t)\\leq x\\}, \\tag{3.14}\\] where \\(\\tau\\) is the moment of arrival of a customer, who at this moment \\(\\tau\\) occupies the second server, and \\(\\chi\\) denotes his/her service time. Relations (3.14) can be proved with the aid of the same construction of deleting intervals and merging the ends, but now in the second server. So, combining (3.13) and (3.14) we arrive at the following fact. _In any arrival or service completion time instant in one server_, _the residual service time in another server has the same stationary distribution_. This fact is used in the constructions below. Now consider the stationary \\(M/GI/2/0\\) queueing system, in which both servers are equivalent in the sense that if at the moment of arrival of a customer both servers are free, then a customer can occupy each of servers with the equal probability \\(\\frac{1}{2}\\). In this case the both of the processes \\(\\zeta_{1,0}(t)\\) and \\(\\zeta_{2,0}(t)\\) have the same distribution. Let us delete the time intervals where the both servers are simultaneously free, and merge the corresponding ends (see Figure 3). The new processes are denoted by \\(\\widetilde{\\zeta}_{1,0}(t)\\) and \\(\\widetilde{\\zeta}_{2,0}(t)\\), and both of them have the same equivalent distribution. 
(We use the same notation as in the construction above believing that it is not confusing for readers.) This two-dimensional 1-stationary process characterizes the system where in any time \\(t\\) at least one of two servers is busy. Consider the event of arrival of a customer in a stationary system at the moment when only one of two servers is busy. Let \\(\\tau^{*}\\) be the moment of this arrival, and let \\(\\tau^{**}\\) denote the moment of the first service completion in one of two servers following after the moment \\(\\tau^{*}\\). Then, at the endpoint \\(\\tau^{**}\\) of the interval \\([\\tau^{*},\\tau^{**})\\) the distribution of the residual service time will be the same as that at the moment \\(\\tau^{*}\\) (due to the established fact that at the end of a service completion in one server, the distribution of a residual service time in another server must coincide with the stationary distribution of a residual service time and due to the fact that both servers are equivalent.) The additional details here are as follows. There can be different events associated with the points \\(\\tau^{*}\\) and \\(\\tau^{**}\\). For example, at the moment \\(\\tau^{*}\\) an arriving customer can be accepted by one of the servers, while the service completion at the moment \\(\\tau^{**}\\) can be either in the same server of in another server. If time moments \\(\\tau^{*}\\) and \\(\\tau^{**}\\) are associated with the same server (for example, the moment of service start and service completion in the first server) then we speak about residual service times in another server (in this example - the second server). If time moments \\(\\tau^{*}\\) and \\(\\tau^{**}\\) are associated with different servers (say, \\(\\tau^{*}\\) is the service start in the first server, but \\(\\tau^{**}\\) is the service completion in the second one), then we speak about residual service times in different servers (in this specific case we speak about residual service time in the second server at the time moment \\(\\tau^{*}\\) and residual service time in the first server at the time moment \\(\\tau^{**}\\)). However, according to the earlier result, it does not matter which specific event of these mentioned occurs. The only fact, that the stationary distribution of a residual service time in a given server must be the same for all time moments of arrival and service completion occurring in another server and vice versa, is used. Deleting the interval \\([\\tau^{*},\\,\\tau^{**})\\) and merging the ends \\(\\tau^{*}\\) and \\(\\tau^{**}\\) (see Figure 4) we obtain the following structure of the 1-stationary process \\(\\widehat{\\mathbf{y}}_{1,0}(t)\\). In the points where idle intervals are deleted and the ends are merged we have renewal points: one of periods is finished and another is started. In the other points where the intervals of type \\([\\tau^{*},\\,\\tau^{**})\\) are deleted and their ends are merged we have the points of 'interrupted' renewal processes. In this 'interrupted' renewal process the point \\(\\tau^{*}\\) is a point of 1-Poisson arrival, and, according to ASTA, the distribution in this point in the server that continue to serve a customer coincides with the stationary distribution of the residual service time. In the other point \\(\\tau^{**}\\), which is the point of a service completion, the distribution in this point in the server that continue to serve a customer coincides with the stationary distribution of a residual service time as well. 
Therefore, at the point of interruption (which is a point of discontinuity) the residual service time distribution coincides with the stationary distribution of a residual service time, i.e. with the distribution of \\(\\mathbf{x}_{1}(t)\\). (Notice that the intervals of type \\([\\tau^{*},\\tau^{**})\\) are an analogue of the intervals \\([s_{1,k},t_{1,k})\\) considered in the Markovian case in Section 2.) By amalgamating the residual service times of the first and second servers given in the lower graph in Figure 4, one can build a typical one-dimensional 1-stationary process \\(\\widehat{\\mathbf{y}}_{1,0}(t)\\), the limiting stationary distribution of which coincides with that of \\(\\mathbf{x}_{1}(t)\\) (see Figure 5). Therefore the processes \\(\\widehat{\\mathbf{y}}_{1,0}(t)\\) and \\(\\mathbf{x}_{1}(t)\\) have identical one-dimensional distributions, and relation (3.6) follows. Let us now extend Property 3.4 to the case \\(m=3\\) and then to the case of an arbitrary \\(m>1\\) for the \\(M/GI/m/0\\) queueing systems. Namely, we have the following.

**Property 3.5**.: _For the \\(M/GI/m/0\\) queueing system we have:_ \\[\\mathrm{P}\\{\\widehat{\\mathbf{y}}_{m-1,0}(t)\\in B_{m-1}\\}=\\mathrm{P}\\{\\mathbf{x}_{m-1}(t)\\in B_{m-1}\\}, \\tag{3.15}\\] _where \\(B_{m-1}\\) is an arbitrary Borel set of \\(\\mathbb{R}^{m-1}\\)._

Figure 3. Residual service times for the original and scaled processes of the \\(M/GI/2/0\\) queueing system after deleting the intervals where both servers are free, and merging the ends.

Figure 4. Residual service times for the original and scaled processes of the \\(M/GI/2/0\\) queueing system after deleting the intervals where both servers are free and where both are busy, and merging the ends.

Figure 5. A typical 1-stationary process of residual service times obtained by amalgamating the residual service times of the first and second servers; \\(\\tau_{1}^{*}\\) and \\(\\tau_{2}^{*}\\) are the points where the intervals of type \\([\\tau^{*},\\tau^{**})\\) are deleted and the ends are merged.

Proof.: The proof concentrates on the case \\(m=3\\) and the 1-stationary process \\(\\widehat{\\mathbf{y}}_{2,0}(t)\\), which is associated with the paths of the \\(M/GI/3/0\\) queueing system on which only two servers are busy. The result is then concluded for an arbitrary \\(m\\geq 2\\) by induction. Before studying this case, we first study the specific case of the \\(M/GI/2/0\\) queueing system by considering the paths on which both servers are busy. Then, using the arguments of the proof of Property 3.4, we extend that specific result for the \\(M/GI/2/0\\) queueing system to the 1-stationary process \\(\\widehat{\\mathbf{y}}_{2,0}(t)\\) of the \\(M/GI/3/0\\) queueing system. As in the proof of Property 3.4, in the specific case of the \\(M/GI/2/0\\) queueing system considered here we study the stationary one-dimensional processes \\(\\zeta_{1,0}(t)\\) and \\(\\zeta_{2,0}(t)\\). However, the idea of the present proof differs in general from that of the proof of Property 3.4. Here we do not call the first (or second) server a tagged server station in order to use a decomposition.
We simply use the fact, established in the proof of Property 3.4, that at the moment of an arrival or a service completion in one server, the distribution of the residual service time in the other server coincides with the stationary distribution of the residual service time in that server. (The same idea has been used in the proof of Property 3.4.) The present proof explicitly uses the fact, mentioned before, that the class of 1-stationary processes is algebraically closed with respect to the operations of deleting intervals and merging the ends. Let us delete the idle intervals of the process \\(\\zeta_{1,0}(t)\\) and merge the ends. Then we obtain a stationary renewal process, as in the above case \\(m=1\\) (Property 3.3). After deleting the same time intervals in the second stationary process \\(\\zeta_{2,0}(t)\\) and merging the ends, that process is transformed as follows. Let \\(t^{*}\\) be a moment of 1-Poisson arrival at which a customer occupies the first server. (Recall that, owing to the known properties of the 1-Poisson process, the stream of arrivals to each of the servers \\(i=1,2\\) is 1-Poisson.) Then, according to the ASTA property, \\(\\zeta_{2,0}(t^{*})=\\zeta_{2,0}(t)\\) in distribution. Therefore, after deleting all these idle intervals and merging the ends, i.e. after the first time scaling (removing the corresponding time intervals, see Figure 6), instead of the initial 1-stationary process \\(\\zeta_{2,0}(t)\\) we obtain a new 1-stationary process with the same one-dimensional distribution. This process is denoted by \\(\\widetilde{\\zeta}_{2,0}(t)\\). Notice that the process \\(\\widetilde{\\zeta}_{2,0}(t)\\) is obtained from the process \\(\\zeta_{2,0}(t)\\) by constructing a sequence of 1-stationary processes as described above. Then we have a two-dimensional process whose first component is \\(\\mathbf{x}_{1}(t)\\) and whose second component is \\(\\widetilde{\\zeta}_{2,0}(t)\\). For convenience this first component is provided with an upper index, and the two-dimensional vector now reads \\(\\left\\{\\mathbf{x}_{1}^{(1)}(t),\\widetilde{\\zeta}_{2,0}(t)\\right\\}\\). Let us repeat the above procedure, deleting the remaining idle intervals of the second server and merging the ends. We obtain a 1-stationary process equivalent in distribution to the stationary renewal process \\(\\mathbf{x}_{1}(t)\\), which is now denoted \\(\\mathbf{x}_{1}^{(2)}(t)\\). Upon this (final) time scaling the first process \\(\\mathbf{x}_{1}^{(1)}(t)\\) is transformed as follows. Let \\(t^{**}\\) be a random point of 1-Poisson arrival at which the second server is occupied. Applying the ASTA property once again, for the first component of the process we obtain that \\(\\mathbf{x}_{1}^{(1)}(t^{**})\\) coincides in one-dimensional distribution with \\(\\mathbf{x}_{1}^{(1)}(t)\\). Thus, after deleting all the idle intervals and merging the ends, we finally obtain the two-dimensional process \\(\\left\\{\\mathbf{x}_{1}^{(1)}(t),\\mathbf{x}_{1}^{(2)}(t)\\right\\}\\), each component of which has the same one-dimensional distribution as that of the process \\(\\mathbf{x}_{1}(t)\\). The dynamics of this time scaling is shown in Figure 7. For our further purpose, the independence of the processes \\(\\mathbf{x}_{1}^{(1)}(t)\\) and \\(\\mathbf{x}_{1}^{(2)}(t)\\) is needed. The constructions in this paper enable us to prove this independence.
However, the independence of \\(\\mathbf{x}_{1}^{(1)}(t)\\) and \\(\\mathbf{x}_{1}^{(2)}(t)\\) follows automatically from the known results by Takacs [31] and the easiest way is to follow a result of that paper. Namely, it follows from formulae (6) and (7) on page 72, that the joint conditional stationary distribution of residual service times given that \\(k\\) servers are busy coincides with the stationary distribution of \\(\\mathbf{x}_{k}(t)\\), which in turn is the product of the stationary distributions of \\(\\mathbf{x}_{1}(t)\\). In particular, \\[\\mathrm{P}\\left\\{\\mathbf{x}_{1}^{(1)}(t)\\leq x_{1},\\ \\mathbf{x}_{1}^{(2)}(t) \\leq x_{2}\\right\\}=\\mathrm{P}\\left\\{\\mathbf{x}_{1}(t)\\leq x_{1}\\right\\} \\mathrm{P}\\left\\{\\mathbf{x}_{1}(t)\\leq x_{2}\\right\\}. \\tag{3.16}\\] Now, using the arguments of the proof of Property 3.4 one can easily extend the result obtained now for \\(M/GI/2/0\\) queueing system to the \\(M/GI/3/0\\) queueing system, and thus prove (3.15) for the \\(M/GI/3/0\\) queueing system. Similarly to the proof of Property 3.4, let us introduce the processes \\(\\zeta_{1,0}(t)\\), \\(\\zeta_{2,0}(t)\\) and \\(\\zeta_{3,0}(t)\\) of the residual service times in the first, second and third servers correspondingly. These processes all are assumed to have the same stationary distribution of residual times, which respects to the scheme where an arriving customer occupies one of available free servers with equal probability. Let us call a server of the \\(M/GI/3/0\\) queueing system that occupied at the moment of busy period start a tagged server station. So, we decompose the original system into the \\(\\widetilde{M}/GI/2/0\\) and tagged queueing system \\(\\widetilde{M}/GI/1/0\\). However, it is shown above that \\(\\widetilde{M}/GI/2/0\\) can be decomposed into two \\(\\widetilde{M}/GI/1/0\\) queueing systems, where after the procedure of deleting idle intervals and merging the ends we obtain the process having the same stationary distribution as that of the process \\(\\mathbf{x}_{2}(t)\\). This stationary distribution remains the same in all random points of arrivals and service completions in the tagged service station. So, after deleting intervals and merging the ends in the tagged service station, in a new time scaling we arrive at the same stationary distribution as that of the process \\(\\mathbf{x}_{3}(t)\\). So, the result for \\(m=3\\) follows. This induction becomes clear for an arbitrary \\(m\\geq 2\\) as well, where the original \\(\\widetilde{M}/GI/m/0\\) system can be decomposed into the \\(\\widetilde{M}/GI/m-1/0\\) queueing system and a tagged server station \\(\\widetilde{M}/GI/1/0\\). Now we will establish a connection between the processes \\(\\widehat{\\mathbf{y}}_{m-1,n}(t)\\), \\(\\widehat{\\mathbf{y}}_{m-1,0}(t)\\) and \\(\\mathbf{x}_{m-1}(t)\\). We start from the case \\(m=2\\). **Property 3.6**.: _Under the assumption that the probability distribution function \\(G(x)\\) belongs to the class NBU (NWU) we have_ \\[\\mathrm{P}\\{\\widehat{\\mathbf{y}}_{1,n}(t)\\leq x\\}\\leq\\ (\\text{resp.}\\geq)\\ \\mathrm{P}\\{\\mathbf{x}_{1}(t)\\leq x\\}. \\tag{3.17}\\] Proof.: Along with the 1-stationary processes \\(\\widehat{\\mathbf{y}}_{1,n}(t)\\), let us introduce another 1-stationary processes \\(\\widehat{\\mathbf{Y}}_{2,n}(t)\\). 
This last process is related to the same \\(\\widetilde{M}/GI/2/n\\) queueing system as the process \\(\\widehat{\\mathbf{y}}_{1,n}(t)\\), and is obtained by deleting the time intervals during which there are more than two or fewer than two customers in the system, and merging the ends.

Figure 7. The processes \\(\\mathbf{x}_{i}(t)\\) and \\(\\widetilde{\\zeta}_{\\,2,0}(t)\\): the dynamics of the time scaling for a queueing system with two servers after deleting the idle intervals in the second server and merging the ends.

Using the same arguments as in the proof of Property 3.5 one can prove that the components of this process are generated by independent 1-stationary processes and have the same distribution. Indeed, involving, as earlier in the proof of Property 3.4, the processes \\(\\zeta_{1}(t)\\) and \\(\\zeta_{2}(t)\\) having the same distribution, one can delete the intervals where the system is empty and merge the ends. Apparently, the new processes \\(\\widetilde{\\zeta}_{1}(t)\\) and \\(\\widetilde{\\zeta}_{2}(t)\\) obtained after this procedure have the same stationary distribution. (However, it is shown later that the limiting stationary distribution of these one-dimensional processes differs from the distribution obtained after the similar procedure for the \\(\\widetilde{M}/GI/2/0\\) system, and therefore their one-dimensional distribution differs from that of the process \\({\\bf x}_{1}(t)\\).) Let us go back to the initial process \\({\\bf y}_{2,n}(t)\\), \\(n\\geq 1\\), delete the time intervals where both servers are free, and merge the corresponding ends. We also remove the last component, corresponding to the queue length \\(Q_{2,n}(t)\\). (The exact value of the queue length \\(Q_{2,n}(t)\\) is irrelevant here and is not used in our analysis.) In the new time scale we obtain the two-dimensional process \\(\\widetilde{\\bf y}_{2,n}(t)\\). Similarly to the proof of relation (3.6) we have time moments \\(\\tau^{*}\\) and \\(\\tau^{**}\\). The first of them is a moment of arrival of a customer at the system with one busy and one free server, and the second one is the first moment of service completion after \\(\\tau^{*}\\) at which only one busy server remains. The time interval \\([\\tau^{*},\\,\\tau^{**})\\) is an _orbital busy period_. (The concept of an orbital busy period is defined in Section 2 for Markovian systems. For \\(M/GI/m/n\\) queueing systems the concept is the same.) It can contain queueing customers waiting for their service. Let \\(t^{\\rm begin}\\) be a moment of arrival of a customer during the orbital busy period \\([\\tau^{*},\\,\\tau^{**})\\) who occupies a waiting place, and let \\(t^{\\rm end}\\) be the first moment after \\(t^{\\rm begin}\\) at which, following a service completion, the queue space becomes empty again. A period of time \\([t^{\\rm begin},\\,t^{\\rm end})\\) is called a _queueing period_. (Note that for the \\(M/GI/m/n\\) queueing system, the intervals of type \\([\\tau^{*},\\,\\tau^{**})\\) are an analogue of the intervals of type \\([s_{m-1,k},t_{m-1,k})\\) in the Markovian queueing system \\(M/M/m/n\\), and the intervals of type \\([t^{\\rm begin},\\,t^{\\rm end})\\) are an analogue of the intervals of type \\([s_{m,k},t_{m,k})\\) in that Markovian queueing system.) All customers of _queueing periods_, i.e. those arriving during an orbital busy period, can be considered as customers arriving at a tagged server station.
At the moment of \\(t^{\\rm begin}\\), which is an instant of a Poisson arrival, the two-dimensional distribution of the random vector \\(\\widetilde{\\bf y}_{2,n}(t^{\\rm begin})\\) coincides with the stationary distribution of the random vector \\(\\widehat{\\bf Y}_{2,n}(t)\\). However, in the point \\(t^{\\rm end}\\), the probability distribution of \\(\\widetilde{\\bf y}_{2,n}(t^{\\rm end})\\) is different from the stationary distribution of \\(\\widehat{\\bf Y}_{2,n}(t)\\), because this specific time instant \\(t^{\\rm end}\\) coincides with a service beginning in one of servers of the main system. Therefore, deleting the interval \\([t^{\\rm begin},t^{\\rm end})\\) and merging the end leads to the change of the distribution. More specifically, at the time instant \\(t^{\\rm end}\\) one of the components of the vector \\(\\widehat{\\bf Y}_{2,n}(t)\\), say the first one, is a random variable having the probability distribution \\(G(x)\\). Then, another component, i.e. the second one, because of the aforementioned properties of \\(1\\)-stationary processes, coincides in distribution with \\(\\widetilde{\\zeta}_{1,n}\\) (or \\(\\widetilde{\\zeta}_{2,n}\\)), which is a component of the stationary process \\(\\widehat{\\bf Y}_{2,n}(t)\\). Indeed, let customers arriving in a busy system and waiting in the queue be assigned to the tagged server station. At the moment of \\(1\\)-Poisson arrival \\(t^{\\rm begin}\\) of a customer in the tagged server station, the two-dimensional Markov process associated with the main queueingsystem has the same distribution as the vector \\(\\widehat{\\mathbf{Y}}_{2,n}(t)\\), i.e. two-dimensional distribution coinciding with the joint distribution of \\(\\widetilde{\\zeta}_{1,n}\\) and \\(\\widetilde{\\zeta}_{2,n}\\). Then, at the moment of the service completion \\(t^{\\mathrm{end}}\\), which coincides with the moment of service completion in one of two servers, the probability distribution function of the residual service time in another server, where the service is being continued, coincides with the distribution of a component of the vector \\(\\widehat{\\mathbf{Y}}_{2,n}(t)\\), i.e. with the distribution of \\(\\widetilde{\\zeta}_{1,n}\\). If the probability distribution function \\(G(x)\\) belongs to the class NBU, then the 1-stationary process \\(\\widetilde{\\mathbf{y}}_{2,n}(t)\\) satisfies the property \\(\\widetilde{\\mathbf{y}}_{2,n}(t^{\\mathrm{begin}})\\leq_{st}\\widetilde{\\mathbf{ y}}_{2,n}(t^{\\mathrm{end}})\\). If \\(G(x)\\) belongs to the class NWU, then the opposite inequality holds: \\(\\widetilde{\\mathbf{y}}_{2,n}(t^{\\mathrm{end}})\\leq_{st}\\widetilde{\\mathbf{y} }_{2,n}(t^{\\mathrm{begin}})\\). (The stochastic inequality between vectors means the stochastic inequality between their corresponding components.) The above stochastic inequalities are between random values of the process \\(\\widetilde{\\mathbf{y}}_{2,n}(t)\\) in the different time instants \\(t^{\\mathrm{begin}}\\) and \\(t^{\\mathrm{end}}\\). Our further task is to compare two different processes \\(\\widetilde{\\mathbf{y}}_{2,n}(t)\\) and \\(\\widetilde{\\mathbf{y}}_{2,0}(t)\\). The first of these processes is associated with the \\(M/GI/m/n\\) queueing system, while the second one is associated with the \\(M/GI/m/0\\) queueing system. The idea of comparison is very simple. Suppose that both queueing system are started at zero, i.e. 
consider the paths of these system when the both of them are not in steady state, and compare the Markov processes associated with these system. For the non-stationary processes we will use the same notation \\(\\widetilde{\\mathbf{y}}_{2,n}(t)\\) and \\(\\widetilde{\\mathbf{y}}_{2,0}(t)\\) understanding that it is spoken about usual (not stationary) Markov processes. The notation for time moments such as \\(t^{\\mathrm{begin}}\\) and \\(t^{\\mathrm{end}}\\) is now associated with these usual (i.e. non-stationary) processes as well. We will consider the Markov processes associated with \\(M/GI/2/n\\) and \\(M/GI/2/0\\) queueing systems on the same probability space. In the time interval \\([0,t^{\\mathrm{begin}})\\) the paths of the Markov processes \\(\\widetilde{\\mathbf{y}}_{2,n}(t)\\) and \\(\\widetilde{\\mathbf{y}}_{2,0}(t)\\) coincide (\\(n\ eq 0\\)). However, after deleting the interval \\([t^{\\mathrm{begin}},t^{\\mathrm{end}})\\) and merging the ends, then in the end point \\(t^{\\mathrm{begin}}\\) the values of the processes \\(\\widetilde{\\mathbf{y}}_{2,n}(t)\\) and \\(\\widetilde{\\mathbf{y}}_{2,0}(t)\\) will be different. Indeed, in the case of the process \\(\\widetilde{\\mathbf{y}}_{2,0}(t)\\), which is associated with the \\(M/GI/m/0\\) queueing system, \\(t^{\\mathrm{begin}}\\) and \\(t^{\\mathrm{end}}\\) is the same point, and the value of Markov processes will be the same after replacing the points \\(t^{\\mathrm{begin}}\\) with \\(t^{\\mathrm{end}}\\). However, in the case of the process \\(\\widetilde{\\mathbf{y}}_{2,n}(t)\\) associated with \\(M/GI/m/n\\) queueing system, the values in these points will be different with probability 1, and consequently, because of the inequality \\(\\widetilde{\\mathbf{y}}_{2,n}(t^{\\mathrm{begin}})\\leq_{st}\\widetilde{\\mathbf{ y}}_{2,n}(t^{\\mathrm{end}})\\) we have \\(\\widetilde{\\mathbf{y}}_{2,0}(t^{\\mathrm{begin}})\\leq_{st}\\widetilde{\\mathbf{y }}_{2,n}(t^{\\mathrm{begin}})\\) in the case when \\(G(x)\\) belongs to the class NBU. If \\(G(x)\\) belongs to the class NWU, we have the opposite inequality: \\(\\widetilde{\\mathbf{y}}_{2,n}(t^{\\mathrm{begin}})\\leq_{st}\\widetilde{\\mathbf{ y}}_{2,0}(t^{\\mathrm{begin}})\\). Therefore, after deleting all the intervals of the type \\([t^{\\mathrm{begin}},t^{\\mathrm{end}})\\) from the original Markov process we obtain new Markov process, and in the case when \\(G(x)\\) belongs either to the class NBU or to the class NWU one can apply the theorem of Kalmykov [17] (see also [19]) to compare these two Markov processes. In the case where \\(G(x)\\) belongs to the class NBU, all the path of the Markov process, associated with \\(M/GI/2/n\\) is not smaller (in stochastic sense) than that path of the Markov process, associated with \\(M/GI/2/0\\). If \\(G(x)\\) belongs to the class NWU, then the opposite stochastic inequality holds between two different Markov processes. Apparently, the same stochastic inequalities remain correct if we speak about stationary Markov processes. Nothing is changed if we let \\(t\\) to increase to infinity and arrive at stationary distributions. So, under the assumption that \\(G(x)\\) belongs to the class NBU, for the stationary processes we obtain \\(\\widetilde{\\mathbf{y}}_{1,0}(t)\\leq_{st}\\widetilde{\\mathbf{y}}_{1,n}(t)\\). In other words, due to the fact that \\(\\widehat{\\mathbf{y}}_{1,0}(t)=_{st}\\mathbf{x}_{1}(t)\\), we obtain that \\(\\mathbf{x}_{1}(t)\\leq_{st}\\widehat{\\mathbf{y}}_{1,n}(t)\\). 
In the case where \\(G(x)\\) belongs to the class NWU, the opposite inequality holds. The arguments of the proof given for \\(m=2\\) remain correct for an arbitrary \\(m\\geq 2\\). The proof given by induction uses decomposition of the original system into the main system and a tagged server station as above. The further arguments for stochastic comparison of Markov processes are also easily extended for the case of an arbitrary \\(m\\geq 2\\). From the above results for the Markov processes the statement of the lemma follows. The stochastic inequalities between \\(T_{m,n}(m-1)\\) and \\(T_{m,0}(m-1)\\) follow by the coupling arguments. The lemma is completely proved. ## 4. Theorems on losses in \\(M/GI/m/n\\) queueing systems The results obtained in the previous section enable us to establish theorems for the number of losses in \\(M/GI/m/n\\) queueing systems during their busy periods. **Theorem 4.1**.: _Under the assumption \\(\\lambda=m\\mu\\), the expected number of losses during a busy period of the \\(M/GI/m/n\\) queueing system is the same for all \\(n\\geq 1\\)._ Proof.: Consider the system \\(M/GI/m/n\\) under the assumption \\(\\lambda=m\\mu\\), and similarly to the construction in the proof of Lemma 3.1 let us delete all the intervals where the number of customers in the system is less than \\(m\\), and merge the corresponding ends. The process obtained is denoted \\(\\widehat{\\mathbf{y}}_{m,n}(t)\\). This is the \\(1\\)-stationary process of orbital busy periods. The stationary departure process, together with the arrival \\(1\\)-Poisson process of rate \\(\\lambda\\) and the number of waiting places \\(n\\) describes the stationary \\(M/G/1/n\\) queue-length process (with generally dependent service times). As soon as a busy period is finished (in our case it is an orbital busy period, see Section 2 for the definition), the system immediately starts a new busy period by attaching a new customer into the system. This unusual situation arises because of the construction of the process. There are no idle periods, and servers all are continuously busy. Thus, the busy period, which is considered here, is one of the busy periods attached one after another. Let \\(T\\) be a large period of time, and during that time there are \\(K(T)\\) busy periods of the \\(M/G/1/n\\) queueing system (which does not contain idle times as mentioned). Let \\(L(T)\\) and \\(\ u(T)\\) denote the number of lost and served customers during time \\(T\\). We have the formula: \\[\\lim_{T\\to\\infty}\\frac{1}{\\mathrm{E}K(T)}\\Big{(}\\mathrm{E}L(T)+\\mathrm{E}\ u (T)\\Big{)}=\\lim_{T\\to\\infty}\\frac{1}{\\mathrm{E}K(T)}\\Big{(}\\lambda T+\\mathrm{ E}K(T)\\Big{)}, \\tag{4.1}\\] the proof of which is given below. Relationship (4.1) has the following explanation. The left-hand side term \\(\\mathrm{E}L(T)+\\mathrm{E}\ u(T)\\) is the expectation of the number of lost customers plus the expectation of the number of served customers during time \\(T\\), and the right-hand side term \\(\\lambda T+\\mathrm{E}K(T)\\) is the expectation of the number of arrivals during time \\(T\\) plus the expected number of attached customers. Relationship (4.1) can be proved by renewal arguments as follows. There are \\(m\\) independent copies \\(\\mathbf{x}_{1}^{(1)}(t)\\), \\(\\mathbf{x}_{1}^{(2)}(t)\\), , \\(\\mathbf{x}_{1}^{(m)}(t)\\) of the stationary renewal process, which model the process \\(\\widehat{\\mathbf{y}}_{m,n}(t)\\). 
(In fact, we have \\(m\\) 1-stationary processes, which have the same distributions as \\(m\\) independent renewal processes \\(\\mathbf{x}_{1}^{(1)}(t)\\), \\(\\mathbf{x}_{1}^{(2)}(t)\\), \\(\\ldots\\), \\(\\mathbf{x}_{1}^{(m)}(t)\\).) Let \\(1\\leq i\\leq m\\), and let \\(C_{1}\\), \\(C_{2}\\), \\(\\ldots\\), \\(C_{K_{i}(T)}\\) be the points of busy period starts associated with the renewal process \\(\\mathbf{x}_{1}^{(i)}(t)\\) (one of those \\(m\\) independent and identically distributed renewal processes), where \\(K_{i}(T)\\) denotes the total number of these regeneration points indexed by \\(i\\). Denote also by \\(z_{1}\\), \\(z_{2}\\), \\(\\ldots\\), \\(z_{K_{i}(T)}\\) the corresponding lengths of busy periods, by \\(\\ell_{1}\\), \\(\\ell_{2}\\), \\(\\ldots\\), \\(\\ell_{K_{i}(T)}\\) the corresponding numbers of losses during these \\(K_{i}\\) busy periods, and by \\(n_{1}\\), \\(n_{2}\\), \\(\\ldots\\), \\(n_{K_{i}(T)}\\) the corresponding numbers of served customers during these busy periods. Let \\(T_{i}=z_{1}+z_{2}+\\ldots+z_{K_{i}(T)}\\), let \\(L_{i}(T)=\\ell_{1}+\\ell_{2}+\\ldots+\\ell_{K_{i}(T)}\\) and let \\(\\nu_{i}(T)=n_{1}+n_{2}+\\ldots+n_{K_{i}(T)}\\). Since at the moments \\(C_{1}\\), \\(C_{2}\\), \\(\\ldots\\), \\(C_{K_{i}(T)}\\) of the busy period starts the distribution of the above stationary Markov process of residual times is the same, the numbers of losses \\(\\ell_{1}\\), \\(\\ell_{2}\\), \\(\\ldots\\), \\(\\ell_{K_{i}(T)}\\) and, respectively, the numbers of served customers \\(n_{1}\\), \\(n_{2}\\), \\(\\ldots\\), \\(n_{K_{i}(T)}\\) during each of these busy periods have the same distributions, and one can apply the renewal reward theorem. By the renewal reward theorem we have: \\[\\lim_{T\\to\\infty}\\frac{1}{m\\mathrm{E}K_{i}(T)}\\Big{(}\\mathrm{E}L_{i}(T)+ \\mathrm{E}\\nu_{i}(T)\\Big{)}=\\lim_{T\\to\\infty}\\frac{1}{m\\mathrm{E}K_{i}(T)} \\Big{(}\\lambda\\mathrm{E}T_{i}+m\\mathrm{E}K_{i}(T)\\Big{)}. \\tag{4.2}\\] Taking into account that \\[\\lim_{T\\to\\infty}\\frac{\\mathrm{E}K(T)}{\\mathrm{E}K_{i}(T)}=m,\\] \\[\\lim_{T\\to\\infty}\\frac{\\mathrm{E}L(T)}{\\mathrm{E}L_{i}(T)}=1,\\] \\[\\lim_{T\\to\\infty}\\frac{\\mathrm{E}\\nu(T)}{\\mathrm{E}\\nu_{i}(T)}=1,\\] and \\[\\lim_{T\\to\\infty}\\frac{\\mathrm{E}T_{i}}{T}=1,\\] because of the correspondence between the left- and right-hand sides, from (4.2) we arrive at (4.1). Thus, we bypass the fact that the times between departures are dependent, and so (4.1) is actually obtained by applying the renewal reward theorem in the usual manner, as in the case of independent times between departures (see e.g. Ross [28], Karlin and Taylor [18]). Together with (4.1) we have \\[\\lim_{T\\to\\infty}\\frac{1}{\\mathrm{E}K(T)}\\mathrm{E}\\nu(T)=\\lim_{T\\to\\infty} \\frac{1}{\\mathrm{E}K(T)}m\\mu T. \\tag{4.3}\\] Let us now introduce the following notation. Let \\(\\zeta_{n}\\) denote the length of an orbital busy period, and let \\(L_{n}\\) and \\(\\nu_{n}\\) denote, respectively, the numbers of lost and served customers during that orbital busy period. Using the arguments of [5], we prove that \\(\\mathrm{E}L_{n}=1\\) for all \\(n\\geq 1\\). Indeed, from (4.1) and (4.3) we have the equations: \\[\\mathrm{E}L_{n}+\\mathrm{E}\\nu_{n}=\\lambda\\mathrm{E}\\zeta_{n}+1, \\tag{4.4}\\] \\[\\mathrm{E}\\nu_{n}=m\\mu\\mathrm{E}\\zeta_{n}. \\tag{4.5}\\] Substituting \\(\\lambda=m\\mu\\) into the system of equations (4.4) and (4.5) yields \\(\\mathrm{E}L_{n}=1\\). Hence, during an orbital busy period exactly one customer is lost on average, for any \\(n\\geq 0\\); a small simulation sketch checking the related \\(n\\)-independence of the expected number of losses per busy period in the Markovian case is given below. To complete the proof, a deeper analysis is needed.
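The statement of Theorem 4.1, together with the value \\(m^{m}/m!\\) recalled for the Markovian system in Section 2, lends itself to a direct numerical check. The following Python sketch is our own illustration with hypothetical parameter values, not part of the proof: it simulates the \\(M/M/m/n\\) queue under \\(\\lambda=m\\mu\\) and estimates the mean number of losses per busy period for several values of \\(n\\); the estimates are close to \\(m^{m}/m!\\) and essentially independent of \\(n\\).

```python
import random
from math import factorial

def mean_losses_per_busy_period(m, n, mu=1.0, periods=50_000, seed=0):
    """Estimate E[losses per busy period] in an M/M/m/n queue with lambda = m*mu.

    A busy period starts when a customer arrives at an empty system and ends
    when the system empties again; arrivals finding m+n customers are lost.
    """
    rng = random.Random(seed)
    lam = m * mu
    total_losses = 0
    for _ in range(periods):
        k = 1                       # the arriving customer opens the busy period
        losses = 0
        while k > 0:
            rate_arr = lam
            rate_dep = min(k, m) * mu
            # next event is an arrival with probability rate_arr/(rate_arr+rate_dep)
            if rng.random() < rate_arr / (rate_arr + rate_dep):
                if k < m + n:
                    k += 1
                else:
                    losses += 1     # all servers busy and the queue is full
            else:
                k -= 1
        total_losses += losses
    return total_losses / periods

m = 3
print("m^m/m! =", m**m / factorial(m))
for n in (1, 2, 5, 10):
    est = mean_losses_per_busy_period(m, n)
    print(f"n = {n:2d}: estimated mean losses per busy period = {est:.3f}")
```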
First, we should find the expected number of queueing periods during one orbital busy period. For this purpose, one can use the similar construction by deleting all the intervals when the number of customers in the system is not greater than \\(m\\), and merge the corresponding ends. The obtained process is denoted \\(\\widehat{\\mathbf{y}}_{m+1,n}(t)\\), and this is one stationary process of queueing periods following one after another. The structure of the process \\(\\widehat{\\mathbf{y}}_{m+1,n}(t)\\) is similar to that of the process \\(\\widehat{\\mathbf{y}}_{m,n}(t)\\). The process \\(\\widehat{\\mathbf{y}}_{m+1,n}(t)\\) describes the stationary \\(M/G/1/n-1\\) queueing system, the service times of which are generally dependent. As soon as one busy period in this system is finished, a new customer starting a new busy period is immediately attached into the system. Thus, the only difference between the processes \\(\\widehat{\\mathbf{y}}_{m,n}(t)\\) and \\(\\widehat{\\mathbf{y}}_{m+1,n}(t)\\) is that the numbers of waiting places differ by value of parameter \\(n\\). Therefore, using the similar notation and arguments, one arrive at the conclusion that the expected number of losses per queueing period is equal to 1 as well. Therefore, in long-run period of time, the number of queueing periods and orbital busy periods is the same in average. So, there is exactly one queueing period per orbital busy period in average. Therefore, during a long-run period the number of events that the different Markov processes \\(\\widehat{\\mathbf{y}}_{m-1,n}(t)\\) change their values after deleting queueing periods and merging the ends (as exactly explained in the proof of Lemma 3.1) is the same in average for all \\(n\\geq 1\\), and the stationary characteristics of all of these Markov processes \\(\\widehat{\\mathbf{y}}_{m-1,n}(t)\\), given for different values \\(n\\)=1,2, , are the same. Hence, the expectation \\(\\mathrm{E}T_{m,n}(m-1)\\) is the same for all \\(n\\)=1,2, as well. (Recall that \\(T_{m,n}(m-1)\\) denote the total time during a busy period when \\(m-1\\) servers are occupied.) Hence, using Wald's identity connecting \\(\\mathrm{E}T_{m,n}(m-1)\\) with \\(\\mathrm{E}L_{m,n}\\) (the expected number of losses during a busy period) we arrive at the desired result, since \\(\\mathrm{E}T_{m,n}(m-1)\\) and the expectation of the number of orbital busy periods during a busy period of \\(M/GI/m/n\\) both are the same for all \\(n\\geq 1\\). Application of Lemma 3.1 and the arguments of Theorem 4.1 enables us to prove the following result. **Theorem 4.2**.: _Let \\(\\lambda=m\\mu\\). Then, under the assumption that \\(G(x)\\) belongs to the class NBU, for the number of losses in \\(M/GI/m/n\\) queueing systems, \\(n\\geq 1\\), we have_ \\[\\mathrm{E}L_{m,n}=\\frac{cm^{m}}{m!}, \\tag{4.6}\\] _where the constant \\(c\\geq 1\\) depends on \\(m\\) and the probability distribution \\(G(x)\\) but is independent of \\(n\\)._ _Under the assumption that \\(G(x)\\) belongs to the class NWU we have (4.6) but with constant \\(c\\leq 1\\)._ Proof.: Notice first, that for the expected number of losses in \\(M/GI/m/0\\) queueing systems we have \\[\\mathrm{E}L_{m,0}=\\frac{m^{m}}{m!}\\] This result follows immediately from the Erlang-Sevastyanov formula [29], so that the expected number of losses during a busy period of the \\(M/GI/m/0\\) queueingsystem is the same that this of the \\(M/M/m/0\\) queueing system. 
The expected number of losses during a busy period of the \\(M/M/m/0\\) queueing system, \\(\\mathrm{E}L_{m,0}=\\frac{m^{m}}{m!}\\), is also derived in Section 2. In the case where \\(G(x)\\) belongs to the class NBU according to Lemma 3.1 we have \\(\\mathrm{E}T_{m,n}(m-1)\\geq\\mathrm{E}T_{m,0}(m-1)\\), and therefore, the expected number of orbital busy periods in the \\(M/GI/m/n\\) queueing system (\\(n\\geq 1\\)) is not smaller than this in the \\(M/GI/m/0\\) queueing system. Therefore, repeating the proof of Theorem 4.1 leads to the inequality \\(\\mathrm{E}L_{m,n}\\geq\\mathrm{E}L_{m,0}\\) and consequently to the desired result. If \\(G(x)\\) belongs to the class NWU, then we have the opposite inequalities, and finally the corresponding result stated in the formulation of the theorem. ## 5. Batch arrivals The case of batch arrivals is completely analogous to the case of ordinary (non-batch) arrivals. In the case of a Markovian \\(M^{X}/M/m/n\\) queueing system one can also apply the level-crossing method to obtain equations analogous to (2.6)-(2.13). The same arguments as in Sections 3 and 4 in an extended form can be used for \\(M^{X}/GI/m/0\\) queueing systems. ## References * [1]Abramov, V.M. (1991). _Investigation of a Queueing System with Service Depending on Queue Length_, Donish, Dushanbe, Tajikistan (in Russian). * [2]Abramov, V.M. (1991). Asymptotic properties of lost customers for one queueing system with losses. _Kibernetika_ (Ukrainian Academy of Sciences), No. 2, 123-124 (in Russian). * [3]Abramov, V.M. (1994). On the asymptotic distribution of the maximum number of infectives in epidemic models with immigration. _J. Appl. Probab._, **31**, 606-613. * [4]Abramov, V.M. (1997). On a property of a refusals stream. _J. Appl. Prob._, **34**, 800-805. * [5]Abramov, V.M. (2001). On losses in \\(M^{X}/GI/1/n\\) queues. _J. Appl. Prob._, **38**, 1079-1080. * [6]Abramov, V.M. (2001). Some results for large closed queueing networks with and without bottleneck: Up- and down-crossings approach. _Queueing Syst._, **38**, 149-184. * [7]Abramov, V.M. (2001). Inequalities for the \\(GI/M/1/n\\) loss system. _J. Appl. Prob._, **38**, 232-234. * [8]Abramov, V.M. (2002). Asymptotic analysis of the \\(GI/M/1/n\\) loss system as \\(n\\) increases to infinity. _Ann. Operat. Res._, **112**, 35-41. * [9]Abramov, V.M. (2004). Asymptotic behavior of the number of lost messages. _SIAM J. Appl. Math._, **64**, 746-761. * [10]Abramov, V.M. (2006). Stochastic inequalities for single-server loss queueing systems. _Stoch. Anal. Appl._, **24**, 1205-1222. * [11]Abramov, V.M. (2007). Optimal control of a large dam. _J. Appl. Prob._, **44**, 249-258. * [12]Abramov, V.M. (2007). Asymptotic analysis of loss probabilities in \\(GI/M/m/n\\) queueing systems as \\(n\\) increases to infinity. _Qual. Technol. Quantit. Manag._, **4**, 379-393. * [13]Abramov, V.M. (2008). Continuity theorems for \\(M/M/1/n\\) queueing systems. _Queueing Syst._, **59**, 63-86. * [14]Abramov, V.M. (2008). The effective bandwidth problem revisited. _Stochastic Models_, **24**, 527-557. * [15]Abramov, V.M. (2010). Takacs' asymptotic theorem and its applications: A survey. _Acta Appl. Math._, **109**, 609-651. * [16]Feller, W. (1966). _An Introduction to Probability Theory and its Applications_, vol. 2. John Wiley, New York. * [17]Kalmykov, G.I. (1962). On the partial ordering of one-dimensional Markov processes. _Theor. Probab. Appl._, 7, 456-459. * [18]Karlin, S. and Taylor, H.G. (1975). 
_A First Course in Stochastic Processes._ Second edn. Academic Press, New York. * [19]Keilson, J. (1979). _Markov Chain Models -- Rarity and Exponentiality._ Springer, Berlin. * [20]Kelly, F.P. (1991). Loss networks. _Ann. Appl. Probab._, 1, 319-378. * [21]Khintchine, A.Y. (1960). _Mathematical Methods in the Theory of Queueing_. Hafner Publ. Co., New York. * [22]Melamed, B. and Yao, D.D. (1995). The ASTA property. In: _Frontiers in Queueing: Models, Methods and Problems_ (J. Dshalalow, ed.) CRC Press, 1995, pp. 195-224. * [23]Melsa, J.L. and Sage, A.P. (1973). _An Introduction to Probability and Stochastic Processes_. Prentice-Hall Inc., Englewood Cliffs. * [24]Pechinkin, A.V. (1987). A new proof of Erlang's formula for a lossy multichannel queueing system. _Soviet J. Comput. System Sci._, **25**, No.4, 165-168. Translated from: _Izv. Akad. Nauk. SSSR. Techn. Kibernet._ (1986), No.6, 172-175 (in Russian). * [25]Pekoz, E., Righter, R. and Xia, C.H. (2003). Characterizing losses during busy periods in finite buffer systems. _J. Appl. Prob._, **40**, 242-249. * [26]Righter, R. (1999). A note on losses in \\(M/GI/1/n\\) queues. _J. Appl. Prob._, **36**, 1240-1243. * [27]Ross, K.W. (1995). _Multiservice Loss Models in Broadband Telecommunication Networks_. Springer, Berlin. * [28]Ross, S.M. _Introduction to Probability Models_, Seventh edn. Academic Press, Burlington. * [29]Sevastyanov, B.A. (1957). An ergodic theorem for Markov processes and its application to telephone systems with refusals. _Theor. Probab. Appl._, **2**, 104-112. * [30]Szekely, G.J. (1986). _Paradoxes in Probability Theory and Mathematical Statistics_, Reidel. * [31]Takacs, L. (1969). On Erlang's formula. _Ann. Mathem. Statist._, **40**, 71-78. * [32]Whitt, W. (2001). _Stochastic Process Limits: An Introduction to Stochastic Process Limits and Their Applications to Queues_. Springer, New York. * [33]Whitt, W. (2004). Heavy-traffic limits for loss proportions in single-server queues. _Queueing Syst._, **46**, 507-736. * [34]Whitt, W. (2004). A diffusion approximation for the \\(G/GI/m/n\\) queue. _Operat. Res._, **52**, 922-941. * [35]Whitt, W. (2005). Heavy-traffic limits for the \\(G/H_{2}^{*}/m/n\\) queue. _Math. Operat. Res._, **30**, 1-27. * [36]Wolff, R.W. (1982). Poisson arrivals see time averages. _Operat. Res._, **30**, 223-231. * [37]Wolff, R.W. (1989). _Stochastic Modeling and the Theory of Queues_. Prentice-Hall Inc., Englewood Cliffs. * [38]Wolff, R.W. (2002). Losses per cycle in a single-server queue. _J. Appl. Prob._, **39**, 905-909.
The \\(M/GI/m/n\\) queueing system with \\(m\\) homogeneous servers and a finite number \\(n\\) of waiting spaces is studied. Let \\(\\lambda\\) be the customer arrival rate, and let \\(\\mu\\) be the reciprocal of the expected service time of a customer. Under the assumption \\(\\lambda=m\\mu\\) it is proved that the expected number of losses during a busy period takes the same value for all \\(n\\geq 1\\), while in the particular case of the Markovian system \\(M/M/m/n\\) the expected number of losses during a busy period is \\(\\frac{m^{m}}{m!}\\) for all \\(n\\geq 0\\). Under the additional assumption that the probability distribution function of a service time belongs to the class NBU or NWU, the paper establishes simple inequalities for those expected numbers of losses in \\(M/GI/m/n\\) queueing systems.
Key words and phrases: Loss systems; \\(M/GI/m/n\\) queueing system; busy period; level-crossings; stochastic order; coupling
1991 Mathematics Subject Classification: 60K25
arxiv-format/0506619v1.md
# Vetoes for Inspiral Triggers in LIGO Data -- no; # An approach toward the successful supernova explosion by physics of unstable nuclei K. Sumiyoshi, Numazu College of Technology, Ooka 3600, Numazu, Shizuoka 410-8501, Japan, S. Yamada, Science and Engineering, Waseda University, Ohkubo 3-4-1, Shinjuku, Tokyo, 169-8555, Japan, H. Suzuki, Faculty of Science and Technology, Tokyo University of Science, Yamazaki 2641, Noda, Chiba 278-8510, Japan, H. Shen, Department of Physics, Nankai University, Tianjin 300071, China and H. Toki, Research Center for Nuclear Physics (RCNP), Osaka University, Mihogaoka 10-1, Ibaraki, Osaka 567-0047, Japan ## 1 Introduction Understanding the explosion mechanism of core-collapse supernovae is a challenging problem that requires extensive research in nuclear physics and astrophysics. In order to reach the final answer, it is necessary to investigate core-collapse supernovae by implementing hydrodynamics and neutrino-transfer together with reliable nuclear equations of state and neutrino-related reactions. In this regard, recent numerical simulations of neutrino-transfer hydrodynamics [1, 2, 3, 4] have cast light on the importance of nuclear physics as well as of neutrino transfer, though they show no explosion at the moment. Meanwhile, advances in the physics of unstable nuclei now provide better nuclear-physics input for supernovae than ever before; those new nuclear data should therefore be examined in modern supernova simulations. In this paper, we focus on the influence of the new nuclear equation of state (EOS) in neutrino-transfer hydrodynamics. We follow the core collapse, bounce and shock propagation by adopting the new equation of state, which is based on the data of unstable nuclei, and the conventional one, which has been used almost exclusively in recent simulations. We compare the behavior of the shock and the thermal evolution of the supernova core by performing numerical simulations for a long period of \\(\\sim\\)1 sec after the core bounce. ## 2 A new nuclear EOS table Recently, a new complete set of EOS for supernova simulations (Shen's EOS) has become available [5, 6]. The relativistic mean field (RMF) theory with a local density approximation has been applied to the derivation of the supernova EOS table. The RMF theory has been successful in reproducing the saturation properties, masses and radii of nuclei, and proton-nucleus scattering data [7]. The effective interaction used in the RMF theory is checked against the recent experimental data of unstable nuclei in neutron-rich environments close to astrophysical conditions [8, 9]. We stress that the RMF theory [8] is based on the relativistic Bruckner-Hartree-Fock (RBHF) theory [10]. The RBHF theory, which is a microscopic and relativistic many-body theory, has been shown to reproduce successfully the saturation of nuclear matter starting from the nucleon-nucleon interactions determined by scattering experiments. This is in good contrast with non-relativistic many-body frameworks, which can account for the saturation only with the introduction of extra three-body interactions. The RMF framework with the parameter set TM1, which was determined as the best one to reproduce the properties of finite nuclei including neutron-rich ones [8], yields uniform nuclear matter with an incompressibility of 281 MeV and a symmetry energy of 36.9 MeV. The maximum neutron star mass calculated for cold neutron star matter in the RMF with TM1 is 2.2 M\\({}_{\\odot}\\)[11].
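The quoted maximum mass follows from integrating the Tolman-Oppenheimer-Volkoff (TOV) equations with the cold, beta-equilibrated EOS. As a rough illustration of that step only, the following Python sketch is our own self-contained example; it uses a simple relativistic polytrope (a standard TOV test case) as a stand-in for the tabulated TM1 EOS, so the numbers it prints are not those of the table quoted above.

```python
import numpy as np

# Relativistic polytrope P = K * rho**Gamma, eps = rho + P/(Gamma-1),
# in geometrized units G = c = M_sun = 1 (a standard test case, NOT the TM1 table).
K, Gamma = 100.0, 2.0

def eos_pressure(rho):
    return K * rho**Gamma

def eos_energy_density(rho):
    return rho + K * rho**Gamma / (Gamma - 1.0)

def rho_from_pressure(P):
    return (P / K)**(1.0 / Gamma)

def tov_rhs(r, y):
    P, m = y
    if P <= 0.0:
        return np.array([0.0, 0.0])
    eps = eos_energy_density(rho_from_pressure(P))
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return np.array([dPdr, dmdr])

def integrate_star(rho_c, dr=1.0e-4):
    """RK4 integration outward from the centre until the pressure vanishes."""
    r = dr
    P = eos_pressure(rho_c)
    m = 4.0 / 3.0 * np.pi * r**3 * eos_energy_density(rho_c)
    y = np.array([P, m])
    while y[0] > 1.0e-12 * eos_pressure(rho_c):
        k1 = tov_rhs(r, y)
        k2 = tov_rhs(r + 0.5 * dr, y + 0.5 * dr * k1)
        k3 = tov_rhs(r + 0.5 * dr, y + 0.5 * dr * k2)
        k4 = tov_rhs(r + dr, y + dr * k3)
        y = y + dr / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        r += dr
    return r, y[1]          # coordinate radius and gravitational mass (M_sun units)

# scan a few central densities to bracket the maximum mass of the sequence
for rho_c in (1.0e-3, 2.0e-3, 3.0e-3, 5.0e-3):
    R, M = integrate_star(rho_c)
    print(f"rho_c = {rho_c:.1e}  ->  M = {M:.3f} M_sun, R = {R * 1.477:.1f} km")
```

In an actual application one would replace the polytropic functions above by interpolation in the tabulated EOS; the integration loop itself is unchanged.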
The table of EOS covers the wide range of density, electron fraction and temperature necessary for supernova simulations. The relativistic EOS table has been applied to numerical simulations of the r-process in neutrino-driven winds [12] and prompt supernova explosions [13], as well as to other simulations [14, 15, 16]. For comparison, we also adopt the EOS by Lattimer and Swesty [17]. This EOS is based on the compressible liquid drop model for nuclei together with dripped nucleons. The bulk energy of nuclear matter is expressed in terms of density, proton fraction and temperature with nuclear parameters. The values of the nuclear parameters are chosen to be the ones suggested by nuclear mass formulae and other theoretical studies with the Skyrme interaction. Among the various parameters, the symmetry energy is set to 29.3 MeV, which is smaller than the value in the relativistic EOS. As for the incompressibility, we use the EOS with 180 MeV, which has often been used in recent supernova simulations. ## 3 Simulations of core-collapse supernovae We have developed a new numerical code of neutrino-transfer hydrodynamics [18, 19, 20] for supernova simulations. The code solves hydrodynamics and neutrino-transfer at once in general relativity under spherical symmetry. The Boltzmann equation for neutrinos is solved by a finite difference scheme (S\\({}_{N}\\) method) together with lagrangian hydrodynamics in an implicit manner. The implicit method enables us to take a longer time step than the explicit method; our code is therefore advantageous for following long-term behavior after the core bounce. We adopt 127 spatial zones and discretize the neutrino distribution function with 6 angle zones and 14 energy zones for \\(\\nu_{e}\\), \\(\\bar{\\nu}_{e}\\), \\(\\nu_{\\mu/\\tau}\\) and \\(\\bar{\\nu}_{\\mu/\\tau}\\), respectively. The weak interaction rates regarding neutrinos are based on the _standard_ rates by Bruenn [21]. In addition to Bruenn's standard neutrino processes, the plasmon process and the nucleon-nucleon bremsstrahlung process are included [20]. The collision term of the Boltzmann equation is explicitly calculated as a function of neutrino angles and energies. As an initial model, we adopt the profile of the iron core of a 15M\\({}_{\\odot}\\) progenitor from Woosley and Weaver [22]. We perform the numerical simulations with Shen's EOS (denoted by SH) and Lattimer-Swesty EOS (LS) for comparison. It is remarkable that the explosion does not occur in the model SH, i.e. the case with the new EOS table, and the shock wave stalls in a similar manner to the model LS. We have found, however, that there are quantitative differences in core collapse and bounce due to the differences between the nuclear EOSs. The peak central density at the bounce in the case of SH is 3.3\\(\\times\\)10\\({}^{14}\\) g/cm\\({}^{3}\\), which is lower than 4.2\\(\\times\\)10\\({}^{14}\\) g/cm\\({}^{3}\\) in the case of LS. It is to be noted that, in microscopic many-body calculations, the relativistic EOS (including Shen's EOS) is generally stiffer [10] than the non-relativistic EOS, on which Lattimer-Swesty EOS is based. A stiff EOS is not advantageous in a simple argument based on the initial shock energy; however, the symmetry energy of the EOS also plays an important role [23]. Owing to the larger symmetry energy, the free-proton fractions during collapse are smaller in SH than in LS. Smaller free-proton fractions lead to smaller electron capture rates when electron captures on nuclei are suppressed.
Because of this difference, the trapped lepton fraction at the bounce in SH is 0.35 at center, which is slightly larger than 0.34 in LS. In the current simulations, this difference of lepton fraction does not change the size of bounce core significantly. As a result, the shock in SH does not reach at a significantly larger radius than in LS (Fig. 1). The shock stalls below 200 km and starts receding in two cases. The central core becomes a proto-neutron star having a radius of several tens km and a steady accretion shock is formed. The difference of EOS becomes apparent having a more compact star for LS with the central density of 6.0\\(\\times\\)10\\({}^{14}\\) g/cm\\({}^{3}\\) (3.9\\(\\times\\)10\\({}^{14}\\) g/cm\\({}^{3}\\) for SH) at 600 ms after the bounce. The peak structure of temperature at around 10 km is formed and the peak temperature reaches at \\(\\sim\\)40 and \\(\\sim\\)50 MeV for SH and LS, respectively, due to the gradual compression of core having accretion (Fig. 2). It is noticeable that negative gradients in entropy and lepton fraction appear by this stage, suggesting the importance of convection. These differences of thermal structure may give influences on shock dynamics, supernova neutrinos and proto-neutron star cooling. Further details of the full numerical simulations will be published elsewhere [20]. ## References * [1] M. Rampp and H.-T. Janka, Astrophys. J. 539 (2000) L33. * [2] A. Mezzacappa, M. Liebendorfer, O.E.B. Messer, W.R. Hix, F.-K. Thielemann and S.W. Bruenn, Phys. Rev. Lett. 86 (2001) 1935. * [3] M. Liebendorfer, A. Mezzacappa, F.-K. Thielemann, O.E.B. Messer, W.R. Hix and S.W. Bruenn, Phys. Rev. D63 (2001) 103004. * [4] T.A. Thompson, A. Burrows and P.A. Pinto Astrophys. J. 592 (2003) 434. * [5] H. Shen, H. Toki, K. Oyamatsu and K. Sumiyoshi, Nucl. Phys. A637 (1998) 435. * [6] H. Shen, H. Toki, K. Oyamatsu and K. Sumiyoshi, Prog. Theor. Phys. 100 (1998) 1013. * [7] B.D. Serot and J.D. Walecka, in Advances in Nuclear Physics, edited by J.W. Negele and E. Vogt (Plenum Press, New York, 1986), Vol. 16, p.1. * [8] Y. Sugahara and H. Toki, Nucl. Phys. A579 (1994) 557. * [9] D. Hirata, K. Sumiyoshi, I. Tanihata, Y. Sugahara, T. Tachibana and H. Toki, Nucl. Phys. A616 (1997) 438c. * [10] R. Brockmann and R. Machleidt, Phys. Rev. C42 (1990) 1965. * [11] K. Sumiyoshi, H. Kuwabara and H. Toki, Nucl. Phys. A581 (1995) 725. * [12] K. Sumiyoshi, H. Suzuki, K. Otsuki, M. Terasawa and S. Yamada, Pub. Astron. Soc. Japan 52 (2000) 601. * [13] K. Sumiyoshi, M. Terasawa, G.J. Mathews, T. Kajino, S. Yamada, and H. Suzuki, Astrophys. J. 562 (2001) 880. * [14] K. Sumiyoshi, H. Suzuki and H. Toki, Astronom. & Astrophys. 303 (1995) 475. * [15] S. Rosswog, M.B. Davies, Mon. Not. Roy. Astron. Soc. 345 (2003) 1077. * [16] K. Sumiyoshi, H. Suzuki, S. Yamada and H. Toki, Nucl. Phys. A730 (2004) 227. * [17] J.M. Lattimer and F.D. Swesty, Nucl. Phys. A535 (1991) 331. * [18] S. Yamada, Astrophys. J. 475 (1997) 720. * [19] S. Yamada, H.-Th. Janka and H. Suzuki, Astronom. & Astrophys. 344 (1999) 533. * [20] K. Sumiyoshi, H. Suzuki, S. Yamada, H. Shen and H. Toki, in preparation. * [21] S.W. Bruenn, Astrophys. J. Suppl. 62 (1986) 331. * [22] S.E. Woosley and T. Weaver, Astrophys. J. Suppl. 101 (1995) 181. * [23] S.W. Bruenn, Astrophys. J. 340 (1989) 955.
We study the explosion mechanism of collapse-driven supernovae by numerical simulations with a new nuclear EOS based on unstable nuclei. We report new results of simulations of general relativistic hydrodynamics together with Boltzmann neutrino transport in spherical symmetry. We adopt the new data set of the relativistic EOS and the conventional set of EOS (Lattimer-Swesty EOS) to examine the influence on the dynamics of core collapse, bounce and shock propagation. We follow the behavior of the stalled shock for more than 500 ms after the bounce and compare the evolutions of the supernova core.
arxiv-format/0506620v1.md
# Postbounce evolution of core-collapse supernovae: Long-term effects of equation of state K. Sumiyoshi Numazu College of Technology, Ooka 3600, Numazu, Shizuoka 410-8501, Japan [email protected] S. Yamada Science and Engineering, Waseda University, Okubo, 3-4-1, Shinjuku, Tokyo 169-8555, Japan & Advanced Research Institute for Science and Engineering, Waseda University, Okubo, 3-4-1, Shinjuku, Tokyo 169-8555, Japan H. Suzuki Faculty of Science and Technology, Tokyo University of Science, Yamazaki 2641, Noda, Chiba 278-8510, Japan H. Shen Department of Physics, Nankai University, Tianjing 300071, China S. Chiba Advanced Science Research Center, Japan Atomic Energy Research Institute, Tokai, Ibaraki 319-1195, Japan H. Toki Research Center for Nuclear Physics (RCNP), Osaka University, Mihogaoka 10-1, Ibaraki, Osaka 567-0047, Japan ## 1 Introduction Understanding the explosion mechanism of core-collapse supernovae is a grand challenge that requires numerical simulations of \\(\\nu\\)-radiation-hydrodynamics with the best available knowledge of particle and nuclear physics. Three-dimensional simulations of \\(\\nu\\)-radiation-hydrodynamics, which are currently formidable, and better determinations of the nuclear equation of state of dense matter and of the neutrino-related reaction rates are mandatory. One has to advance step by step, developing numerical methods and examining microphysics and its influence at various stages of supernovae. Even with the extensive studies of recent years using currently available computing resources, the numerical results have not clarified the explosion mechanism. On one hand, recent multi-dimensional supernova simulations with approximate neutrino-transport schemes have revealed the importance of asymmetry such as rotation, convection, magnetic fields and/or hydrodynamical instability (Blondin et al., 2003; Buras et al., 2003; Kotake et al., 2003; Fryer & Warren, 2004; Kotake et al., 2004; Walder et al., 2004). On the other hand, recent spherically symmetric supernova simulations have removed the uncertainty of neutrino transport and clarified the role of neutrinos in core collapse and shock propagation (Rampp & Janka, 2000; Liebendorfer et al., 2001; Mezzacappa et al., 2001; Rampp & Janka, 2002; Thompson et al., 2003). In this study, we focus on spherically symmetric simulations, which are advantageous for examining the role of microphysics without the ambiguity of neutrino transport. Almost all authors have reported that neither a prompt explosion nor a delayed explosion occurs under spherical symmetry. This conclusion is commonly reached by simulations with Newtonian (Rampp, 2000; Rampp & Janka, 2000; Mezzacappa et al., 2001; Thompson et al., 2003), approximately relativistic (Rampp & Janka, 2002) and fully general relativistic (Liebendorfer et al., 2001) gravity, together with standard microphysics, i.e. the equation of state (EOS) by Lattimer & Swesty (1991) and the weak reaction rates by Bruenn (1985). The influence of nuclear physics inputs has been further assessed by employing extended neutrino reactions (Thompson et al., 2003, see also section 3.2) and more up-to-date electron capture rates on nuclei (Hix et al., 2003). The dependence on the progenitor models (Liebendorfer et al., 2002; Thompson et al., 2003; Liebendorfer et al., 2004; Janka et al., 2004) and on the sets of physical EOS (Janka et al., 2004) has been studied very recently.
These simulations so far have shown that the collapse of iron cores leads to the stalled shock after bounce without successful explosion. In the current study, we explore the influence of EOS in the time period that has not been studied very well in the previous studies. Most of recent numerical simulations have been performed until about 300 milliseconds after bounce. This is due to the severe limitation on time steps by the Courant condition for explicit time-differencing schemes. A typical time step in the explicit method is about \\(10^{-6}\\) seconds after the formation of dense compact objects. However, in the implicit method as we employ in this study, the time step is not restricted by the Courant condition. This is advantageous for a long-term evolution. In the studies by Liebendorfer et al. (2002, 2004), who also adopted the implicit method, the postbounce evolution has been followed up to about 1 second for a small number of models with Lattimer and Swesty EOS. Historically, the idea of the delayed explosion was proposed by Wilson's simulations that followed more than several hundred milliseconds after bounce. In some cases, the revival of shock wave occurred even beyond 0.5 seconds (for example, Bethe and Wilson, 1985). It is still interesting to explore this late phase in the light of possible influence of microphysics. The progress of the supernova EOS put an additional motivation to the study of this late postbounce phase. Only recently the sets of physical EOS, which cover a wide range of density, composition and temperature in a usable and complete form, have become available for simulations. A table of EOS was made for the first time by Hillebrandt & Wolff (1985) within the Skyrme Hartree-Fock approach and applied to some simulations (Hillebrandt et al. 1984; Suzuki 1990, 1993, 1994; Sumiyoshi et al. 1995c; Janka et al. 2004). Another set of EOS has been provided as a numerical routine by Lattimer & Swesty (1991) utilizing the compressible liquid-drop model. This EOS has been used these years as a standard. Recently, a new complete set of EOS for supernova simulations has become available (Shen et al. 1998a,b). The relativistic mean field (RMF) theory with a local density approximation was applied to the derivation of the table of supernova EOS. This EOS is different in two important aspects from previous EOSs. One thing is that the Shen's EOS is based on the relativistic nuclear many-body framework whereas the previous ones are based on the non-relativistic frameworks. The relativistic treatment is known to affect the behavior of EOS at high densities (i.e. stiffness) (Brockmann & Machleidt 1990) and the size of nuclear symmetry energy (Sumiyoshi et al. 1995b). The other thing is that the Shen's EOS is based on the experimental data of unstable nuclei, which have become available recently. The data of neutron-rich nuclei, which are close to the astrophysical environment, were used to constrain the nuclear interaction. The resulting properties of isovector interaction are generally different from the non-relativistic counterpart and the size of symmetry energy is different. The significant differences in stiffness and compositions during collapse and bounce have been shown between Shen's EOS and Lattimer-Swesty EOS by hydrodynamical calculations (Sumiyoshi et al. 2004). Therefore, it would be exciting to explore the supernova dynamics with the new set of EOS. Such an attempt has been made recently by Janka et al. 
(2004) and no explosion has been reported up to 300 milliseconds after bounce. The aim of the current study is, therefore, to compare the postbounce evolutions beyond 300 milliseconds for the first time. We perform core-collapse simulations adopting the two sets of EOS, that is, Shen's EOS (SH-EOS) and the Lattimer-Swesty EOS (LS-EOS). We follow the evolution of the supernova core for a long period and explore the fate of the stalled shock up to 1 second after bounce. In this time period, one can also see the birth of the protoneutron star as a continuous evolution from the collapsing phase, together with the long-term evolution of the neutrino emission. Although the supernova core does not display a successful explosion, as we will see, the current simulations may reveal some aspects of the central core leading to the formation of a protoneutron star or black hole. This information is also helpful to envisage the properties of supernova neutrinos in the first second, since the simulations of protoneutron star cooling done so far usually start from several hundred milliseconds after bounce for some given profiles. As a whole, we aim to clarify how the EOS influences the dynamics of the shock wave, the evolution of the central core and the supernova neutrinos.

## 2 Numerical Methods

A new numerical code of general relativistic \\(\\nu\\)-radiation-hydrodynamics under spherical symmetry has been developed (Yamada, 1997; Yamada et al., 1999) for supernova simulations. The code solves the equations of hydrodynamics and neutrino transfer simultaneously in an implicit way, which enables us to take substantially longer time steps than explicit methods. This is advantageous for the study of long-term behavior after core bounce. The implicit method has also been adopted by Liebendorfer et al. (2004) in their general relativistic \\(\\nu\\)-radiation-hydrodynamics code. They have taken, however, an operator splitting method so that hydrodynamics and neutrino transfer could be treated separately.

### Hydrodynamics

The equations of lagrangian hydrodynamics in general relativity are solved by an implicit finite-differencing scheme. The numerical method is based on an approximate linearized Riemann solver (Roe-type scheme) that captures shock waves without introducing artificial viscosities. Assuming spherical symmetry, the metric of Misner & Sharp (1964) is adopted to formulate the hydrodynamics and neutrino-transport equations. The set of equations for the conservation of baryon number, lepton number and energy-momentum is solved together with the metric equations and the entropy equation. Details of the numerical method of hydrodynamics can be found in Yamada (1997), where standard numerical tests of the hydrodynamics code have also been reported.

### Neutrino-transport

The Boltzmann equation for neutrinos in general relativity is solved by a finite difference scheme (S\\({}_{N}\\) method) implicitly, together with the above-mentioned lagrangian hydrodynamics. The neutrino distribution function, \\(f_{\\nu}(t,m,\\mu,\\varepsilon_{\\nu})\\), as a function of time \\(t\\), lagrangian mass coordinate \\(m\\), neutrino propagation angle \\(\\mu\\) and neutrino energy \\(\\varepsilon_{\\nu}\\), is evolved. Finite differencing of the Boltzmann equation is mostly based on the scheme by Mezzacappa & Bruenn (1993a). However, the update of the time step is done simultaneously with hydrodynamics.
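To make the size of the discretized transport problem concrete, the following sketch (in Python, not the authors' code) lays out the neutrino distribution function on a grid with the sizes adopted for the current simulations (quoted below: 255 mass zones, 6 angles, 14 energy bins for each of the four species) and evaluates its lowest moment, the neutrino number density, by simple quadrature. The angle and energy grids and the trapezoidal weights are illustrative assumptions.

```python
import numpy as np

# Illustrative grid sizes, matching the dimensions quoted below:
# 255 lagrangian mass zones, 6 propagation angles, 14 energy bins,
# and 4 species (nu_e, anti-nu_e, nu_mu/tau, anti-nu_mu/tau).
N_MASS, N_ANG, N_ENE, N_SPEC = 255, 6, 14, 4

# Assumed discretizations of mu = cos(theta) and neutrino energy (MeV);
# the actual zoning used in the simulations is not specified here.
mu = np.linspace(-1.0, 1.0, N_ANG)
eps = np.logspace(0.0, 2.5, N_ENE)        # ~1-300 MeV, log-spaced

# One time slice of the distribution function f_nu(m, mu, eps, species).
f = np.zeros((N_MASS, N_ANG, N_ENE, N_SPEC))

def number_density(f, mu, eps):
    """Zeroth moment, n_nu = (2*pi/(hc)^3) Int f eps^2 d(eps) d(mu), in cm^-3."""
    hc = 1.23984e-10                              # h*c in MeV cm
    integrand = f * eps[None, None, :, None] ** 2
    over_mu = np.trapz(integrand, mu, axis=1)     # angle integral
    over_eps = np.trapz(over_mu, eps, axis=1)     # energy integral
    return 2.0 * np.pi / hc ** 3 * over_eps       # shape (N_MASS, N_SPEC)

# Example: a thermal occupation (T = 5 MeV, zero chemical potential) for
# electron neutrinos in the innermost zone; the density is of order 1e33 cm^-3.
f[0, :, :, 0] = 1.0 / (np.exp(eps[None, :] / 5.0) + 1.0)
print(f"nu_e number density in zone 0: {number_density(f, mu, eps)[0, 0]:.2e} cm^-3")
```

The collision terms discussed next couple every angle and energy bin within a spatial zone, which is why each spatial block of the Jacobian matrix described below is dense.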
The reactions of neutrinos are explicitly calculated in the collision terms of the Boltzmann equation, with the incident/outgoing neutrino angles and energies taken into account. Detailed comparisons with the Monte Carlo method have been made to validate the Boltzmann solver and to examine the angular resolution (Yamada et al. 1999).

### \\(\\nu\\)-radiation-hydrodynamics

The whole set of finite-differenced equations described above is solved by the Newton-Raphson iterative method. The Jacobian matrix forms a block-tridiagonal matrix, in which dense block matrices arise from the collision terms of the transport equation. Since the inversion of this large matrix is the most costly part of the computation, we utilize a parallel algorithm of block cyclic reduction for the matrix solver (Sumiyoshi & Ebisuzaki 1998). In the current simulations, we adopt 255 non-uniform spatial zones in the lagrangian mass coordinate. We discretize the neutrino distribution function with 6 angle zones and 14 energy zones for \\(\\nu_{e}\\), \\(\\bar{\\nu}_{e}\\), \\(\\nu_{\\mu/\\tau}\\) and \\(\\bar{\\nu}_{\\mu/\\tau}\\), respectively.

### Rezoning

The description of the long-term evolution of accretion in a lagrangian coordinate is a numerically difficult problem. In order to keep sufficient resolution during the accretion phase, rezoning of the accreting material is done long before it accretes onto the surface of the protoneutron star and becomes opaque to neutrinos. At the same time, dezoning of the hydrostatic inner part of the protoneutron star is done to avoid an increase in the number of grid points. In test simulations without rezoning, the neutrino luminosities oscillate strongly in time due to the intermittent accretion of coarse grid points, which sometimes leads to erroneous dynamics (even explosions). Therefore, we have checked that the resolution of the grid points is sufficient by refining the initial grid points and rezoning during the simulations. Even then, there are still slight oscillations in the luminosities and average energies of neutrinos in the last stage of the calculations. There are also occasional transient kinks when the grid size in the mass coordinate changes during accretion, as we will see in section 4.5. These slight modulations of the neutrino quantities, however, do not affect the overall evolution of the protoneutron star with accretion once we have sufficient resolution.

## 3 Model Descriptions

As an initial model, we adopt the profile of the iron core of a 15M\\({}_{\\odot}\\) progenitor from Woosley & Weaver (1995). This progenitor has been widely used in supernova simulations. The computational grid points in the mass coordinate are non-uniformly placed to cover the central core, the shock propagation region and the accreting material with sufficient resolution.

### Equation of state

The new complete set of EOS for supernova simulations (SH-EOS) (Shen et al. 1998a,b) is derived from the relativistic mean field (RMF) theory with a local density approximation. The RMF theory has been a successful framework to reproduce the saturation properties, masses and radii of nuclei, and proton-nucleus scattering data (Serot & Walecka 1986). We stress that the RMF theory (Sugahara & Toki 1994) is based on the relativistic Brueckner-Hartree-Fock (RBHF) theory (Brockmann & Machleidt 1990), which is a microscopic and relativistic many-body theory. The RBHF theory has been shown to be successful in reproducing the saturation of nuclear matter starting from the nucleon-nucleon interactions determined by scattering experiments.
This is in good contrast with non-relativistic many-body frameworks which can account for the saturation only with the introduction of extra three-body interactions. The effective interactions in the RMF theory have been determined by least squares fittings to reproduce the experimental data of masses and radii of stable and unstable nuclei (Sugahara & Toki 1994). The determined parameters of interaction, TM1, have been applied to many studies of nuclear structures and experimental analyses (Sugahara et al. 1996; Hirata et al. 1997). One of stringent tests on the isovector interaction is passed in excellent agreement of the theoretical prediction with the experimental data on neutron and proton distributions in isotopes including neutron-rich ones with neutron-skins (Suzuki et al. 1995; Ozawa et al. 2001). The RMF theory with the parameter set TM1 provides uniform nuclear matter with the incompressibility of 281 MeV and the symmetry energy of 36.9 MeV. The maximum mass of neutron star is 2.2 M\\({}_{\\odot}\\) for the cold neutron star matter in the RMF with TM1 (Sumiyoshi et al. 1995a). The table of EOS covers a wide range of density, electron fraction and temperature for supernova simulations, and has been applied to numerical simulations of r-process in neutrino-driven winds (Sumiyoshi et al. 2000), prompt supernova explosions (Sumiyoshi et al. 2001), and other simulations (Sumiyoshi et al. 1995c; Rosswog & Davies 2003; Sumiyoshi et al. 2004; Janka et al. 2004). For comparison, we also adopt the EOS by Lattimer & Swesty (1991). The LS-EOS is based on the compressible liquid drop model for nuclei together with dripped nucleons. The bulk energy of nuclear matter is expressed in terms of density, proton fraction and temperature. The values of nuclear parameters are chosen according to nuclear mass formulae and other theoretical studies with the Skyrme interaction. Among various parameters, the symmetry energy is set to be 29.3 MeV, which is smaller than the value in the relativistic EOS. As for the incompressibility, we use 180 MeV, which has been used frequently for recent supernova simulations. In this case, the maximum mass of neutron star is estimated to be 1.8 M\\({}_{\\odot}\\). This choice enables us to make comparisons with previous works, though 180 MeV is smaller than the standard value as will be discussed below. The sensitivity to the incompressibility of LS-EOS has been studied by Thompson et al. (2003) using the choices of 180, 220 and 375 MeV. The numerical results of core-collapse and bounce with different incompressibilities turn out to be similar up to 200 milliseconds after bounce. The differences in luminosities and average energies of emergent neutrinos are within 10 % and do not affect significantly the post-bounce dynamics on the time scale of 100 ms. The influence of different incompressibilities in LS-EOS on the time scale of 1 sec remains to be seen as an extension of the current study. For densities below \\(10^{7}\\) g/cm\\({}^{3}\\), the subroutine of Lattimer-Swesty EOS runs into numerical troubles, therefore, we adopt Shen's EOS in this density regime instead. This is mainly for numerical convenience. In principle, it is preferable to adopt the EOS, which contains electrons and positrons at arbitrary degeneracy and relativity, photons, nucleons and an ensemble of nuclei as non-relativistic ideal gases (see for example, Timmes & Arnett (1999); Thompson et al. (2003)). 
One also has to take into account non-NSE abundances determined from the preceding quasi-static evolutions. Note that we are chiefly concerned with the effect of EOS at high densities, and this pragmatic treatment does not have any significant influence on the shock dynamics. We comment here on the nuclear parameters of EOS and its consequences for the astrophysical applications considered here. The value of incompressibility of nuclear matter has been considered to be within 200-300 MeV from experimental data and theoretical analyses. The value recently obtained within the non-relativistic approaches (Colo & Van Giai 2004) is 220-240 MeV. The corresponding value extracted within the relativistic approaches is known to be higher than non-relativistic counterpart and is 250-270 MeV (Vretenar et al. 2003; Colo & Van Giai 2004). It is also known that the determination of incompressibility is closely related with the size of the symmetry energy and its density dependence. The incompressibility of EOS in the RMF with TM1 is slightly higher than those standard values and the SH-EOS is relatively stiff. The neutron stars with SH-EOS are, therefore, less compact with lower central densities and have higher maximum masses than those obtained by LS-EOS with the incompressibility of 180 MeV. The adiabatic index of SH-EOS at the bounce of supernova core is larger than that of LS-EOS (Sumiyoshi et al. 2004). The value of symmetry energy at the nuclear matter density is known to be around 30 MeV by nuclear mass formulae (Moller et al. 1995). The recent derivation of the symmetry energy in a relativistic approach gives higher values of 32-36 MeV together with the above mentioned higher incompressibility (Dieperink et al. 2003; Vretenar et al. 2003). The symmetry energy in the RMF with TM1 is still a bit larger compared with the standard values. We note that the symmetry energy in the RMF is determined by the fitting of masses and radii of various nuclei including neutron-rich ones. The large symmetry energy in SH-EOS leads to large proton fractions in cold neutron stars, which may lead to a possible rapid cooling by the direct URCA process, as well as the stiffness of neutron matter (Sumiyoshi et al. 1995a). The difference between neutron and proton chemical potentials is large and leads to different compositions of free protons and nuclei (Sumiyoshi et al. 2004). The consequences of these differences in incompressibility and symmetry energy will be discussed in the comparison of numerical simulations in section 4. ### Weak reaction rates The weak interaction rates regarding neutrinos are evaluated by following the standard formulation by Bruenn (1985). For the collision term in the Boltzmann equation, the scattering kernels are explicitly calculated in terms of angles and energies of incoming and outgoing neutrinos (Mezzacappa & Bruenn 1993b). In addition to the Bruenn's standard neutrino processes, the plasmon process (Braaten and Segel 1993) and the nucleon-nucleon bremsstrahlung process (Friman & Maxwell 1979; Maxwell 1987) are included in the collision term. The latter reaction has been shown to be an important process to determine the supernova neutrinos from the protoneutron star cooling (Suzuki 1993; Burrows et al. 2000) as a source of \\(\ u_{\\mu/\\tau}\\). The conventional _standard_ weak reaction rates are used for the current simulations to single out the effect of EOS and to compare with previous simulations. Recent progress of neutrino opacities in nuclear matter (Burrows et al. 
2005) and electron capture rates on nuclei (Langanke & Martinez-Pinedo 2003) will be examined along with the updates of EOS in future studies. ## 4 Comparison of results We present the results of two numerical simulations performed with Shen's EOS and Lattimer-Swesty EOS. They are denoted by SH and LS, respectively. ### Shock propagation Fig. 1 shows the radial trajectories of mass elements as a function of time after bounce in model SH. The trajectories are plotted for each \\(0.02{\\rm M}_{\\odot}\\) in mass coordinate up to and for each 0.01M\\({}_{\\odot}\\) for the rest of outer part. Thick lines denote the trajectories for 0.5M\\({}_{\\odot}\\), 1.0M\\({}_{\\odot}\\) and 1.5M\\({}_{\\odot}\\). One can see the shock wave is launched up to 150 kilometers and stalled there within 100 milliseconds. The shock wave recedes down to below 100 kilometers afterwards and the revival of shock wave or any sign of it is not found even after 300 milliseconds. Instead, the stationary accretion shock is formed at several tens of kilometers. As the central core gradually contracts, a protoneutron star is born at center. The material, which was originally located in the outer core, accretes onto the surface of the protoneutron star. The accretion rate is about 0.2M\\({}_{\\odot}\\)/s on average and decreases from 0.25M\\({}_{\\odot}\\)/s to 0.15M\\({}_{\\odot}\\)/s gradually. This behavior is similar in model LS. At 1 sec after bounce, the baryon mass of protoneutron star is 1.60M\\({}_{\\odot}\\) for both cases. The trajectories of shock wave in models SH and LS are compared in Fig. 2. The propagations of shock wave in two models are similar in the first 200 milliseconds (left panel). We note that slight fluctuations in the curves are due to numerical artifact in the procedure to determine the shock position. Note that we have rather low resolutions in the central part in order to have higher resolutions in the accreting material. Except for the discrepancy due to the different numerical methods (e.g. approximate general relativity, eulerian etc.), zoning and resolutions, the current simulations up to 200 milliseconds are consistent with the results (middle panel of Figure 3) by Janka et al. (2004) having similar maximum radii and timing of recession. The difference shows up from 200 milliseconds after bounce and becomes more apparent in the later phase (right panel). After 600 milliseconds, the shock position in model LS is less than 20 kilometers and it is clearly different from that in model SH. This difference originates from the faster contraction of the protoneutron star in model LS. We discuss the evolution of protoneutron star later in section 4.4. ### Collapse phase The initial propagation of shock wave is largely controlled by the properties of the inner core during the gravitational collapse. We have found noticeable differences in the behavior of core-collapse in two models. However, they did not change the initial shock energy drastically, which then leads to the similarity of the early phase of shock propagation we have just seen above. First of all, it is remarkable that the compositions of dense matter during the collapse are different. In Fig. 3, the mass fraction is shown as a function of mass coordinate when the central density reaches \\(10^{11}\\) g/cm\\({}^{3}\\). The mass fraction of free proton in model SH is smaller than that in model LS by a factor of \\(\\sim\\)5. 
This is caused by the larger symmetry energy in SH-EOS, where the proton chemical potential is lower than the neutron chemical potential, as discussed in Sumiyoshi et al. (2004). The smaller free proton fraction reduces the electron captures on free protons. Note that the electron capture on nuclei is suppressed in the current simulations due to the blocking above N=40 in Bruenn's prescription. This is in accordance with the numerical results by Bruenn (1989) and Swesty et al. (1994), who studied the influence of the free proton fraction and the symmetry energy. However, there is also a negative feedback in the deleptonization during collapse (Liebendorfer et al., 2002). Smaller electron capture rates keep the electron fraction high, which then leads to an increase of the free proton fraction and consequently to electron captures after all. The resultant electron fraction turns out not to be significantly different, as we will see later. It is also noticeable that the mass fraction of alpha particles differs substantially and the abundance of nuclei is slightly reduced in model SH. This difference of alpha abundances in the two models persists during the collapse and even in the post-bounce phase. The nuclear species appearing in the central core during collapse are shown in the nuclear chart (Fig. 4). The nuclei in model SH are always less neutron-rich than those in model LS by more than several neutrons. This is also due to the effect of the symmetry energy, which gives nuclei closer to the stability line in model SH. The mass number reaches up to \\(\\sim\\)80 and \\(\\sim\\)100 at the central density of 10\\({}^{11}\\) g/cm\\({}^{3}\\) (solid circle) and 10\\({}^{12}\\) g/cm\\({}^{3}\\) (open circle), respectively. In the current simulations, the electron capture on nuclei is suppressed beyond N=40 due to the simple prescription employed here, and the difference in species does not lead to any difference. However, results may turn out different when more realistic electron capture rates are adopted (Hix et al., 2003). It would be interesting to see whether the difference found in the two EOSs leads to differences in the central cores when recent electron capture rates on nuclei (Langanke & Martinez-Pinedo, 2003) are used. Further studies are necessary to discuss the abundances of nuclei and the influence of more updated electron capture rates for a mixture of nuclear species beyond the single-species approximation of the current EOSs. The profiles of the lepton fractions at bounce are shown in Fig. 5. The central electron fraction in model SH is Y\\({}_{e}\\)=0.31, which is slightly higher than Y\\({}_{e}\\)=0.29 in model LS. The central lepton fractions including neutrinos for models SH and LS are rather close to each other, having Y\\({}_{L}\\)=0.36 and 0.35, respectively. The difference in lepton fraction results in a different size of the inner core. The larger lepton fraction in model SH leads to a larger inner core of 0.61M\\({}_{\\odot}\\), whereas it is 0.55M\\({}_{\\odot}\\) in model LS. Here, the inner core is defined as the region inside the position of the velocity discontinuity, which marks the beginning of the shock wave. Fig. 6 shows the velocity profile at bounce. We define the bounce (t\\({}_{pb}\\)=0 ms) as the time when the central density reaches its maximum, which is similar to other definitions such as using the peak entropy height. The central density reaches 3.4\\(\\times\\)10\\({}^{14}\\) g/cm\\({}^{3}\\) and 4.4\\(\\times\\)10\\({}^{14}\\) g/cm\\({}^{3}\\) in models SH and LS, respectively.
The difference of stiffness in two EOSs leads to a lower peak central density in model SH than that in model LS. Because of this difference, the radial size of inner core at bounce is \\(\\sim\\)1 km larger for model SH than that for model LS. The initial shock energy, which is roughly estimated by the gravitational binding energy of inner core at bounce, turns out to be not drastically different because of the increases both in mass and radial size of the inner core in model SH. Clearer difference appears at later stages where the protoneutron star is formed having a central density much higher than the nuclear matter density. This is one of reasons why we are interested in the late phase of supernova core, where the difference of EOS appears more clearly and its influence on the supernova dynamics could be seen. We remark here that the numerical results with LS-EOS at bounce are in good agreement with previous simulations such as the reference models by Liebendorfer et al. (2005). For example, the profiles of model LS shown in Figs. 5 and 6 accord with the profiles of their model G15. The behavior after bounce is also qualitatively consistent with the reference models up to 250 milliseconds (see also section 4.4). ### Postbounce phase The postbounce phase is interesting in many aspects, especially in clarifying the role of EOS in the neutrino heating mechanism and the protoneutron star formation. As we have seen in section 4.1, the stall of shock wave occurs in a similar manner in two models and the difference appears in later stage. We discuss here the similarities and the differences in terms of the effect of EOS. The evolution of shock wave after it stalls around 100 kilometers is controlled mainly by the neutrino heating behind the shock wave. The neutrinos emitted from the neutrinosphere in the nascent protoneutron star contribute to the heating of material just behind the shock wave through absorption on nucleons. Whether the shock wave revives or not depends on the total amount of heating, hence more specifically, on the neutrino spectrum, luminosity, amount of targets (nucleons), mass of heating region and duration time. The heating rates of material in supernova core in two models at t\\({}_{pb}\\)=150 ms are shown in Fig. 7 as a function of radius. The heating rate in model SH is smaller than that in model LS around 100 kilometers. The cooling rate (negative value in the heating rate) in model SH is also smaller than that in model LS. The smaller heating (cooling) rate in model SH is caused by lower neutrino luminosities and smaller free-proton fractions. Figs. 8 and 9 show the radial profiles of neutrino luminosities and mass fractions of dense matter around the heating region. The luminosities in model SH are lower than those in model LS for all neutrino flavors. The mass fraction of free protons, which are the primary target of neutrino heating, is slightly smaller in model SH around the heating region. These two combinations lower the heating rate in model SH. It is also interesting that other compositions (alpha and nuclei) appear different in this region. The lower luminosities in model SH are related with the lower cooling rate. The temperature of protoneutron star in model SH is generally lower than that in model LS as shown in Fig. 10. The peak temperature, which is produced by the shock heating and the contraction of core, in model SH is lower than that in model LS. 
This difference exists also in the surface region of the protoneutron star, where neutrinos are emitted via cooling processes. The temperature at the neutrinosphere in model SH is lower and, as a result, the cooling rate is smaller. The difference of temperature becomes more evident as the protoneutron star evolves as we will see in the next section. ### Protoneutron star The thermal evolution of protoneutron star formed after bounce is shown in Fig. 11 for two models. Snapshots of temperature profile at t\\({}_{pb}\\)=20, 50, 100, 200, 300, 400, 500, 600, 700, 800, 900 ms and 1 s are shown. The temperature increase is slower in model SH than in model LS. The peak temperature at t\\({}_{pb}\\)=1 s is 39 and 53 MeV in models SH and LS, respectively. The temperature difference arises mainly from the stiffness of EOS. The protoneutron star contracts more in model LS and has a higher central density than in model SH. At t\\({}_{pb}\\)=1 s, the central density in model SH is 4.1 \\(\\times\\) 10\\({}^{14}\\) g/cm\\({}^{3}\\) whereas that in model LS is 7.0 \\(\\times\\) 10\\({}^{14}\\) g/cm\\({}^{3}\\), which means the rapid contraction in model LS. Since the profile of entropy per baryon is similar to each other, lower density results in lower temperature. The rapid contraction also gives rise to the rapid recession of shock wave down to 20 kilometers in model LS. We note here on the effective mass. In SH-EOS, the effective mass of nucleons is obtained from the attraction by scalar mesons in the nuclear many-body framework. The effective mass at center is reduced to be 440 MeV at t\\({}_{pb}\\)=1 s. The nucleon mass is fixed to be the free nucleon mass in LS-EOS, on the other hand. The temperature difference within 1 second as we have found may affect the following evolution of protoneutron star up to several tens of seconds, during which the main part of supernova neutrinos is emitted. Although our models do not give a successful explosion, the obtained profiles will still give a good approximation to the initial setup for the subsequent protonneutron star cooling. Since we have followed the continuous evolution of the central core from the onset of gravitational collapse, the calculated protoneutron star contains the history of matter and neutrinos during the prior stages. This is much better than the situation so far for calculations of protoneutron star cooling, where the profiles from other supernova simulations were adopted for the initial model. It would be interesting to study the cooling of protoneutron star for the two models obtained here. Even if such evolutions of protoneutron star are not associated with a successful supernova explosion, it will be still interesting for the collapsar scenario of GRB and/or black hole formation. Exploratory studies on various scenarios for the fate of compact objects with continuous accretion of matter are fascinating and currently under way, but it is beyond the scope of the present study. In Fig. 12, we display the profiles of entropy and lepton fraction in model LS at t\\({}_{pb}\\)=100, 250, 500 ms and 1 s. The distributions of entropy as well as other quantities (not shown here) at t\\({}_{pb}\\)=100 and 250 ms are consistent with the reference model G15 (Liebendorfer et al. 2005). We have found that the negative gradients in the profiles of entropy and lepton fraction commonly appear in late phase for both models. 
As for the entropy per baryon, the negative gradient appears after t\\({}_{pb}\\)=100 ms in the region between \\(\\sim\\)0.7M\\({}_{\\odot}\\) and the shock. The negative gradient of the lepton fraction appears first in the outer core behind the shock and then prevails toward the center until t\\({}_{pb}\\)=1 s. Since these regions are unstable against convection according to the Ledoux criterion, the whole region of the protoneutron star may be convective after core bounce. It has been pointed out that the sign of the derivative of thermodynamical quantities (\\(\\partial\\rho/\\partial Y_{L}|_{P,S}\\)) changes in the neutron-rich environment at high densities beyond 10\\({}^{14}\\) g/cm\\({}^{3}\\) (Sumiyoshi et al. 2004), and the central core may be stabilized in model SH. Whether the convection occurs efficiently enough to help the neutrino-driven mechanism for explosion remains to be studied in multi-dimensional \\(\\nu\\)-radiation-hydrodynamics simulations with SH-EOS.

### Supernova neutrinos

The different temperature distribution could affect the neutrino luminosities and spectra. We discuss here the properties of the neutrinos emitted during the evolution of the supernova core up to 1 second. As we have already discussed in section 4.3, the luminosity of neutrinos in model SH is lower than that in model LS after bounce. This difference actually appears after t\\({}_{pb}\\)=100 ms, as shown in Fig. 13. The initial rise and peak of the luminosities in the two models are quite similar to each other. The peak heights of the neutronization burst of electron-type neutrinos are also similar. The difference, however, gradually becomes larger and apparent after t\\({}_{pb}\\)=200 ms. We remark here that the kinks around t\\({}_{pb}\\)=500 ms are a numerical artifact of the rezoning of the mass coordinate, as discussed in section 2.4. Except for these kinks, the luminosities increase in time. For the last 150 milliseconds, the luminosities show numerical oscillations; therefore, we have plotted smoothed curves by taking average values. Note that we are interested here in the relative differences of the supernova neutrinos between the two models. The difference in the average energies of neutrinos appears in a similar manner to that in the luminosities, as seen in Fig. 14. The average energy presented here is the _rms_ average energy, \\(E_{\\nu}=\\sqrt{\\langle\\varepsilon_{\\nu}^{2}\\rangle}\\), at the outermost grid point (\\(\\sim\\)7000 km). The average energies up to t\\({}_{pb}\\)=100 ms are almost identical in the two models and become different from each other afterwards. The average energies in model SH turn out to be lower than those in model LS. Kinks around t\\({}_{pb}\\)=500 ms appear for the same reason mentioned above, and the curves are smoothed around the kinks and t\\({}_{pb}\\sim\\)1 s to avoid artificial transient behaviors due to the rezoning. At t\\({}_{pb}\\)=1 s, the gap amounts to more than a few MeV and tends to increase in time. The lower luminosity and average energy in model SH are due to the slow contraction of the protoneutron star and, as a result, the slow rise of the temperature, as seen in Fig. 11. Again, it would be interesting to follow the subsequent cooling phase of the protoneutron star up to \\(\\sim\\)20 seconds to obtain the main part of the supernova neutrinos.

## 5 Summary

We have performed numerical simulations of core-collapse supernovae by solving general relativistic \\(\\nu\\)-radiation-hydrodynamics in spherical symmetry.
We have adopted the relativistic EOS table, which is based on recent advancements in nuclear many-body theory as well as recent experimental data on unstable nuclei, in addition to the conventional Lattimer-Swesty EOS. We have performed long-term simulations from the onset of gravitational collapse to the late phase far beyond 300 milliseconds after bounce, a phase that has not been well covered in previous studies due to numerical restrictions. This is meant to explore the chance of shock revival and the influence of the new EOS in this stage, and is the first such attempt. We have found that a successful explosion of the supernova core occurs in neither a prompt nor a delayed manner, even though we have followed the postbounce evolution up to 1 second with the new EOS table. The numerical simulation using the Lattimer-Swesty EOS shows no explosion either, which is in accord with other recent studies and in contrast to the finding by Wilson. Note that Wilson incorporated convective effects into his spherical simulations to obtain successful explosions. The shock wave stalls around 100 milliseconds after bounce and recedes down to several tens of kilometers to form a stationary accretion shock. Despite the absence of an explosion, we have revealed the differences caused by the two EOSs in many aspects, which might give some hints for a successful explosion. We have seen interesting differences in the composition of free protons and nuclei during the collapse phase of the supernova core. The difference in the symmetry energy of the two EOSs causes this effect, which can change the electron capture rates and the resulting size of the bounce cores. Although the early shock propagations turn out to be similar in the current simulations, due to the counteracting effects of the EOS stiffness and the neutrino heating, the implementation of up-to-date electron capture rates on nuclei remains to be done to obtain a more quantitatively reliable difference of composition during the collapse phase, which may then affect the initial shock energy. During the postbounce evolution around 100 milliseconds after bounce, we have seen that the heating rates in the two models are different due to the different luminosities and compositions predicted by the two EOSs. Unfortunately, the merit of the larger inner core found in the model with SH-EOS is mostly canceled by the smaller heating rate, and the behavior of the shock wave in the early postbounce phase turns out to be similar in the two simulations. In general, though, different heating rates due to spectral changes of the neutrinos and compositional differences due to the EOSs might contribute to the revival of the shock wave in the neutrino-driven mechanism. One of the most important facts we have revealed in the comparison is that a larger difference actually appears from 200 milliseconds after bounce, when the central core contracts to become a protoneutron star. The temperature and density profiles display larger differences as the protoneutron star shrinks further. It is in this late phase that we are interested in possible influences of the EOS on the shock dynamics, since the central density becomes high enough that the difference of EOS becomes more apparent. In the current study, we have not found any shock revival in either model. We have found, however, distinctly different thermal evolution of the protoneutron stars in the two models, and the resulting neutrino spectra are clearly different at this stage. This difference might have some influence on the accretion of matter.
The following evolution of protoneutron star cooling or formation of a black hole or any other exotic objects will certainly be affected. After all, the current numerical simulations of core-collapse supernovae in spherical symmetry have not given successful explosions, even with a new EOS or after long-term evolution. One might argue that this situation indicates the necessity of breaking spherical symmetry, which is also suggested by some observations and has been supported by multi-dimensional simulations. However, before one goes to the conclusion that the asymmetry is essential in the explosion mechanism, one also has to make efforts to find missing ingredients in microphysics (such as hyperons in EOS, for example) in spherically symmetric simulations. Moreover, the spherical simulations serve as a reliable basis for multi-dimensional computations of \\(\ u\\)-radiation-hydrodynamics. Convection may be somehow taken into account effectively in spherical codes as in the stellar evolution codes. These extensions of simulations and microphysics are now in progress. The extension of the relativistic EOS table by including strangeness particles at high densities has been recently made (Ishizuka 2005) and corresponding neutrino reactions in hyperonic matter are currently being implemented in \\(\ u\\)-radiation-hydrodynamics. K. S. expresses thanks to K. Oyamatsu, A. Onishi, K. Kotake, T. Kajino, Tony Mezzacappa and Thomas Janka for stimulating discussions and useful suggestions. K. S. thanks partial supports from MPA in Garching and INT in Seattle where a part of this work has been done. The numerical simulations have been performed on the supercomputers at RIKEN, KEK (KEK Supercomputer Project No. 108), JAERI (VPP5000) and NAO (VPP5000 System Projects yks86c, rks07b, rks52a). This work is supported by the Grant-in Aid for Scientific Research (14039210, 14079202, 14740166, 15540243, 15740160) of the Ministry of Education, Science, Sports and Culture of Japan. This work is partially supported by the Grant-in-Aid for the 21st century COE program \"Holistic Research and Education Center for Physics of Self-organizing Systems\". ## References * Bethe & Wilson (1985) Bethe, H. A., & Wilson, J. R. 1985, ApJ, 296, 14 * Blondin et al. (2003) Blondin, J. M., Mezzacappa, A., & DeMarino, C. 2003, ApJ, 584, 971 * Braaten & Segel (1993) Braaten, E., & Segel, D. 1993, Phys. Rev. D, 48, 1478 * Brockmann & Machleidt (1990) Brockmann, R., & Machleidt, R. 1990, Phys. Rev. C, 42, 1965 * Bruenn (1985) Bruenn, S. W. 1985, ApJS, 58, 771 * Bruenn (1989) Bruenn, S. W. 1989, ApJ, 340, 955 * Buras et al. (2003) Buras, R., Rampp, M., Janka, H.-Th., & Kifonidis, K. 2003, Phys. Rev. Lett., 90, 241101 * Burrows et al. (2000) Burrows, A., Young, T., Pinto, P., Eastman, R., & Thompson, T. A. 2000, ApJ, 539, 865 * Burrows et al. (2005) Burrows, A., Reddy, S. & Thompson, T. A. 2005, Nucl. Phys. A, in press * Burrows et al. (2005)* () Colo, G., & Van Giai, N. 2004, Nucl. Phys. A, 731, 15 * () Dieperink, A. E. L., Dewulf, Y., Van Neck, D., Waroquier, M., & Rodin, V. 2003, Phys. Rev. C, 68, 064307 * () Friman, B. L., & Maxwell, O. V. 1979, ApJ, 232, 541 * () Fryer, C. L., & Warren, M. S. 2004, ApJ, 601, 391 * () Hillebrandt, W., & Wolff, R.G. 1985, in Nucleosynthesis-Challenges and New Developments, ed. W.D. Arnett and J.M. Truran, (Chicago: Univ. of Chicago), 131 * () Hillebrandt, W., Nomoto, K., & Wolff, R.G. 1984, A&A, 133, 175 * () Hirata, D., Sumiyoshi, K., Tanihata, I., Sugahara, Y., Tachibana, T., & Toki, H. 
1997, Nucl. Phys. A, 616, 438c * () Hix, W. R., Messer, O. E. B., Mezzacappa, A., Liebendoerfer, M., Sampaio, J., Langanke, K., Dean, D. J., & Mariez-Pinedo, G. 2003, Phys. Rev. Lett., 91, 201102 * () Horiguchi, T., Tachibana, T., Koura, H., & Katakura, J. 2000, Chart of the Nuclides, JAERI Ishizuka, C. 2005, Ph.D. thesis, Hokkaido University, Sapporo, Japan * () Janka, H.-Th., Buras, R., Kitaura Joyanes, F. S., Marek, A., & Rampp, M. 2004, in Proc. 12th Workshop on Nuclear Astrophysics, in press (astro-ph/0405289) * () Kotake, K., Yamada, S. & Sato, K. 2003, ApJ, 595, 304 * () Kotake, K., Sawai, H., Yamada, S. & Sato, K. 2004, ApJ, 608, 391 * () Lattimer, J. M., & Swesty, F. D. 1991, Nucl. Phys. A, 535, 331 * () Langanke, K., & Martinez-Pinedo, G. 2003, Rev. Mod. Phys., 75, 819 * () Liebendorfer, M., Mezzacappa, A., Thielemann, F.-K., Messer, O. E. B., Hix, W. R., & Bruenn, S. W. 2001, Phys. Rev. D, 63, 103004 * () Liebendorfer, M., Messer, O. E. B., Mezzacappa, A., Hix, W. R., Thielemann, F.-K., & Langanke, K. 2002, in Proc. 11th Workshop on Nuclear Astrophysics, ed. W. Hillebrandt & E. Muller (Garching: Springer), 126 * () Liebendorfer, M., Messer, O. E. B., Mezzacappa, A., Bruenn, S. W., Cardall, C. Y., & Thielemann, F.-K. 2004, ApJS, 150, 263 * ()* () Liebendorfer, M., Rampp, M., Janka, H.-Th., & Mezzacappa, A. 2005, ApJ, 620, 840 * () Maxwell, O. V. 1987, ApJ, 316, 691 * () Mezzacappa, A., & Bruenn, S. W. 1993, ApJ, 405, 669 * () Mezzacappa, A., & Bruenn, S. W. 1993, ApJ, 410, 740 * () Mezzacappa, A., Liebendorfer, M., Messer, O. E. B., Hix, W. R., Thielemann, F.-K., & Bruenn, S. W. 2001, Phys. Rev. Lett., 86, 1935 * () Misner, C. W., & Sharp, D. H. 1964, Phys. Rev. B, 136, 571 * () Moller, P., Nix, J. R., Myers, W. D., & Swiatecki, W. J. 1995, Atomic Data Nucl.Data Tables, 59, 185 * () Ozawa A. _et al._, 2001, Nucl. Phys. A, 691, 599 * () Rampp M. 2000, Ph.D. Thesis, Max-Planck Institute for Astrophysics, Garching, Germany * () Rampp, M., & Janka, H.-Th. 2000, ApJ, 539, L33 * () Rampp, M., & Janka, H.-Th. 2002, A&A, 396, 361 * () Rosswog, S. & Davies, M. B. 2003, MNRAS, 345, 1077 * () Serot, B.D., & Walecka, J.D. 1986, in Advances in Nuclear Physics Vol. 16, ed. J.W. Negele and E. Vogt (New York: Plenum Press), 1 * () Shen, H., Toki, H., Oyamatsu, K., & Sumiyoshi, K. 1998, Nucl. Phys. A, 637, 435 * () Shen, H., Toki, H., Oyamatsu, K., & Sumiyoshi, K. 1998, Prog. Theor. Phys., 100, 1013 * () Sugahara, Y., & Toki, H. 1994, Nucl. Phys. A, 579, 557 * () Sugahara, Y., Sumiyoshi, K., Toki, H., Ozawa, A., & Tanihata, I. 1996, Prog. Theor. Phys., 96, 1165 * () Sumiyoshi, K., Kuwabara, H., & Toki, H. 1995, Nucl. Phys. A, 581, 725 * () Sumiyoshi, K., Oyamatsu, K., & Toki, H. 1995, Nucl. Phys. A, 595, 327 * () Sumiyoshi, K., Suzuki, H., & Toki, H. 1995, A&A, 303, 475 * () Sumiyoshi, K., & Ebisuzaki, T. 1998, Parallel Computing, 24, 287 * ()Sumiyoshi, K., Suzuki, H., Otsuki, K., Terasawa, M., & Yamada, S. 2000, PASJ, 52, 601 * Sumiyoshi et al. (2001) Sumiyoshi, K., Terasawa, M., Mathews, G. J., Kajino, T., Yamada, S., & Suzuki, H. 2001, ApJ, 562, 880 * Sumiyoshi et al. (2004) Sumiyoshi, K., Suzuki, H., Yamada, S., & Toki, H. 2004, Nucl. Phys. A, 730, 227 * Suzuki (1990) Suzuki H. 1990, Ph.D. thesis, University of Tokyo, Tokyo, Japan * Suzuki (1993) Suzuki H. 1993, in Proceedings of the International Symposium on Neutrino Astrophysics: Frontiers of Neutrino Astrophysics, ed. Y. Suzuki and K. Nakamura (Tokyo:Universal Academy Press Inc.), 219 * Suzuki (1994) Suzuki H. 
1994, in Physics and Astrophysics of Neutrinos, ed. M. Fukugita and A. Suzuki (Tokyo:Springer-Verlag), 763 * Suzuki et al. (1995) Suzuki T. _et al._, 1995, Phys. Rev. Lett., 75, 3241 * Swesty et al. (1994) Swesty, F. D., Lattimer, J. M., & Myra, E. S. 1994, ApJ, 425, 195 * Thompson et al. (2003) Thompson, T. A., Burrows, A., & Pinto, P. 2003, ApJ, 539, 865 * Timmes & Arnett (1999) Timmes, F. X., & Arnett, D. 1999, ApJS, 125, 277 * Vretenar et al. (2003) Vretenar, D., Niksic, T., & Ring, P. 2003, Phys. Rev. C, 68, 024310 * Walder et al. (2004) Walder, R., Burrows, A., Ott, C. D., Livne, E., & Jarrah, M. 2004, submitted to ApJ (astro-ph/0412187) * Woosley & Weaver (1995) Woosley, S. E., & Weaver, T. 1995, ApJS, 101, 181 * Yamada (1997) Yamada, S. 1997, ApJ, 475, 720 * Yamada et al. (1999) Yamada, S., Janka, H.-Th., & Suzuki, H. 1999, A&A, 344, 533Figure 2: Radial positions of shock waves in models SH and LS are shown by thick and thin lines, respectively, as a function of time after bounce. The evolutions at early and late times are displayed in left and right panels, respectively. Small fluctuations in the curves are due to numerical artifact in the procedure to determine the shock position from a limited number of grid points. Figure 1: Radial trajectories of mass elements of the core of 15M\\({}_{\\odot}\\) star as a function of time after bounce in model SH. The location of shock wave is displayed by a thick dashed line. Figure 5: Lepton, electron and neutrino fractions at bounce are shown as a function of baryon mass coordinate by solid, dashed and dot-dashed lines, respectively. The results for models SH and LS are shown by thick and thin lines, respectively. Figure 6: Velocity profiles at bounce are shown as a function of baryon mass coordinate. The results for models SH and LS are shown by thick and thin lines, respectively. Figure 8: Luminosities of \\(\ u_{e}\\), \\(\\bar{\ u}_{e}\\) and \\(\ u_{\\mu/\\tau}\\) around the heating region are shown by solid, dashed and dot-dashed lines, respectively, as a function of radius at t\\({}_{pb}\\)=150 ms. The results for models SH and LS are shown by thick and thin lines, respectively. Figure 7: Heating rates at t\\({}_{pb}\\)=150 ms in two models are shown as a function of radius. Notation is the same as in Fig. 6. Figure 10: Temperature profiles at t\\({}_{pb}\\)=150 ms for two models are shown as a function of radius. Notation is the same as in Fig. 6. Figure 9: Mass fractions in dense matter around the heating region are shown as a function of radius at t\\({}_{pb}\\)=150 ms. Notation is the same as in Fig. 3. Figure 11: Snapshots of temperature profiles as a function of baryon mass coordinate from t\\({}_{pb}\\)=20 ms to t\\({}_{pb}\\)=1000 ms in models SH (left) and LS (right). Note that small peaks around the central grid are artificial due to the numerical treatment. Figure 12: Snapshots of entropy (left) and lepton fraction (right) profiles in model LS are shown as a function of baryon mass coordinate at t\\({}_{pb}\\)=100 ms (solid), t\\({}_{pb}\\)=250 ms (dashed), t\\({}_{pb}\\)=500 ms (dotted) and t\\({}_{pb}\\)=1000 ms (dot-dashed). Figure 13: Luminosities of \\(\ u_{e}\\), \\(\\bar{\ u}_{e}\\) and \\(\ u_{\\mu/\\tau}\\) are shown as a function of time after bounce. Notation is the same as in Fig. 6. Kinks around t\\({}_{pb}\\)=500 ms are due to numerical artifact due to the rezoning of mass coordinate. See the main text for details. 
Figure 14: Average energies of \\(\\nu_{e}\\), \\(\\bar{\\nu}_{e}\\) and \\(\\nu_{\\mu/\\tau}\\) are shown as a function of time after bounce. Notation is the same as in Fig. 13. Kinks around t\\({}_{pb}\\)=500 ms are a numerical artifact of the rezoning of the mass coordinate. See the main text for details.
We study the evolution of the supernova core from the beginning of the gravitational collapse of a 15M\\({}_{\\odot}\\) star up to 1 second after core bounce. We present results of spherically symmetric simulations of core-collapse supernovae obtained by solving general relativistic \\(\\nu\\)-radiation-hydrodynamics with implicit time-differencing. We aim to explore the long-term evolution of the shock wave and to investigate the formation of the protoneutron star together with the supernova neutrino signatures. These studies are done to examine the influence of the equation of state (EOS) on the postbounce evolution of the shock wave in the late phase and on the resulting thermal evolution of the protoneutron star. We compare two sets of EOS, namely those by Lattimer & Swesty (LS-EOS) and by Shen et al. (SH-EOS). We find that, for both EOSs, the core does not explode and the shock wave stalls in a similar manner in the first 100 milliseconds after bounce. The revival of the shock wave does not occur even after a long period in either case. However, the recession of the shock wave appears different beyond 200 milliseconds after bounce, reflecting the different thermal evolution of the central core. A more compact protoneutron star is found for LS-EOS than for SH-EOS, with a difference in the central density of a factor of \\(\\sim\\)2 and a difference of \\(\\sim\\)10 MeV in the peak temperature. The resulting spectra of supernova neutrinos differ to an extent that may be detectable by terrestrial neutrino detectors. supernovae: general -- stars: neutron -- neutrinos -- hydrodynamics -- equation of state
# Spatial Evidence for Transition Radiation in a Solar Radio Burst Gelu M. Nita\\({}^{1}\\), Dale E. Gary\\({}^{1}\\), and Gregory D. Fleishman\\({}^{1,2}\\) \\({}^{1}\\)New Jersey Institute of Technology, Newark, NJ 07102, \\({}^{2}\\)National Radio Astronomy Observatory, Charlottesville 22903, VA, USA ## 1 Introduction Microturbulence in cosmic sources governs the dynamics of energy release and dissipation in astrophysical and geospace plasmas, the formation of collisionless shock waves and current sheets, and is a key ingredient in stochastic acceleration (Fermi, 1949; Miller et al., 1997) and enhanced diffusion (Dolginov & Toptygin, 1966; Kennel & Petscheck, 1966; Lee, 2004) of nonthermal particles. The microturbulence may also affect the electromagnetic emission produced by fast particles, giving rise to Transition Radiation (TR), which was proposed nearly 60 years ago by two Nobel Prize winning physicists, Ginzburg and Frank (1946). TR in its original form (Ginzburg and Frank, 1946) results from a variation in phase speed of wave propagation at transition boundaries. The theory of TR has seen wide application in the laboratory (Cherry et al., 1974) and in cosmic ray detectors (Favuzzi et al.,2001; Wakely et al., 2004), although no naturally occurring radiation had been confirmed as TR. In the astrophysical context, TR must arise whenever nonthermal charged particles pass near or through small-scale inhomogeneities such as wave turbulence or dust grains. However, it was thought to be weak, and perhaps unobservable (Durand, 1973; Yodh, Artru & Ramaty, 1973; Fleishman & Kahler, 1992), until Platonov & Fleishman (1994) showed that its intensity can be greatly enhanced due to plasma resonance at frequencies just above the local plasma frequency. Spatially and spectrally resolved observations of this resonant transition radiation (RTR), if present, can provide quantitative diagnostics of plasma density, and of the level of microturbulence in the flaring region. A number of recent publications, based mainly on studies of individual events, indicate that RTR may be produced in solar radio bursts (Fleishman, 2001; Fleishman, Melnikov, & Shibasaki, 2002; Lee et al., 2003; LaBelle et al., 2003; Bogod & Yasnov, 2005). Most recently, we have described the observational characteristics expected for RTR in the case of solar bursts (Fleishman, Nita, & Gary, 2005), and found that the correlations and associations predicted for total power data are indeed present in the decimetric (\\(\\sim\\)1-3 GHz) components of a statistical sample of two-component solar continuum radio bursts. However, interpretations based on non-imaging data remain indirect (and, thus, ambiguous) until they can be combined with direct imaging evidence from multi-wavelength spatially resolved observations, which were missing in the previous studies. This report presents comprehensive (radio, optical, and soft X-ray) spatially resolved observations for one of the RTR-candidate bursts. As we describe below, these observations provide primarily three new confirmations: (1) the RTR and gyroemission sources are co-spatial, (2) the RTR component is associated with a region of high density, and (3) the RTR emission is \\(o\\)-mode polarized. Together with the already demanding spectral and polarization correlations found previously (Fleishman, Nita, & Gary, 2005), these new observations provide further strong evidence in favor of RTR. 
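Since the argument below repeatedly converts between observed radio frequencies and plasma densities, a short unit-conversion sketch may be useful. It uses the standard non-relativistic relation \\(f_{pe}\\approx 8.98\\times 10^{3}\\sqrt{n_{e}}\\) Hz (rounded to \\(9\\times 10^{3}\\sqrt{n}\\) Hz later in the text) and is purely illustrative; the specific numbers anticipate the values derived in Section 3.

```python
import math

def plasma_frequency_hz(n_e_cm3: float) -> float:
    """Electron plasma frequency: f_pe ~ 8.98e3 * sqrt(n_e) Hz, n_e in cm^-3."""
    return 8.98e3 * math.sqrt(n_e_cm3)

def density_from_fpe(f_hz: float) -> float:
    """Invert the relation: electron density (cm^-3) for a given f_pe in Hz."""
    return (f_hz / 8.98e3) ** 2

# A 2 GHz spectral peak emitted near the plasma frequency implies n_e ~ 5e10 cm^-3.
print(f"n_e for f_pe = 2 GHz     : {density_from_fpe(2.0e9):.1e} cm^-3")

# Conversely, a loop density of 3e11 cm^-3 puts f_pe near 4.9 GHz, so 2 GHz
# emission must escape from less dense, overlying layers.
print(f"f_pe for n_e = 3e11 cm^-3: {plasma_frequency_hz(3.0e11) / 1e9:.1f} GHz")
```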
## 2 Theoretical Expectations The two spectral components of such RTR candidate bursts (one at centimeter wavelengths due to the usual gyrosynchrotron (GS) mechanism, and one at decimeter wavelengths suspected as RTR), must be _co-spatial_ to allow an unambiguous RTR interpretation. An alternative explanation of such two-component bursts is that both spectral components are produced by the same (GS) emission mechanism (with different parameter combinations in the two components), but if the low and high frequencies come from the same source location this should merely broaden the spectrum. Having truly separate spectral components requires either completely different source locations, or different mechanisms, or both. Distinct spectral components having the _same source location_ is a strong indicator that each component is produced by _a different emission mechanism_. Therefore, direct observation of the spatial relationship between the spectral components of RTR candidate bursts is the key evidence needed to conclude the emission mechanism producing the decimetric spectral component. The theory of RTR in the astrophysical context is discussed in detail in a recent review paper (Platonov & Fleishman 2002). RTR arises as fast particles move through a plasma with small-scale variations (as short as the wavelength of the emitted wave) of the refractive index. Such variations may be provided by microturbulence-induced inhomogeneities of the plasma density or magnetic field. In the case of solar bursts, the main properties of this emission mechanism that can be checked against observations are: The emission (1) originates in a dense plasma, \\(f_{pe}\\gg f_{Be}\\), where \\(f_{pe}\\) and \\(f_{Be}\\) are the electron plasma- and gyro-frequencies; (2) has a relatively low peak frequency in the decimetric range, and so appears as a low-frequency component relative to the associated GS spectrum; (3) is co-spatial with or adjacent to the associated GS source; (4) varies with a time scale comparable to the accompanying GS emission (assuming a constant or slowly varying level of the necessary microturbulence); (5) is typically strongly polarized in the ordinary mode (\\(o\\)-mode), since the extraordinary mode (\\(x\\)-mode) is evanescent, as for any radiation produced at the plasma frequency in a magnetized plasma; (6) is produced typically by the lower-energy end of the same nonthermal electron distribution that produces the GS emission, with the _emissivity proportional to the instantaneous total number_ of the low-energy electrons in the source _at all times during the burst_ (in contrast to plasma emission, whose highly nonlinear emissivity is largely decoupled from electron number even though it may for a time display a similar proportionality); (7) has a high-frequency spectral slope that does not correlate with the spectral index of fast electrons (in contrast to GS radiation, which does). ## 3 Data Analysis Figure 1 presents the dynamic spectrum of the 2001 April 06 solar radio burst in intensity and circular polarization, observed with the Owens Valley Solar Array (Gary & Hurford 1999) (OVSA). This event is one of many observed with OVSA whose spectral behavior matches the expectations for RTR (Fleishman, Nita, & Gary 2005), but is the first for which detailed spatial comparison has been made. The RTR occurs at a restricted range of time and frequency shown by the bright red region in the bottom panel, which represents highly right hand circularly polarized (RCP) emission. 
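The polarization panel of Fig. 1 is, in essence, a map of the degree of circular polarization across the dynamic spectrum. The following minimal sketch shows how such a map can be formed from Stokes I and V arrays; it is not the OVSA reduction pipeline, and the sign convention (V > 0 for RCP) and the masking threshold are assumptions made for illustration.

```python
import numpy as np

def circular_polarization_degree(stokes_i, stokes_v, min_flux=1.0):
    """Degree of circular polarization rho_c = V / I for a dynamic spectrum.

    stokes_i, stokes_v : 2-D arrays (frequency x time) in the same flux units.
    min_flux : pixels with I below this (arbitrary) level are masked so that
        noise-dominated emission does not produce spurious polarization.
    Returns values in [-1, 1]; +1 is fully RCP under the assumed convention.
    """
    i = np.asarray(stokes_i, dtype=float)
    v = np.asarray(stokes_v, dtype=float)
    rho = np.full_like(i, np.nan)
    good = i > min_flux
    rho[good] = np.clip(v[good] / i[good], -1.0, 1.0)
    return rho

# Synthetic example: a weakly polarized broadband burst peaking near 7.4 GHz
# plus a strongly RCP, low-frequency component below 3 GHz.
freq = np.linspace(1.0, 18.0, 64)                      # GHz
time = np.arange(128.0)                                # arbitrary time bins
t_prof = np.exp(-((time[None, :] - 60.0) / 30.0) ** 2)
I = 5.0 + 20.0 * np.exp(-((freq[:, None] - 7.4) / 3.0) ** 2) * t_prof
V = 0.1 * I
V[freq < 3.0, 40:80] += 0.8 * I[freq < 3.0, 40:80]     # RCP patch at low freq
rho_c = circular_polarization_degree(I, V)
print(f"maximum polarization degree: {np.nanmax(rho_c):.2f}")   # ~0.9
```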
The results presented in Figures 2 and 3 confirm the expected spatial association of the RTR radio source with (i) the accompanying GS source, (ii) an unusually dense soft X-ray loop, and (iii) the underlying magnetic field structure, and hence offer further support for its interpretation as RTR emission. Comparing the required observational characteristics in the order 1-7 presented above, we find: 1. Both the RTR (2 GHz) and GS (7.4 GHz) sources arise in or near an unusually dense loop. The electron temperature inferred from SXT data (Fig. 3), averaged over the pixels lying inside the 85% 2 GHz RCP contour, is \\(2\\times 10^{7}\\) K, while the average emission measure corresponding to one pixel (\\(2.5\\times 2.5\\) arcsec) is \\(5.6\\times 10^{48}\\) cm\\({}^{-3}\\). Assuming a line of sight length of \\(\\sim\\)25 arcsec, the projected loop width, we obtain an estimate for the plasma density in the region as \\(3\\times 10^{11}\\) cm\\({}^{-3}\\). This value directly confirms the existence of a high plasma density in the flaring region, as suggested by the Razin effect diagnosis we employed previously (Fleishman, Nita, & Gary, 2005). The RTR peak frequency of 2 GHz implies, from the electron plasma frequency \\(f_{pe}=9\\times 10^{3}\\sqrt{n}\\) Hz, an electron density of \\(5\\times 10^{10}\\) cm\\({}^{-3}\\), compared with \\(3\\times 10^{11}\\) cm\\({}^{-3}\\) derived above for the underlying soft X-ray loop. The X-ray-derived density demonstrates the presence of high densities in the region, while the lower radio-derived density is expected since the 2 GHz radio emission will come primarily from overlying, less-dense regions due to significant free-free absorption in the higher-density regions. 2. As seen in Fig. 1, the RTR forms a distinct, low-frequency spectral component relative to the higher-frequency GS component. 3. Figs. 2 and 3 show that the RTR and GS sources are co-spatial. As already emphasized, this co-spatiality is highly conclusive in favor of RTR, since separate spectral components (of multi-component bursts) typically come from distinct locations (Gary & Hurford, 1990; Benz, Saint-Hilaire, & Vilmer, 2002). 4. Both spectral components are smooth in time and frequency, with comparable time scales, the main difference being that the GS component is delayed with respect to the RTR component (see Figs. 1 and 4). Note also in Fig. 4 the similarity of high-energy HXR with the GS (7.4 GHz) component, and low-energy HXR with the RTR (2 GHz) component, which we discuss in more detail in item 6, below. 5. Figs. 1 and 2 show that the RTR emission is strongly polarized in the sense of the \\(o\\)-mode, as required, while the GS emission is \\(x\\)-mode. The radio maps at 7.4 GHz in Fig. 2 (filled contours) reveal RCP (red) overlying positive (white) magnetic polarity and LCP (blue) overlying negative (black) polarity, located on opposite sides of the neutral line. This clearly shows a relatively high degree of \\(x\\)-mode polarization of both 7.4 GHz radio sources. At 2 GHz (unfilled contours), exactly the opposite spatial correspondence is seen, with RCP (red) overlying negative magnetic polarity and (the much weaker) LCP (blue) slightly shifted toward positive polarity. This clearly shows a high degree of \\(o\\)-mode polarization for the RTR spectral component. 6. Indirect statistical evidence for the RTR component being due to low energy electrons was obtained from spectral correlations (Fleishman, Nita, & Gary 2005). 
A more reliable estimate of the energy of the fast electrons involved comes from a comparison of the radio and hard X-ray light curves in Figure 4.\\({}^{1}\\) We first note the similarity of the RTR light curve and the 41-47 keV hard X-ray light curve. As shown by Nitta & Kosugi (1986), hard X-rays are due to electrons of energy 2-3 times higher than the photon energy, so that 41-47 keV HXR correspond to \\(\sim 100-150\\) keV electrons. In contrast, the GS light curve at 7.4 GHz displays a poor correlation with the 41-47 keV HXR, but an excellent correlation with the higher energy HXR light curve, at 128-157 keV, produced by electrons of \\(\sim 250-450\\) keV. This is consistent with the well known result that GS emission comes from electrons of energy typically \\(>300\\) keV (Bastian, Benz, & Gary 1998). The similarity of the shape and timing of the 128-157 keV HXR and 7.4 GHz light curves, and of the 41-47 keV HXR and 2 GHz light curves, is consistent with their being due to electrons of energies \\(\gtrsim 300\\) keV and \\(\lesssim 150\\) keV, respectively. It is reasonable to conclude that the RTR and GS emission, being essentially co-spatial, are produced by different parts of a single electron energy distribution. Footnote 1: Note that in most incoherent emission mechanisms, the spatially resolved brightness temperature provides a lower limit to the energy of the emitting electrons. The brightness temperature of the 2 GHz RCP source in Fig. 2 reaches \\(2.5\times 10^{9}\\) K, which for a typical incoherent mechanism would correspond to a particle energy of about 220 keV (indeed lower than the energy of the synchrotron-emitting fast electrons specified below). However, for the RTR case this argument is inconclusive, since the brightness temperature of RTR depends on the effective energies of both the fast electrons and the nonthermal density fluctuations, rather than of the fast electrons only. 7. As reported in an earlier paper (Fleishman, Nita, & Gary 2005, Fig. 7), the high-frequency slopes of the RTR and GS spectra for this event are uncorrelated, which provides an independent confirmation that the low-frequency component is not simply a low-frequency GS source.

## 4 Discussion

The above characteristics rule out standard GS emission for the low-frequency spectral component, while they are expected for, and agree fully with, RTR. An alternative model that might account for the presence of a co-spatial, yet distinct, dm-continuum spectral component--quasi-stationary plasma emission due to a marginally stable regime of a loss-cone instability--is much more difficult to eliminate, or even to distinguish from RTR. Indeed, properties 1, 2, 6, 7 are typical also for plasma emission, and properties 3, 4, 5, while not required for plasma emission, are not inconsistent with it. We believe that the key evidence distinguishing RTR from plasma emission is the strict proportionality between the radio flux and the number of emitting electrons on all time scales, as suggested by the agreement between the RTR time profile and the low-energy hard X-ray light curve of Fig. 4. This proportionality, based on the spectral properties of the dm bursts, was found in all of the bursts studied by Fleishman, Nita, & Gary (2005). We note, however, that a temporal resolution better than the 4 s we have available will be needed to check this property down to millisecond time scales.
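The light-curve comparison underlying this argument can be sketched numerically. In the fragment below (Python) the time series are synthetic stand-ins for the OVSA 2 and 7.4 GHz fluxes and the Yohkoh WBS 41-47 and 128-157 keV counts (the real data are shown in Fig. 4); each curve is normalized to its maximum after 19:21 UT and pairwise Pearson correlations are computed, which is the kind of test the proportionality argument relies on. This is an illustration of the procedure, not the code used for the published figure.

```python
import numpy as np

def normalize(flux, t, t0):
    """Normalize a light curve to its maximum after time t0 (cf. Fig. 4)."""
    m = t >= t0
    return flux / flux[m].max()

def pearson(a, b):
    """Pearson correlation coefficient between two equally sampled curves."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

# Hypothetical, equally sampled (4 s cadence) series in seconds since 19:12 UT.
# In practice these would be read from OVSA total-power and Yohkoh WBS files.
t = np.arange(0.0, 1200.0, 4.0)
flux_2ghz = np.exp(-0.5 * ((t - 611.0) / 60.0) ** 2)     # stand-in for the 2 GHz (RTR) curve
flux_7ghz = np.exp(-0.5 * ((t - 651.0) / 80.0) ** 2)     # stand-in for the 7.4 GHz (GS) curve
hxr_41_47 = np.exp(-0.5 * ((t - 611.0) / 55.0) ** 2)     # stand-in for 41-47 keV counts
hxr_128_157 = np.exp(-0.5 * ((t - 648.0) / 75.0) ** 2)   # stand-in for 128-157 keV counts

t0 = 540.0  # ~19:21 UT
curves = {name: normalize(c, t, t0) for name, c in
          [("2 GHz", flux_2ghz), ("7.4 GHz", flux_7ghz),
           ("HXR 41-47 keV", hxr_41_47), ("HXR 128-157 keV", hxr_128_157)]}

for radio in ("2 GHz", "7.4 GHz"):
    for hxr in ("HXR 41-47 keV", "HXR 128-157 keV"):
        print(f"r({radio}, {hxr}) = {pearson(curves[radio], curves[hxr]):.2f}")
```

With the real data, the expectation is that the 2 GHz curve correlates best with the 41-47 keV channel and the 7.4 GHz curve with the 128-157 keV channel.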
Nevertheless, we looked for further evidence favoring the plasma emission interpretation of the smooth dm component and conclude that this model (even though not firmly eliminated) is not supported by the data. For example, the high degree of \\(o\\)-mode polarization of the dm continuum implies fundamental rather than harmonic plasma emission, although the latter is typically much easier to generate in the coronal plasma. However, spectra in this burst and in the other bursts studied by Fleishman, Nita, & Gary (2005), at no time show any hint of a second harmonic spectral feature. Furthermore, quasi-steady plasma emission requires a significant loss-cone anisotropy, which in turn gives rise to a widely observed loop-top peak brightness for the optically thin GS radio emission (Melnikov et al., 2002). In contrast, the 7.4 GHz source displays a clear separation into \\(x\\)-polarized kernels (corresponding to leg or foot-point sources, rather than a loop-top source), thus, any pitch-angle anisotropy is at best very modest. This conclusion is also supported by the statistical evidence found in (Fleishman, Nita, & Gary, 2005) in favor of more isotropic (than on average) distributions of the fast electrons in the RTR-producing bursts. Therefore, all the properties specific for RTR and those common for both RTR and plasma emission are observed, while no specific property expected solely for the plasma emission is seen, which leads us to favor RTR. We have presented ample evidence that the decimetric component of the 2001 April 06 radio burst near 19:23 UT is produced by the RTR mechanism. Since this event is one among a set of other events with similar, unique characteristics, the evidence presented here supports the conclusions made by Fleishman, Nita, & Gary (2005), based on total power data for a statistical sample of the bursts candidates, that these bursts are due to RTR. The importance of this result is several-fold. First, it strengthens the case for RTR as another incoherent continuum emission mechanism in astrophysical plasmas, among only a small number of others: gyrosynchrotron/synchrotron emission, bremsstrahlung, and inverse Compton emission. Second, there are a few types of solar radio continuum, e.g., type I and type IV m/dm, which are conventionally ascribed to plasma emission. We point out that this interpretation has never been quantitatively proved, and RTR represents a plausible alternative to the current interpretation, which we believe calls for revisiting the issue of the origin of non-GS solar radio continua. Third, with new radio facilities in development that are capable of simultaneous spatial and spectral measurements of solar bursts (e.g. Expanded VLA (Perley et al., 2004) and Frequency Agile Solar Radiotelescope (FASR) (Bastian, 2004)), RTR can be routinely recognized and used as a diagnostic of the plasma density, the low-energy part of the electron energy distribution, and of the presence and quantitative level of microturbulence. In this event, for example, the level of inhomogeneities derived from the RTR flux, described by Eq. (403) in Platonov & Fleishman (2002), is \\(\\left\\langle\\Delta n^{2}\\right\\rangle/n^{2}\\sim 10^{-5}\\). Thus, RTR may provide a sensitive tool for measuring this elusive but important quantity. We acknowledge NSF grant AST-0307670 and NASA grant NAG5-11875 to NJIT. The NRAO is a facility of the NSF operated under cooperative agreement by Associated Universities, Inc. We gratefully acknowledge the help of J. 
Qiu in providing the X-ray and MDI data. ## References * (1) * (2) Acton, L., Tsuneta, S., Ogawara, Y., Bentley, R., Bruner, M., Canfield, R., Culhane, L., Doschek, G., Hiei, E. & Hirayama, T. 1992, Science, 258(5082), 618 * (3) Bastian, T.S., Benz, A.O. & Gary, D.E. 1998, ARA&A, 36, 131 * D. E. Gary & C. U. Keller, Astrophysics and Space Science Library, Kluwer), 314, 47 * (5) Benz, A. O., Saint-Hilaire, P., Vilmer, N. 2002, A&A, 383, 678 * (6) Bogod, V.M., & Yasnov, L.V. 2005, Astron. Repts., 49, 144 * (7) Cherry, M.L., Hartmann, G., Muller, D. & Prince, T.A. 1974, Phys. Rev. D, 10(11), 3594 * (8) Dolginov, A.Z., & Toptygin, I.N. 1966, ZhETF, 51, 1771 * (9) Domingo, V., Fleck, B. & Poland, A. I. 1995, Sol. Phys., 162, 1 * (10) Durand, L. 1973, ApJ, 182, 417 * (11) Favuzzi, C., Giglietto, N., Mazziotta, M.N. & Spinelli, P. 2001, Riv. Nuovo Cim. 24(5-6), 1 * (12) Fermi, E., Phys. Rev. 1949, 75(8), 1169 * (13) Fleishman, G.D. 2001, Astronomy Letters, 26, 254 * (14) Fleishman G.D. & Kahler S.W. 1992, ApJ, 394, 688 * (15) Fleishman, G.D., Nita, G.M. & Gary, D.E. 2005, ApJ, 620, 506 * (16) Fleishman, G.D., Melnikov, V.F., & Shibasaki, K., P_roc. \\(10^{th}\\) European Meeting on Solar Physics_, Prague, Czech Rep., 9-14 September 2002 * (17) Gary, D. E. & Hurford, G. J. 1990, ApJ, 361, 290 * (18) Gary, D.E. & Hurford, G.J. 1999, in _Proceedings of the Nobeyama Symposyum, Kiyosoto, Japan_, NRO Rep. 479, 429 * (19) Ginzburg, V.L. & Frank, I.M. 1946, Zh. Eksp. Teor. Fiz. 16, 15 * (20) Kennel, C. F. & Petscheck, H. E. 1966, J. Geophys. Res., 71, 1 * (21) LaBelle, J., Treumann, R. A., Yoon, P. H., & Karlicky, M. 2003, ApJ, 593, 1195 * (22)Lee, J.W. 2004, in _Solar and Space Weather Radiophysics_(Eds - D. E. Gary & C. U. Keller, Astrophysics and Space Science Library, Kluwer), 314, 179 * () Lee, J., Gallagher, P.T., Gary, D.E., Nita, G.M., Choe, G.S., Bong, S.C. & Yun, H.S. 2003, ApJ, 585, 524 * () Melnikov, V.F., Shibasaki, K., & Reznikova, V.E. 2002, ApJ, 580, L185 * () Miller, J. A., Cargill, P. J., Emslie, A., Holman, G. D., Dennis, B. R., Larosa, T. N., Winglee, R. M., Benka, S. G. & Tsuneta, S. 1997, J. Geophys. Res., 102, 14631 * () Nitta, N. & Kosugi, T. 1986, Sol. Phys., 105, 73 * () Platonov K.Yu. & Fleishman G.D. 1994, Zh. Exper. Teor. Fiz., 106(4), 1053 (transl.: _JETP_, 79(4), 572-580). * () Platonov K.Yu. & Fleishman G.D. 2002, Uspekhi Fiz. Nauk., 172(3), 241(transl.: _Physics Uspekhi_, 45(3), 235-291). * () Perley, Richard A., Napier, Peter J. & Butler, Bryan J. 2004, _Proceedings of the SPIE_, 5489, 484 * () Qiu, J. & Gary, D. E. 2003, ApJ, 599(1), 615 * () Wakely, S. P., Plewnia, S., Muller, V., Horandel, J. R. & Gahbauer, F. 2004, _Nuclear Instruments and Methods in Physics Research Section A_, 531(3), 435 * () Yodh, G. B., Artru, X., & Ramaty, R. 1973, ApJ, 181, 725Figure 1: 2001 April 06 after 19:12 UT. Upper panel: Total power dynamic spectrum recorded by OVSA with 4 s time resolution at 40 frequencies in the [1.2–18] GHz range. Lower panel: Dynamic spectrum of circular polarization with 8 s time resolution at the same frequencies as in the upper panel. The period of RTR is the highly polarized (red) emission in the lower panel. Two spectral components are visible in the upper panel during this time: the low frequency RTR component, which peaks at 19:22:11 UT (3700 sfu at 2 GHz), and the delayed high frequency GS component, which peaks at 19:22:51 UT (2300 sfu at 7.4 GHz). Figure 2: 2001 April 06, OVSA radio maps (19:22:03 UT). 
The RCP (red contours) and LCP (blue contours) at 2 GHz (unfilled contours) and 7.4 GHz (filled contours) are overlaid on the SOHO (Domingo, Fleck, & Poland, 1995) MDI magnetogram (19:22:02 UT). The dashed ovals represent the half power OVSA beam at the two selected frequencies. The radio contours are scaled separately for each frequency and polarization, and only 3 are shown for clarity, representing 55, 75, and 95% of the maximum intensity. The maximum brightness temperatures are 2500 MK (2 GHz RCP), 770 MK (2 GHz LCP), 880 MK (7.4 GHz RCP) and 600 MK (7.4 GHz LCP). Small islands of apparent magnetic field sign reversal in regions of both polarities are an instrumental artifact (Qiu & Gary, 2003) and not real. Within the instrumental resolution (see the corresponding beam size), the 2 GHz RCP source (red, unfilled contour) is co-located with the 7.4 GHz LCP source (blue, filled contour) in the negative magnetic field region. We conclude that both low and high frequency emissions are likely produced by the same population of electrons travelling along the same magnetic loop. Remarkably, for both frequencies, the _intrinsic_ degrees of polarization implied by the radio maps are noticeably larger than those suggested by the unresolved polarization spectrum presented in the lower panel of Fig. 1. Figure 3: 2001 April 06. Emission measure (EM) map (19:22:00 UT) derived from the Yohkoh SXT instrument (Acton et al., 1992), using data obtained with two different filters (Be119 and Al12). For clarity, only the OVSA (19:22:03 UT) RCP 2 GHz (red contours) and LCP 7.4 GHz (blue contours) are overlaid here. The EM map reveals the existence of a magnetic loop or arcade of loops filled with hot and dense plasma, which is consistent with the magnetic and radio topology presented in Fig. 2. The 2 GHz RCP radio source and the 7.4 GHz LCP kernel are well aligned with the most dense section of the loop. Figure 4: OVSA total power lightcurves at 2 GHz (red line) and 7.4 GHz (blue line), and Yohkoh (Acton et al. 1992) WBS hard X-ray counts in the 41-47 keV (thick line) and 128-157 keV (thin line) ranges. Each curve has been normalized to the corresponding maximum values recorded after 19:21 UT (3700 sfu at 2 GHz, 2300 sfu at 7.4 GHz, 2088, and 244 HXR counts, respectively). The 128-157 keV hard X-ray and 7.4 GHz time profiles are similar, which is consistent with the 7.4 GHz emission being due to electrons of energy \\(>300\\) keV. The RTR emission at 2 GHz peaks about 1 min earlier, has quite different time behavior, and best correlates with the 41-47 keV hard X-rays channel, reflecting the fact that it is due to a lower-energy part of the same population, and also depends on other parameters such as the level of density fluctuations.
Microturbulence, i.e. enhanced fluctuations of plasma density, electric and magnetic fields, is of great interest in astrophysical plasmas, but occurs on spatial scales far too small to resolve by remote sensing, e.g., at \\(\\sim\\)1-100 cm in the solar corona. This paper reports spatially resolved observations that offer strong support for the presence in solar flares of a suspected radio emission mechanism, resonant transition radiation, which is tightly coupled to the level of microturbulence and provides direct diagnostics of the existence and level of fluctuations on decimeter spatial scales. Although the level of the microturbulence derived from the radio data is not particularly high, \\(\\left<\\Delta n^{2}\\right>/n^{2}\\sim 10^{-5}\\), it is large enough to affect the charged particle diffusion and give rise to effective stochastic acceleration. This finding has exceptionally broad astrophysical implications since modern sophisticated numerical models predict generation of much stronger turbulence in relativistic objects, e.g., in gamma-ray burst sources. radiation mechanisms: nonthermal - Sun: flares - Sun: radio radiation
# Effect of Isovector-Scalar Meson on Neutron Star Matter in Strong Magnetic Fields

F.X. Wei\\({}^{1}\\) Institute of High Energy Physics, Chinese Academy of Sciences, P.O. Box 918(4), Beijing 100049, China\\({}^{2}\\) G.J. Mao\\({}^{2,3}\\) C. M. Ko Cyclotron Institute and Physics Department, Texas A&M University, College Station, Texas 77843-3366 L.S. Kisslinger Department of Physics, Beihang University, Beijing 100083, China H. Stocker Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe-University, Max-von-Laue-Str. 1, D-60438 Frankfurt am Main, Germany W. Greiner Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe-University, Max-von-Laue-Str. 1, D-60438 Frankfurt am Main, Germany Footnote 1: E-mail: [email protected] ###### PACS numbers: 26.60.+c, 21.65.+f

## 1 Introduction

In the standard relativistic mean field (RMF) model [1, 2] of nuclear matter, the \\(\sigma\\), \\(\omega\\) and \\(\rho\\) mesons are used in descriptions of nuclear interactions. The short range of the isovector-scalar meson \\(a_{0}(980)\\) (the \\(\delta\\) meson) exchange justifies neglecting its contribution to symmetric nuclear matter. However, for strongly isospin-asymmetric matter at the high densities found in neutron stars, the contribution of the \\(\delta\\)-field should be considered. Recent theoretical studies [3, 4, 5, 6, 7] motivate the investigation of the effect of the isovector-scalar meson on neutron-star matter. They have found [4] that the \\(\delta\\)-field leads to a large repulsion in dense neutron-rich matter and to a definite splitting of the proton and neutron effective masses. The energy per particle of neutron matter becomes larger at high densities than the one obtained with no \\(\delta\\)-field included, and the proton fraction of \\(\beta\\)-stable matter increases [5]. Those properties play an important role in the description of the structure and stability conditions of neutron stars. A splitting of the proton and neutron masses can also affect the transport properties of dense matter [8]. As is well known, there are strong magnetic fields of \\(10^{14}\\) G [9] on the neutron star surface. The strength of magnetic fields in the interior of neutron stars can be up to \\(10^{18}\\) G [10]. Neutron-star matter in strong magnetic fields has been studied without the isovector-scalar field, with interesting and novel results [11, 12, 13]. Our main aim is to investigate the influence of the \\(\delta\\)-field on the properties of neutron-star matter in the presence of magnetic fields. Theoretical studies of the effects of very strong magnetic fields on the EOS of neutron-star matter indicated [11] that the softening of the EOS caused by Landau quantization is overwhelmed by the stiffening due to the incorporation of the anomalous magnetic moments (AMM) of the nucleons. At high baryon densities, muons can be produced in charge-neutral, beta-equilibrated matter through the channel \\(e^{-}\longleftrightarrow\mu^{-}+\nu_{e}+\overline{\nu}_{\mu}\\), as soon as the chemical potential of the electrons \\(\mu_{e}\\) reaches a value equal to the muon rest mass. In cold neutron stars neutrinos and photons have already escaped, and their chemical potentials can be set to zero. Consequently, we get \\(\mu_{e}=\mu_{\mu}\\) (\\(\mu_{\mu}\\) is the chemical potential of muons). As the baryon density increases, the densities of muons and electrons become comparable with that of nucleons. The inclusion of the anomalous magnetic moments of the leptons is therefore also relevant.
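The muon threshold just described can be illustrated with a simple sketch. The fragment below (Python) treats the leptons as free relativistic Fermi gases at zero temperature and zero field -- a simplification, not the interacting model developed below -- and solves the charge-neutrality condition \\(\rho_{p}=\rho_{e}+\rho_{\mu}\\) for the common lepton chemical potential \\(\mu_{e}=\mu_{\mu}\\) at a few illustrative proton densities; muons appear only once \\(\mu_{e}\\) exceeds \\(m_{\mu}\approx 105.7\\) MeV.

```python
import numpy as np

HBARC = 197.327             # MeV fm
M_E, M_MU = 0.511, 105.658  # MeV

def number_density(mu, mass):
    """Zero-T number density (fm^-3) of one lepton species with chemical potential mu (MeV)."""
    if mu <= mass:
        return 0.0
    kf = np.sqrt(mu**2 - mass**2) / HBARC   # Fermi momentum in fm^-1
    return kf**3 / (3.0 * np.pi**2)

def lepton_chemical_potential(rho_p, mu_max=500.0):
    """Solve rho_e(mu) + rho_mu(mu) = rho_p for mu by bisection (mu in MeV, rho in fm^-3)."""
    lo, hi = 0.0, mu_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if number_density(mid, M_E) + number_density(mid, M_MU) < rho_p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative proton densities (fm^-3); muons switch on where mu_e crosses m_mu.
for rho_p in (0.002, 0.005, 0.01, 0.02):
    mu = lepton_chemical_potential(rho_p)
    frac_mu = number_density(mu, M_MU) / rho_p
    print(f"rho_p = {rho_p:.3f} fm^-3  ->  mu_e = mu_mu = {mu:6.1f} MeV, muon fraction = {frac_mu:.2f}")
```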
In this paper we will study the effects of the AMM of nucleons and leptons in dense neutron-star matter including the \\(\delta\\)-field effect. In the following section, the Lagrangian field theory of interacting nucleons and mesons including magnetic fields will be introduced. The numerical results are given in Section 3, where we separate the cases with and without the inclusion of the AMM effects. The modification of the proton and neutron effective masses in dense matter with strong magnetic fields will be discussed in detail, and the energy per particle and the EOS will be studied as well. In Section 4, we summarize our results and discuss prospects for including additional components such as hyperons and quarks.

## 2 Formalism

The application of Lagrangian field theory to the study of neutron stars was first carried out by Glendenning[14]. We consider neutron-star matter consisting of neutrons, protons, electrons and muons in the presence of a uniform magnetic field B along the \\(z\\)-axis. The Lagrangian density can be written as \\[{\cal L} = \bar{\psi}_{b}[i\gamma_{\mu}\partial^{\mu}-q_{b}\frac{1+\tau_{0}}{2}\gamma_{\mu}A^{\mu}-\frac{1}{4}\kappa_{b}\mu_{N}\sigma_{\mu\nu}F^{\mu\nu}-M_{b}+g_{\sigma}\sigma+g_{\delta}\mathbf{\tau}\cdot\mathbf{\delta}-g_{\omega}\gamma_{\mu}\omega^{\mu}-g_{\rho}\gamma_{\mu}\mathbf{\tau}\cdot{\bf R}^{\mu}]\psi_{b} \tag{1}\\] \\[+\bar{\psi}_{l}[i\gamma_{\mu}\partial^{\mu}-q_{l}\gamma_{\mu}A^{\mu}-\frac{1}{4}\kappa_{l}\mu_{B}\sigma_{\mu\nu}F^{\mu\nu}-m_{l}]\psi_{l}+\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma-U(\sigma)-\frac{1}{2}m_{\sigma}^{2}\sigma^{2}+\frac{1}{2}\partial_{\mu}\mathbf{\delta}\partial^{\mu}\mathbf{\delta}\\] \\[-\frac{1}{2}m_{\delta}^{2}\mathbf{\delta}^{2}-\frac{1}{4}\omega_{\mu\nu}\omega^{\mu\nu}+\frac{1}{2}m_{\omega}^{2}\omega_{\mu}\omega^{\mu}-\frac{1}{4}{\bf R}_{\mu\nu}\cdot{\bf R}^{\mu\nu}+\frac{1}{2}m_{\rho}^{2}{\bf R}_{\mu}{\bf R}^{\mu}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu},\\] where \\(\psi_{b}\\) and \\(\psi_{l}\\) are the baryon (b = n, p) and lepton (l = e, \\(\mu\\)) fields; \\(\sigma\\), \\(\omega_{\mu}\\), \\({\bf R}\\), \\(\delta\\) represent the scalar, vector, isovector-vector and isovector-scalar meson fields, which are exchanged in the description of the nuclear interactions. \\(A^{\mu}\equiv(0,0,Bx,0)\\) refers to a constant external magnetic field along the \\(z\\)-axis. The field tensors for the \\(\omega\\), \\(\rho\\) and magnetic field are given by \\(\omega_{\mu\nu}=\partial_{\mu}\omega_{\nu}-\partial_{\nu}\omega_{\mu}\\), \\({\bf R}_{\mu\nu}=\partial_{\mu}{\bf R}_{\nu}-\partial_{\nu}{\bf R}_{\mu}\\) and \\(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\\). \\(U(\sigma)\\) is the self-interaction part of the scalar field[15]: \\(U(\sigma)=\frac{1}{3}b\sigma^{3}+\frac{1}{4}c\sigma^{4}\\). \\(M_{b}\\) and \\(m_{l}\\) are the free baryon and lepton masses, and \\(m_{\sigma}\\), \\(m_{\omega}\\), \\(m_{\rho}\\), \\(m_{\delta}\\) are the masses of the \\(\sigma\\), \\(\omega\\), \\(\rho\\) and \\(\delta\\) mesons, respectively.
\\(\mu_{N}\\) and \\(\mu_{B}\\) are the nuclear magneton and the Bohr magneton, respectively; \\(\kappa_{p}\\) = 3.5856, \\(\kappa_{n}\\) = -3.8263 and \\(\kappa_{l}=\frac{\alpha_{l}}{\pi}\\), with \\(\alpha_{e}=1159652188(4)\times 10^{-12}\\) (4 ppb) and \\(\alpha_{\mu}=11659203(8)\times 10^{-10}\\) (0.7 ppm) [16, 17, 18], are the AMM coefficients of protons, neutrons, electrons and muons, respectively. The anomalous magnetic moment is then defined by the coupling of the baryons and leptons to the electromagnetic field tensor, with \\(\sigma_{\mu\nu}=\frac{i}{2}[\gamma_{\mu},\gamma_{\nu}]\\) and \\(\kappa_{i}\\) (\\(i=n,p,e,\mu\\)) given above. The field equations in the mean field approximation (MFA), in which the meson fields are replaced by their expectation values in the many-body ground state, are given by \\[(i\gamma_{\mu}\partial^{\mu}-q_{b}\frac{1+\tau_{0}}{2}\gamma_{\mu}A^{\mu}-\frac{1}{4}\kappa_{b}\mu_{N}\sigma_{\mu\nu}F^{\mu\nu}-m_{b}^{*}-g_{\omega}\gamma_{\mu}\omega^{\mu}-g_{\rho}\gamma_{0}\tau_{3b}R_{0}^{0})\psi_{b}=0, \tag{2}\\] \\[(i\gamma_{\mu}\partial^{\mu}-q_{l}\gamma_{\mu}A^{\mu}-\frac{1}{4}\kappa_{l}\mu_{B}\sigma_{\mu\nu}F^{\mu\nu}-m_{l})\psi_{l}=0,\\] (3) \\[m_{\sigma}^{2}\sigma+b\sigma^{2}+c\sigma^{3}=g_{\sigma}\rho_{s},\\] (4) \\[m_{\omega}^{2}\omega_{0}=g_{\omega}\rho_{b},\\] (5) \\[m_{\rho}^{2}R_{00}=g_{\rho}(\rho_{p}-\rho_{n}),\\] (6) \\[m_{\delta}^{2}\delta_{0}=g_{\delta}(\rho_{s}^{p}-\rho_{s}^{n}). \tag{7}\\] The energy-momentum tensor can be written as \\[T_{\mu\nu} = i\bar{\psi}_{b,l}\gamma_{\mu}\partial_{\nu}\psi_{b,l}+g_{\mu\nu}[\ \frac{1}{2}m_{\sigma}^{2}\sigma^{2}+U(\sigma)+\frac{1}{2}m_{\delta}^{2}\mathbf{\delta}^{2}-\frac{1}{2}m_{\omega}^{2}\omega_{\lambda}\omega^{\lambda} \tag{8}\\] \\[-\frac{1}{2}m_{\rho}^{2}{\bf R}_{\lambda}{\bf R}^{\lambda}+\frac{B^{2}}{2}]+\partial_{\nu}A^{\lambda}F_{\lambda\mu}.\\] Here \\(\rho_{b}=\rho_{p}+\rho_{n}\\) is the baryon number density, and \\(\rho_{s}=\rho_{s}^{p}+\rho_{s}^{n}\\) is the scalar number density. In the mean field approximation, the first two isospin components of \\({\bf R}_{\mu}\\) and \\(\mathbf{\delta}\\) vanish, i.e., \\(\langle{\bf R}^{\mu}\rangle=\langle R_{0}^{\mu}\rangle\\) and \\(\langle\mathbf{\delta}\rangle=\langle\delta_{0}\rangle\\). The effective baryon masses are thus expressed as \\[m_{p}^{*} = M_{p}-g_{\sigma}\sigma-g_{\delta}\delta_{0}, \tag{9}\\] \\[m_{n}^{*} = M_{n}-g_{\sigma}\sigma+g_{\delta}\delta_{0}. \tag{10}\\] In the presence of the AMM of nucleons and leptons, the energy spectra of the particles can be expressed as \\[E^{p}_{\nu,s} = \sqrt{k_{z}^{2}+(\sqrt{2eB\nu+m_{p}^{*2}}+s\Delta_{p})^{2}}+g_{\omega}\omega_{0}+g_{\rho}R_{0,0}, \tag{11}\\] \\[E^{n}_{s} = \sqrt{k_{z}^{2}+(\sqrt{k_{x}^{2}+k_{y}^{2}+m_{n}^{*2}}+s\Delta_{n})^{2}}+g_{\omega}\omega_{0}-g_{\rho}R_{0,0},\\] (12) \\[E^{l}_{\nu,s} = \sqrt{k_{z}^{2}+(\sqrt{2eB\nu+m_{l}^{2}}+s\Delta_{l})^{2}}, \tag{13}\\] where \\(\Delta_{b}=-\frac{1}{2}\kappa_{b}\mu_{N}B\\) and \\(\Delta_{l}=-\frac{1}{2}\kappa_{l}\mu_{B}B\\).
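The effect of the Landau quantization entering these spectra (and the Landau-level sums in the number densities given below) can be illustrated with a short numerical sketch. It works in units of the proton critical field (\\(eB_{c}^{p}=m_{p}^{2}\\) in natural units, \\(B_{c}^{p}\approx 1.487\times 10^{20}\\) G), neglects the interaction and AMM terms for brevity, and uses an assumed Fermi energy and effective mass; it is meant only to show how the number of occupied levels collapses as B grows, not to reproduce the full calculation.

```python
import numpy as np

M_P = 938.27        # MeV, proton mass
HBARC = 197.327     # MeV fm
B_CRIT = 1.487e20   # G, proton critical field (e B_c = M_P^2 in natural units)

def proton_density(E_f, m_eff, b_gauss):
    """
    Zero-temperature proton number density (fm^-3) from a sum over filled Landau
    levels, neglecting the AMM term, so k_nu = sqrt(E_f^2 - m_eff^2 - 2 e B nu)
    and the spin degeneracy is 1 for nu = 0 and 2 for nu > 0.
    """
    two_eB = 2.0 * (b_gauss / B_CRIT) * M_P**2        # MeV^2
    nu_max = int((E_f**2 - m_eff**2) / two_eB)        # highest level with a real momentum
    total = 0.0
    for nu in range(nu_max + 1):
        k = np.sqrt(E_f**2 - m_eff**2 - two_eB * nu)  # MeV
        total += (1 if nu == 0 else 2) * k
    rho = 0.5 * two_eB / (2.0 * np.pi**2) * total     # eB/(2 pi^2) * sum, in MeV^3
    return rho / HBARC**3, nu_max                     # convert MeV^3 -> fm^-3

E_f, m_eff = 760.0, 700.0    # MeV, assumed Fermi energy and effective mass (illustrative)
k_f = np.sqrt(E_f**2 - m_eff**2)
print(f"B = 0 reference: rho_p = {k_f**3 / (3.0 * np.pi**2 * HBARC**3):.3f} fm^-3")
for B in (1e17, 1e18, 1e19, 3e19):
    rho, nu_max = proton_density(E_f, m_eff, B)
    print(f"B = {B:.0e} G: nu_max = {nu_max:3d}, rho_p = {rho:.3f} fm^-3")
```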
The number densities of protons, neutrons and leptons read as \\[\rho_{p} = \frac{eB}{2\pi^{2}}[\sum_{\nu=0}^{\nu_{max}}k^{(p)}_{f,\nu,1}+\sum_{\nu=1}^{\nu_{max}}k^{(p)}_{f,\nu,-1}], \tag{14}\\] \\[\rho_{n} = \frac{1}{2\pi}\sum_{s}\{\frac{2}{3}k^{(n)3}_{f,s}+s\Delta_{n}[(m^{*}_{n}+s\Delta_{n})k^{(n)}_{f,s}+E^{(n)2}_{f}(\arcsin\frac{m^{*}_{n}+s\Delta_{n}}{E^{(n)}_{f}}-\frac{\pi}{2})]\}, \tag{15}\\] and \\[\rho_{l}=\frac{eB}{2\pi^{2}}[\sum_{\nu=0}^{\nu_{max}}k^{(l)}_{f,\nu,1}+\sum_{\nu=1}^{\nu_{max}}k^{(l)}_{f,\nu,-1}], \tag{16}\\] respectively. In the above expressions, \\(k^{(p)}_{f,\nu,s}\\) and \\(k^{(l)}_{f,\nu,s}\\) are the Fermi momenta of protons and leptons for the Landau level \\(\nu\\) and the spin index \\(s=-1,1\\); \\(k^{(n)}_{f,s}\\) is the Fermi momentum of neutrons. They are related to the Fermi energies as \\[k^{(p)}_{f,\nu,s} = \sqrt{E^{(p)2}_{f}-(\sqrt{m^{*2}_{p}+2eB\nu}+s\Delta_{p})^{2}}, \tag{17}\\] \\[k^{(n)}_{f,s} = \sqrt{E^{(n)2}_{f}-(m^{*}_{n}+s\Delta_{n})^{2}} \tag{18}\\] and \\[k^{(l)}_{f,\nu,s}=\sqrt{E^{(l)2}_{f}-(\sqrt{m^{2}_{l}+2eB\nu}+s\Delta_{l})^{2}}. \tag{19}\\] The scalar number densities of the nucleons have the form \\[\rho_{s}^{p} = \frac{eBm_{p}^{*}}{2\pi^{2}}\left[\sum_{\nu=0}^{\nu_{max}}\frac{\sqrt{m_{p}^{*2}+2eB\nu}+\Delta_{p}}{\sqrt{m_{p}^{*2}+2eB\nu}}{\rm ln}\left|\frac{k_{f,\nu,1}^{(p)}+E_{f}^{(p)}}{\sqrt{m_{p}^{*2}+2eB\nu}+\Delta_{p}}\right|\right. \tag{20}\\] \\[+ \left.\sum_{\nu=1}^{\nu_{max}}\frac{\sqrt{m_{p}^{*2}+2eB\nu}-\Delta_{p}}{\sqrt{m_{p}^{*2}+2eB\nu}}{\rm ln}\left|\frac{k_{f,\nu,-1}^{(p)}+E_{f}^{(p)}}{\sqrt{m_{p}^{*2}+2eB\nu}-\Delta_{p}}\right|\ \right],\\] \\[\rho_{s}^{n} = \frac{m_{n}^{*}}{4\pi^{2}}\sum_{s}\left[k_{f,s}^{(n)}E_{f}^{(n)}-(m_{n}^{*}+s\Delta_{n})^{2}{\rm ln}\left|\frac{k_{f}^{(n)}+E_{f}^{(n)}}{m_{n}^{*}+s\Delta_{n}}\right|\ \right]. \tag{21}\\]
The energy densities of nucleons and leptons are given as \\[\varepsilon_{p} = \frac{1}{4\pi^{2}}\sum_{\nu=0}^{\nu_{max}}\left[k_{f,\nu,1}^{(p)}E_{f}^{(p)}+(\sqrt{m_{p}^{*2}+2eB\nu}+\Delta_{p})^{2}{\rm ln}\left|\frac{k_{f,\nu,1}^{(p)}+E_{f}^{(p)}}{\sqrt{m_{p}^{*2}+2eB\nu}+\Delta_{p}}\right|\ \right] \tag{22}\\] \\[+\frac{1}{4\pi^{2}}\sum_{\nu=1}^{\nu_{max}}\left[k_{f,\nu,-1}^{(p)}E_{f}^{(p)}+(\sqrt{m_{p}^{*2}+2eB\nu}-\Delta_{p})^{2}{\rm ln}\left|\frac{k_{f,\nu,-1}^{(p)}+E_{f}^{(p)}}{\sqrt{m_{p}^{*2}+2eB\nu}-\Delta_{p}}\right|\ \right],\\] \\[\varepsilon_{n} = \frac{1}{4\pi^{2}}\sum_{s}\left\{\frac{1}{2}k_{f,s}^{(n)}E_{f}^{(n)3}+\frac{2}{3}s\Delta_{n}E_{f}^{(n)3}(\arcsin\frac{m_{n}^{*}+s\Delta_{n}}{E_{f}^{(n)}}-\frac{\pi}{2})+(\frac{s\Delta_{n}}{3}-\frac{m_{n}^{*}+s\Delta_{n}}{4})\right. \tag{23}\\] \\[\left.\times\left[(m_{n}^{*}+s\Delta_{n})k_{f,s}^{(n)}E_{f}^{(n)}+(m_{n}^{*}+s\Delta_{n})^{3}{\rm ln}\left|\frac{k_{f}^{(n)}+E_{f}^{(n)}}{m_{n}^{*}+s\Delta_{n}}\right|\ \right]\right\},\\] \\[\varepsilon_{l} = \frac{1}{4\pi^{2}}\sum_{\nu=0}^{\nu_{max}}\left[k_{f,\nu,1}^{(l)}E_{f}^{(l)}+(\sqrt{m_{l}^{2}+2eB\nu}+\Delta_{l})^{2}{\rm ln}\left|\frac{k_{f,\nu,1}^{(l)}+E_{f}^{(l)}}{\sqrt{m_{l}^{2}+2eB\nu}+\Delta_{l}}\right|\ \right]\\] (24) \\[+\frac{1}{4\pi^{2}}\sum_{\nu=1}^{\nu_{max}}\left[k_{f,\nu,-1}^{(l)}E_{f}^{(l)}+(\sqrt{m_{l}^{2}+2eB\nu}-\Delta_{l})^{2}{\rm ln}\left|\frac{k_{f,\nu,-1}^{(l)}+E_{f}^{(l)}}{\sqrt{m_{l}^{2}+2eB\nu}-\Delta_{l}}\right|\ \right].\\] The total energy density of the n-p-e-\\(\mu\\) system is [11] \\[\varepsilon=\varepsilon_{p}+\varepsilon_{n}+\varepsilon_{e}+\varepsilon_{\mu}+\frac{1}{2}m_{\sigma}^{2}\sigma^{2}+U(\sigma)+\frac{1}{2}m_{\delta}^{2}\delta_{0}^{2}+\frac{1}{2}m_{\omega}^{2}\omega_{0}^{2}+\frac{1}{2}m_{\rho}^{2}R_{0,0}^{2}+\frac{B^{2}}{8\pi}, \tag{26}\\] where the last term is the contribution from the external magnetic field. Because of charge neutrality and chemical equilibrium, the pressure of the system can be obtained from \\[P=\sum_{i}\mu_{i}\rho_{i}-\varepsilon=\mu_{n}\rho_{b}-\varepsilon. \tag{27}\\]

## 3 Numerical results

From the above expressions, one can obtain the nucleon effective masses and the EOS of the system after solving the meson field equations (4)-(7) numerically. The chemical equilibrium conditions \\(\mu_{n}\\) = \\(\mu_{p}\\) + \\(\mu_{e}\\) and \\(\mu_{e}\\) = \\(\mu_{\mu}\\), as well as the charge neutrality condition \\(\rho_{p}\\) = \\(\rho_{e}\\) + \\(\rho_{\mu}\\), are applied in the iteration procedure. The nucleon-meson coupling constants and the coefficients of the scalar field self-interactions are obtained by adjusting to the bulk properties of symmetric nuclear matter in the absence of a magnetic field. Furthermore, for symmetric nuclear matter the leptons are omitted and protons and neutrons have equal densities. The saturation properties consist of the binding energy (\\(E/A\\)), the compression modulus (\\(K\\)), the symmetry energy (\\(a_{sym}\\)), the effective mass (\\(m_{N}^{*}/M_{N}\\)) and the pressure \\(p\\). The binding energy can be obtained from \\(E/A=\varepsilon/\rho_{b}-M_{b}\\).
The symmetry energy reads[5] \\[a_{sym}=\\frac{1}{2}C_{\\rho}^{2}\\rho_{0}+\\frac{k_{f}^{2}}{6\\sqrt{k_{f}^{2}+m^{ *2}}}-C_{\\delta}^{2}\\frac{m^{*2}\\rho_{0}}{2(k_{f}^{2}+m^{*2})(1+C_{\\delta}^{2} A(k_{f},m^{*}))}, \\tag{28}\\] where \\[A(k_{f},m^{*})=\\frac{4}{(2\\pi)^{3}}\\int_{0}^{k_{f}}\\frac{k^{2}d^{3}k}{(k^{2}+m ^{*2})^{3/2}} \\tag{29}\\] is a function of the Fermi momentum, \\(k_{f}=k_{f}^{(p)}=k_{f}^{(n)}\\), and the effective mass, \\(m^{*}=m_{p}^{*}=m_{n}^{*}\\). For symmetric nuclear matter at saturation density \\(\\rho_{0}\\), we have defined \\(C_{\\sigma}=g_{\\sigma}/m_{\\sigma}\\), \\(C_{\\omega}=g_{\\omega}/m_{\\omega}\\), \\(C_{\\rho}=g_{\\rho}/m_{\\rho}\\), \\(C_{\\delta}=g_{\\delta}/m_{\\delta}\\). In the presence of \\(\\delta\\)-fields, the compression modulus is calculated to be \\[\\frac{1}{9}K = k_{f}^{2}\\frac{\\partial^{2}}{\\partial k_{f}^{2}}(\\frac{ \\varepsilon_{0}}{\\rho})\\mid_{\\rho=\\rho_{0}}\\] \\[= C_{\\omega}^{2}\\rho_{0}+\\sum_{N}\\frac{k_{f}^{2}}{6E_{f}^{(N)}}+ \\frac{\\rho_{0}}{2}-\\frac{(B_{p}+B_{n})^{2}+C_{\\delta}^{2}[f_{\\sigma}(B_{p}-B_ {n})^{2}+2A_{n}B_{p}^{2}+2A_{p}B_{n}^{2}]}{2f_{\\sigma}+C_{\\delta}^{2}[(A_{n}+ A_{p})f_{\\sigma}+2A_{n}A_{p}]+A_{n}+A_{p}},\\]where N = n, p, and \\[A_{N} = \\frac{6\\rho_{s}^{N}}{m_{N}^{*}}-\\frac{3\\rho_{0}}{E_{N}},\\] \\[B_{N} = \\frac{m_{N}^{*}}{E_{N}},\\quad f_{\\sigma}=\\frac{U(\\sigma)}{g_{ \\sigma}^{2}}.\\] Recently, improved empirical data for the compressibility and symmetry energy are available [19, 20, 21], which benefit to the study of evident isospin asymmetric nuclear matter. In this work we adopt the parameter sets obtained in Ref.[4]. The coupling constants and the corresponding saturation properties of nuclear matter are listed in Table 1. One can see that SetA with and without the \\(\\delta\\)-field produce the same saturation properties, which fits well into the range of new empirical data. Therefore, it is suitable to be used to investigate the \\(\\delta\\)-field effect in isospin asymmetric matter. Another set of parameters GM3[22] is widely used in neutron-star matter calculations. The results for neutron-star matter without magnetic fields are presented in Fig.1. The dotted lines in Fig. 1(a) denote the effective masses of neutrons and protons as functions of baryon densities in a n-p-e-\\(\\mu\\) system reckoned in the model of SetA including the \\(\\delta\\)-field. The splitting of proton and neutron masses can affect the transport properties of dense matter[5]. The difference between the proton and the neutron effective mass inclines to decrease with the increasing of the baryon density. It indicates that the strength of \\(\\delta\\)-field \\(-g_{\\delta}\\delta\\) decreases with densities. For comparison the results for the model of SetA without \\(\\delta\\)-field and set GM3 are presented in the figure too. In Fig. 1(b) and Fig. 1(c) we show the pressure densities and energy densities as functions of the baryon density. The deviation between the results of SetA(\\(NL\\sigma\\omega\\rho\\)) and SetA(\\(NL\\sigma\\omega\\rho\\delta\\)) are negligible. Although the proton fraction increases by including the \\(\\delta\\)-field[4], the EOS has little change due to the fact that the energy density and pressure density display the average contributions of protons and neutrons, and the effects of isospin vector fields are small and cancel somewhat. Alternatively, because of a larger effective mass the results of Set GM3 deviate from SetA evidently. 
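Before turning to finite magnetic fields, the zero-field mass splitting shown in Fig. 1(a) can be reproduced schematically. The sketch below (Python) iterates Eqs. (4), (7), (9) and (10) to self-consistency for n-p matter at a fixed proton fraction, using the values of \\(C_{\sigma}^{2}\\) and \\(C_{\delta}^{2}\\) from Table 1 (SetA) but, for brevity, dropping the \\(\sigma\\) self-interaction terms b and c and the lepton contribution; the numbers it prints are therefore only indicative of the trend, not of the published curves.

```python
import numpy as np

HBARC = 197.327                       # MeV fm
M = 939.0 / HBARC                     # nucleon mass in fm^-1
C_SIGMA2, C_DELTA2 = 10.32924, 2.5    # (g/m)^2 in fm^2, SetA of Table 1

def scalar_density(kf, m_eff):
    """Zero-field scalar density (fm^-3) of one nucleon species."""
    if kf <= 0.0:
        return 0.0
    ef = np.sqrt(kf**2 + m_eff**2)
    return m_eff / (2.0 * np.pi**2) * (kf * ef - m_eff**2 * np.log((kf + ef) / m_eff))

def effective_masses(rho_b, x_p, mix=0.3, tol=1e-8):
    """Fixed-point iteration for m*_p, m*_n at baryon density rho_b (fm^-3), proton fraction x_p."""
    kf_p = (3.0 * np.pi**2 * x_p * rho_b) ** (1.0 / 3.0)
    kf_n = (3.0 * np.pi**2 * (1.0 - x_p) * rho_b) ** (1.0 / 3.0)
    m_p = m_n = 0.8 * M                              # starting guess
    for _ in range(2000):
        rs_p, rs_n = scalar_density(kf_p, m_p), scalar_density(kf_n, m_n)
        # Eqs. (4) and (7) without the b, c self-interaction terms:
        g_sigma_sigma = C_SIGMA2 * (rs_p + rs_n)
        g_delta_delta = C_DELTA2 * (rs_p - rs_n)
        new_p = M - g_sigma_sigma - g_delta_delta    # Eq. (9)
        new_n = M - g_sigma_sigma + g_delta_delta    # Eq. (10)
        if abs(new_p - m_p) + abs(new_n - m_n) < tol:
            break
        m_p = (1.0 - mix) * m_p + mix * new_p        # damped update for stability
        m_n = (1.0 - mix) * m_n + mix * new_n
    return m_p / M, m_n / M

rho_0 = 0.16                                         # fm^-3
for n in (0.5, 1.0, 2.0, 3.0):
    mp, mn = effective_masses(n * rho_0, x_p=0.1)    # neutron-rich matter
    print(f"rho/rho_0 = {n:3.1f}:  m*_p/M = {mp:.3f},  m*_n/M = {mn:.3f}")
```

In neutron-rich matter the scalar density of neutrons exceeds that of protons, so \\(g_{\delta}\delta_{0}<0\\) and the iteration yields \\(m_{p}^{*}>m_{n}^{*}\\), the ordering seen in Fig. 1(a).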
In the following calculations, we present the numerical results under strong magnetic fields by separating the cases with and without the inclusion of the AMM effects. ### Effect of the magnetic fields without AMM terms We consider magnetic fields in the range of \\(10^{12}\\)- \\(10^{20}\\)G. There are two characteristic strengths of magnetic fields for the problem involved, which are critical fields for electrons(\\(B_{c}^{e}=4.414\\times 10^{13}\\)G) and protons(\\(B_{c}^{p}=1.487\\times 10^{20}\\)G)[11]. We are more interested in the results at the vicinity of \\(B=B_{c}^{e}\\) and ultrastrong magnetic fields around \\(B=10^{18}\\)G. Therefore, in the following calculations we consider magnetic fields of \\(B=10^{12}\\)G, \\(10^{13}\\)G, \\(10^{15}\\)G, \\(10^{5}\\times B_{c}^{e}\\), \\(10^{19}\\)G, \\(3\\times 10^{19}\\)G. For comparison, the results for \\(B=0\\) will be presented too. In order to manifest the influence of magnetic fields on neutron-star matter, in the results of pressure and energy density given below the magnetic energy part will be excluded. Figure 2 depicts the nucleon effective masses as functions of baryon densities for various magnetic fields. From Fig. 2(a) it can be seen that the effective mass of protons at \\(B=10^{12}G\\), \\(10^{13}\\)G is much larger than that without magnetic field. The results of \\(B=10^{15}\\)G, \\(10^{5}\\times B_{c}^{e}\\), \\(10^{19}\\) G are indistinguishable from those of \\(B=0\\)G. When the field strength further increases, a smaller \\(m_{p}^{*}\\) was obtained. The enhancement and suppression of proton effective masses mainly results from the variation of proton fraction caused by the effect of magnetic fields. The effective masses of neutrons given in Fig. 2(b), however, have little changes. The different response of the proton and neutron effective mass can be explained in Figure 3, where the meson field strength as functions of baryon densities are displayed. Fig. 3(a) gives the \\(\\sigma\\)-field strength -\\(g_{\\sigma}\\sigma\\) as a function of the density. One can find that the curves behave similar to that of the proton effective mass, except that the \\(m_{p}^{*}\\) varies more rapidly due to the including of the \\(\\delta\\)-field. Fig. 3(b) delineates that the \\(\\delta\\)-field strength changes substantially with magnetic fields around \\(B=10^{13}\\)G. The nucleon effective masses are entirety defined by the \\(\\sigma\\)- and \\(\\delta\\)-field. The proton effective mass is enhanced significantly at \\(B=10^{12}\\)G and \\(B=10^{13}\\)G because of the large cancellation between the \\(\\sigma\\)- and \\(\\delta\\)-field. With the increasing of the magnetic field, the strength of the \\(\\sigma\\) and \\(\\delta\\)-field approaches the field free case, so does the proton effective mass. The situation for neutron effective masses is different. The including of the \\(\\delta\\)-field lowers the effect of magnetic fields on the \\(\\sigma\\)-field. Thus the neutron effective mass is almost independent with magnetic fields. One can also see that at larger magnetic field(\\(B>10^{15}\\)G) and higher density the \\(\\delta\\)-field strength is near zero. That indicates that the effective masses of neutrons and protons tend to be same at higher density. At ultrastrong magnetic field(\\(B=3\\times 10^{19}\\)G), the \\(\\delta\\)-field strength turns out to be negative at density of \\(\\rho>2\\rho_{0}\\). 
We also present the strength of \\(\\omega\\)-field(\\(g_{\\omega}\\omega_{0}\\)) and \\(\\rho\\)-field(\\(g_{\\rho}R_{0,0}\\)) in Fig. 3(c) and Fig. 3(d), respectively. The \\(\\omega\\)-field strength is solely defined by the baryon density and thus linear with it. The \\(\\rho\\)-field represents the isospin asymmetry of nuclear matter. From Fig. 3(d), one can find that the \\(\\rho\\)-field strength for magnetic fields of \\(B=10^{12}\\)G and \\(10^{13}G\\) are much larger than that for others, which means that the proton fractions are very small and the neutron-star matter are extremely asymmetric. At ultrastrong magnetic fields (\\(B=3\\times 10^{19}\\)) the \\(\\rho\\)-field is almost zero, i.e, the density of protons is approximately equal to that of neutrons. The neutron-star matter inclines to isospin symmetric at very large magnetic field. Figure 4 shows the EOS of neutron-star matter under magnetic fields for SetA(NL\\(\\sigma\\omega\\rho\\delta\\)) with the AMM terms excluded. For pressure and energy density we present the matter part only. In Fig. 4(a), the energy per nucleon as a function of the baryon density is depicted for various magnetic fields., while the pressure is given in Fig. 4(b) and (c). One can see that the equation of state becomes stiffer around B = \\(10^{12}\\)G compared to the field-free case. At the range of \\(B=10^{15}\\)G - \\(10^{19}\\)G the EOS is indistinguishable to that of \\(B=0\\). At even larger magnetic field the EOS comes out to be softer which is in accordance with previous studies[11]. The variation of EOS with magnetic field strength can be understood by the particle fraction as discussed before. The results of SetA(\\(NL\\sigma\\omega\\rho\\)) without \\(\\delta\\)-field are shown in Figure 5. The general trends of the nucleon effective mass, the \\(\\rho\\)-field and EOS are similar to that as depicted in Fig. 2 and Fig. 4, but the magnitude of variation is suppressed. Thus one may conclude that the magnetic field effects change the proton fraction and the \\(\\delta\\)-field enhances the isospin asymmetric effects. From above, we can find that the \\(\\delta\\)-field leads to splitting of nucleon effective masses. Under magnetic field the proton effective mass decrease rapidly with magnetic field and in range of \\(10^{15}G<B<10^{19}\\)G the results is almost indistinguishable from that of \\(B=0\\). The neutron effective mass have little change for different fields because the effects of magnetic field on \\(\\sigma\\)-field and \\(\\delta\\)-field counteract. The EOS change a lot for different magnetic field because of the change of proton fraction. By including of \\(\\delta\\)-field, the EOS of neutron-star matter is stiffer for strong magnetic fields (\\(B<10^{15}\\)G) and become softer for ultrastrong magnetic fields (\\(B>10^{18}\\)G). The effect of \\(\\delta\\)-field decrease with magnetic field and is little at magnetic field of \\(B>10^{18}\\)G. The EOS for \\(10^{15}<B<10^{19}\\)G is similar with that of \\(B=0\\) and at ultrastrong magnetic field(\\(B>10^{19}\\)G) the neutron-star matter tends to symmetric. From above analysis, we also find that the fraction of proton play an important role in the description of nucleon effective mass and the EOS of neutron-star matter. ### Effect of AMM terms In previous works[11], the effect of AMM terms of nucleons on the EOS of \\(n-p-e-\\mu\\) system in the absence of \\(\\delta\\)-field are studied in detail. In our studies, we will introduce the AMM term of muons. 
At the magnetic field \\(B=10^{17}\\)-\\(10^{20}\\)G, the densities of muons are comparable with baryon densities[11]. The effect of the muon AMM term is then expected to make sense. We adopt the value \\(a_{\\mu}=1165203(8)\\times 10^{-10}(0.7ppm)\\)[17, 18], which is a present word average experimental value. In the following we show the results including the AMM terms of nucleons and muons, while the effect of the electron AMM term will be investigated in the next section. The proton effective mass as a function of the baryon density is shown in Fig. 6(a). One can find that the results for \\(B<10^{19}\\)G have very little change compared with the case without AMM terms. For \\(B=3\\times 10^{19}\\)G the proton effective mass no more decreases monotonically but tends to reach certain situation at high density. A similar situation is exhibited for the neutron effective mass as shown in Fig. 6(b). AT ultra high magnetic field the \\(m_{n}^{*}\\) becomes larger than the field-free case while for lower B they are indistinguishable. This is mainly caused by the effects of AMM terms on the \\(\\sigma\\)-field. In Figure 7 one can see that the scalar field is increased at large field, especially for high densities. The changes on the \\(\\delta\\)- and \\(\\rho\\)-field are negligible for with and without AMM terms. The EOS of the system are shown in Figure 9. Again, the main modifications due to the including of AMM terms come out for high fields of \\(B>10^{19}\\)G. The energy per nucleon becomes strongly binding at lower densities. Alternately, a stiffer pressure is displayed compared to the case without AMM terms. It should be mentionable that the EOS is only for matter, the magnetic energy itself has not yet been added. We can conclude from above that at ultrastrong magnetic fields(\\(B>10^{19}\\)G) the proton effective masse and neutron effective mass all become larger by including of AMM terms. And the effective masse of neutron is bigger than that of proton effective masses at lower densities (\\(\\rho<4\\rho_{0}\\)), since in this regions the density of protons is larger than that of neutrons. The EOS becomes much stiffer at \\(B=10^{19}\\)G because of the effect of AMM terms. The nucleon effective mass and EOS of this system have no evident change under magnetic fields of \\(B<10^{15}\\)G. The AMM terms play an important role only under very strong magnetic field of \\(B>10^{18}\\)G. and the effect of AMM terms decrease with magnetic field. On the other hand, the effect of \\(\\delta\\)-field decreases with magnetic field and is very small at \\(B>10^{18}\\)G. Its main contribution was shown at \\(B\\sim 10^{12}G\\) ### Results including AMM of electrons The critical field of electrons is about \\(10^{13}\\)G. Most of magnetic field strengths considered in this work are around or well beyond this point. It is generally believed that the high-order terms stemming from the vacuum polarization of electrons in an external magnetic field [23] may get into work near the critical point and cancel the electron AMM term. Nevertheless, it is interesting to check the effects of the electron AMM term nu merically in the present system. The results including the \\(\\delta\\)-field and AMM terms of all relevant particles are depicted in Figure 9. Fig. 9(a) and (b) show the nucleon effective masses as functions of baryon densities. The pressure as functions of baryon densities and energy densities are show in Fig. 10(c). 
One can see that the nucleon effective masses and EOS have negligible changes compared with the situation excluding the AMM of electrons. ## 4 Summary and outlook We have studied the properties of the neutron-star matter consisting of n-p-e-\\(\\mu\\) in strong magnetic fields. For nuclear interactions we applied the relativistic mean field theory with the exchange of \\(\\sigma\\)-, \\(\\omega\\)-, \\(\\rho\\)- and \\(\\delta\\)-mesons. Our main interest is to investigate the influences of isospin vector field on the asymmetric matter in the presence of magnetic fields. The effects of AMM terms of nucleons and leptons are included. Two sets of coupling constants with and without \\(\\delta\\)-field are used in calculations. The nucleon effective masses and EOS are studied in detail. Interesting results have been found for two regions of magnetic field strength. At lower field of \\(B\\sim 10^{12}\\)G, where the AMM terms play no role, the proton effective mass was enhanced significantly compared to the case of \\(B=0\\). The equation of state becomes much stiffer. This is mainly caused by the change of proton fraction. The neutron effective mass is almost independent of magnetic fields because the effect of \\(\\sigma\\)- and \\(\\delta\\)-field cancel to some extent. In the range of \\(B10^{15}--10^{18}\\)G, no obvious differences were found both on the nucleon effective masses and EOS for with and without the magnetic field. At larger field of \\(B\\sim 10^{19}\\)G, the proton effective mass increases with magnetic field at high density(\\(\\rho>5\\rho_{0}\\)) while at lower density it becomes less than the neutron effective mass. The EOS of neutron-star matter is softer at ultrastrong field but becomes stiffer with the inclusion of AMM terms. Besides, one can find that the neutron-star matter tends to be symmetric at the range calculated. Compared with the results without \\(\\delta\\)-field, we find that the effect of \\(\\delta\\)-field decreases with magnetic field and becomes little at \\(B>10^{19}\\)G. It can also be found that the effect of AMM terms increases with magnetic field and is very little when the field strength \\(B<10^{15}\\)G. At the end, we have presented the results including \\(\\delta\\)-field and AMM of nucleons, muons and electrons, and find that the effect of the electron AMM is very little. Particularly, the proton fraction can be proved to play an important role in descriptions of properties of neutron-star matter. The vector self-interaction terms of \\(\\omega\\)-field can influence the maximum mass, the rotational frequency and cooling properties of neutron stars[24]. The spin polarization of protons probably affect the structure and composition of neutron stars. These questions will be studied in forthcoming work. With the densities increasing, the hyperon and quark degrees of freedom must be considered for the core of neutron stars [3, 24]. The interactions of quarks are very deferent from that of nucleons and can lead to many new results[25]. All these can influence the EOS of neutron star matter and warrant further studies. **Acknowledgements:** The authors are grateful to N. Van. Giai, J. Schaffner and B. Liu for fruitful discussions. This work was supported by the National Natural Science Foundation of China under Grant No. 10275072. ## References * [1] J. D. Walecka, _Ann. Phys(N.Y)._**83**, 491(1974). * [2] B. D. Serot and J. D. Walecka, _Adv. Nucl. Phys._**16**, 1(1986). * [3] J. Schaffner and I. N. Mishustin, _Phys. Rev._**C53**, 1416(1996). 
* [4] B. Liu, H. Guo, V. Baran, M. Di. Toro and V. Greco, nucl-th/0409014, B. Liu, V. Greco, V. Baran, M. Colonna, and M. Di. Toro, _Phys. Rev._**C65**, 045201(2002) * [5] S. Kubis and M. Kutschera, _Phys. Lett._**B399**, 191(1997). * [6] D. P. Menezes and C. Providencia, _Phys. Rev._**C70**, 058801(2004). * [7] Bao-An Li, _phys. Rev._**C69**, 064602(2004). * [8] Bao-An Li and Lie-Wen Chen, nucl-th/0508024. * [9] A. Melatos, _APJ._**L77**, 519(1999). * [10] I. Lerche and D. N. Schramm, _APJ._**216**, 881(1977). * [11] A. Broderick, M. Prakash and J. M. Lattimer, _APJ._**537**, 351(2000). * [12] C. Y. Cardall, M. Prakash and J. M. Lattimer, _APJ._**554**, 322(2001). * [13] S. Chakrabarty, D. Bandyopadhyay, and S. Pal, _Phys. Rev. Lett._**78**, 2898(1997) * [14] N. K. Glendenning, _APJ._**293**, 470(1985). * [15] J. Boguta, A.R. Bodmer, _Nucl.Phys._**A292**, 413(1977). * [16] G. Mao, N. V. Kondratyev, A. Iwamoto, Z. Li, X. Wu, W. Greiner, N. I. Mikhailov, _Chin. Phys. Lett._**20**, 1238(2003). * [17] G. W. Bennett, B. Bousquet et al., _Phys. Rev. Lett._**89**, 101804(2002). * [18] Mark Byrne, Christopher Kolda and Jason E. Lennon, _Phys. Rev._**D67**, 075004(2003). * [19] D. Vretenar, T. Niksic and P. Ring, _Phys. Rev._**C68**, 024310(2003). * [20] G. Colo, N. Van Giai, J. Meyer, K. Bennaceur and P. Bonche, _Phys. Rev._**C70**, 024307(2004). * [21] V. B. Soubbotin, V. I. Tselyaev, X. Vinas, _Phys. Rev._**C69**, 064312(2004) * [22] N. K. Glendenning and S. A. Moszkowski, _Phys. Rev. Lett._**67**, 2414(1991). * [23] R.C. Duncan, astro-ph/0002442. * [24] N. K. Glendenning, _Z. Phy._**326**, 57(1987). * [25] P. Wang, S. Lawley, D. B. Leinweber, A. W. Thomas, A. G. Williams, nucl-th/0506014 \\begin{table} \\begin{tabular}{c c c c c c c c c c c c} \\hline \\hline Parameter Sets & \\(C_{\\sigma}^{2}\\) & \\(C_{\\omega}^{2}\\) & \\(C_{\\rho}^{2}\\) & \\(C_{\\theta}^{2}\\) & b & c & \\(\\rho_{0}\\) & E/A & \\(m_{\\rm X}^{*}/M_{N}\\) & \\(a_{\\rm sym}\\) & K \\\\ & (fm) & (fm) & (fm) & (fm) & (\\(fm^{-1}\\)) & & \\(fm^{-3}\\) & (MeV) & & (MeV) & (MeV) \\\\ setA(NL\\(\\rho\\)) & 10.32924 & 5.42341 & 0.94999 & 0.0000 & 0.03302 & -0.00483 & 0.16 & -16.0 & 0.75 & 31.3 & 240 \\\\ SetA(NL\\(\\rho\\delta\\)) & 10.32924 & 5.42341 & 3.1500 & 2.5000 & 0.03302 & -0.00483 & 0.16 & -16.0 & 0.75 & 31.3 & 240 \\\\ GM3 & 9.927 & 4.820 & 1.198 & 0.000 & 0.041205 & -0.002421 & 0.153 & -16.3 & 0.78 & 32.5 & 240 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Parameter sets and the corresponding saturation properties of nuclear matterFigure 1: Nucleon effective masses \\(m_{N}^{*}/M_{N}\\)(a) and pressure(b) as functions of baryon densities \\(\\rho/\\rho_{0}\\) for the neutron-star matter without magnetic fields; (c) shows the energy per nucleon as a function of baryon density. Figure 2: Effective masses of protons(a) and neutrons(b) as functions of the density for different magnetic field strengths. The parameter SetA(\\(NL\\sigma\\omega\\rho\\delta\\)) has been used in calculations. Figure 3: The strength of the \\(\\sigma\\)-field(a), \\(\\delta\\)-field(b), \\(\\omega\\)-field(c) and \\(\\rho\\)-field(d) as functions of densities for different magnetic fields. Figure 4: Energy per nucleon(a) and the matter part of pressure density \\(p_{m}\\)(b) as functions of the baryon density, Figure(c) shows the \\(p_{m}\\) as a function of matter energy densities \\(\\varepsilon_{m}\\). Figure 5: The results for n-p-e-\\(\\mu\\) system calculated with SetA(\\(NL\\sigma\\omega\\rho\\)) without \\(\\delta\\)-field. 
(a) shows the nucleon effective mass as a function of the baryon density for magnetic fields as presented in Figure2; (b) displays the \\(\\rho\\)-field strength \\(g_{\\rho}R_{0,0}\\) ; (c) and (d) show the energy per nucleon and pressure density, respectively. Figure 6: Same as Figure 2, except that the AMM terms of nucleons and muons are included. Figure 7: Same as Figure 3, except that the AMM terms of nucleons and muons are included. Figure 8: Same as Figure 4, except that the AMM of nucleons and muons are included. Figure 9: Results including the effect of AMM of electron for model SetA(NL\\(\\sigma\\omega\\rho\\delta\\)). (a) and (b) present the effective masses of proton and neutron, respectively; (c) and (d) show the pressure densities as functions of baryon densities and energy densities, respectively.
We study the effects of isovector-scalar meson \\(\\delta\\) on the equation of state (EOS) of neutron star matter in strong magnetic fields. The EOS of neutron-star matter and nucleon effective masses are calculated in the framework of Lagrangian field theory, which is solved within the mean-field approximation. From the numerical results one can find that the \\(\\delta\\)-field leads to a remarkable splitting of proton and neutron effective masses. The strength of \\(\\delta\\)-field decreases with the increasing of the magnetic field and is little at ultrastrong field. The proton effective mass is highly influenced by magnetic fields, while the effect of magnetic fields on the neutron effective mass is negligible. The EOS turns out to be stiffer at \\(B<10^{15}\\)G but becomes softer at stronger magnetic field after including the \\(\\delta\\)-field. The AMM terms can affect the system merely at ultrastrong magnetic field(\\(B>10^{19}\\)G). In the range of \\(10^{15}\\) G - \\(10^{18}\\) G the properties of neutron-star matter are found to be similar with those without magnetic fields.
**Does the subtropical jet catalyze the mid-latitude** **atmospheric regimes?** Paolo M. Ruti\\({}^{(1)}\\), Valerio Lucarini\\({}^{(2)}\\), Alessandro Dell'Aquila\\({}^{(1)}\\), Sandro Calmantti\\({}^{(1)}\\), Antonio Speranza\\({}^{(2)}\\), \\({}^{1}\\) _Climate Section, ENEA, Roma, Italy_ \\({}^{2}\\)_Dept. of Mathematics and Computer Science, University of Camerino, Camerino, Italy_ ## 1 Introduction The notion that well-defined winter mid-latitude atmospheric _patterns of flow_ are _recurrent_ during the northern hemispheric winters [_Dole, 1983_] has been repeatedly put forward, investigated and debated since the early definition of Grosswetterlage [_Baur, 1951_], to the classical identification of Atlantic blocking [_Rex, 1950_], all the way to more recent work on regimes detection and identification [_Corti et al., 1999_]. In such a perspective, the relevant problem concerning the general circulation of the atmosphere has been understanding whether the large scale atmospheric circulation undergoes fluctuations around a single equilibrium [_Nitsche et al., 1994; Stephenson et al., 2004_] or multiple equilibria [_Charney and Devore, 1979; Hansen and Sutera, 1986; Mo and Ghil, 1988; Benzi and Speranza, 1989; among others_]. The understanding of this climatic property would also help in responding to practical needs such as addressing the feasibility of extended range weather forecasts or the robust detection of climate changes [_Corti et al., 1999_]. Coming to the dominant physical processes, the mid-latitude dynamics feature upper tropospheric westerlies and synoptic to planetary waves as typical ingredients. The radiative forcing and the Earth rotation constrain the characteristics of the mean axially symmetric circulation and thereby the strength of the midlatitude westerly winds (jet) [_Held and Hou, 1980_]. The observational evidence shows that the strength of the westerlies is Gaussian-distributed (\\(u_{mean}\\approx 30m\\;s^{-1}\\)). Instead, it has been proposed that the activity of the ultra-long planetary waves may have a bimodal distribution [_Hansen and Sutera, 1986_],although the issue of the statistical significance of non-unimodal distributions of planetary-wave amplitudes has been intensely debated [_Nitsche et al.__1994; Hansen and Sutera__1995; Stephenson et al.,_ 2004; Christiansen__2005_]. A theoretical support exists for the search of multimodal distribution of the activity of planetary waves. In the context of orographic resonance theories, the zonal flow - wave field interaction (via form-drag) was first proposed as a driving mechanisms allowing for the occurence of multiple equilibria of the planetary waves amplitude [_Charney and Devore,_ 1979]. However, transitions between the quasi-stable equilibria require for energetic reasons large variations (\\(\\Delta u\\approx 40m\\;s^{-1}\\)) of the mean westerlies, at odds with the \"normality\" of the distribution of the observed westerlies strength [_Malguzzi and Speranza,_ 1981; Benzi et al.,_ 1986]. Therefore, a dynamical interpretation featuring fixed strength of the westerlies was required. Benzi et al. [1986] suggested that, when the mean westerlies are subresonant or superresonant, the response of mid-latitude atmospheric long waves to orographic forcing can be well described in terms of linear Rossby waves. 
Instead, near topographic resonance the meridional structure of the zonal wind may produce a wave self-nonlinearity leading to multiple equilibrium amplitudes of the perturbation field [_Malguzzi et al., 1997_]. Thus, the bent resonance curve not only explains the existence of the multiple equilibria of the planetary wave amplitude, but also suggests that relatively small changes of the jet strength may imply a switch from unimodal to multimodal regimes of the atmospheric circulation (see Figure 1).

## 2 Data and methods

In this study, we consider the two major reanalysis products released by NCEP-NCAR [_Kistler et al., 2001_] and by ECMWF [_Simmons and Gibson, 2000_] (hereafter NCEP and ERA40, respectively). In particular, we consider the winter-time (DJF) daily fields of the 500 hPa geopotential height and of the 200 hPa zonal wind, for the overlapping time frame ranging from December 1st 1957 to August 31st 2002. Two proper counterparts to the dynamical parameters employed in the theories are extracted from these datasets by computing two robust indicators of relevant large scale features of the mid-latitude troposphere. The Wave Activity Index (WAI), introduced by Hansen and Sutera [1986], is computed as the root mean square of the zonal wavenumber 2 to 4 components of the winter 500 hPa geopotential height over the channel 32\\({}^{\circ}\\)N - 72\\({}^{\circ}\\)N. The WAI provides a synthetic picture of the ultra-long planetary waves and captures the orographic resonance, since the approximately stationary (resonant) mode is zonal wavenumber 3 [_Malguzzi and Speranza, 1981_]. The Jet Strength Index (JSI) is computed daily as the maximum of the zonal mean of the zonal wind at 200 hPa, where the sub-tropical jet peaks. In order to filter out the synoptic atmospheric variability, we apply a low-pass filter to both the WAI and JSI indexes by performing a 5-day running mean on the signal. In order to capture the anomalies with respect to the seasonal cycle, we filter out from the WAI signal the most prominent spectral peaks occurring at 12, 6 and 4 months, which are directly related to the influence of the external solar forcing. On the other hand, in the context of the bent resonance theory [_Benzi et al., 1986_] the jet strength is considered as an autonomous forcing parameter of the system, controlling and catalysing its internal variability. Therefore, we do not filter out from the JSI the seasonal cycle and its harmonics as done for the WAI. Furthermore, the results are robust with respect to the different filtering techniques. The continuous probability density functions have been computed using a kernel estimation technique depending on a smoothing parameter h [Silverman, 1986].

## 3 Results

Slight discrepancies between the two reanalyses are expected in view of the different description of midlatitude wave activity discussed by Dell'Aquila et al. (2005). Therefore, we first assess the overall equivalence of the picture provided by the winter WAI and JSI extracted from the two reanalyses, by performing one- and two-dimensional Kolmogorov-Smirnov tests (_Fasano and Franceschini, 1987_) on the distributions of the single variables and on the joint PDFs. In both cases, the PDFs are equivalent at a confidence level larger than 95%.
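A minimal sketch of such an equivalence check is given below (Python). The arrays `wai_ncep` and `wai_era40` are placeholders standing in for the filtered winter WAI series from the two reanalyses; the Gaussian kernel estimate with fixed width h follows the approach of Silverman (1986), while the two-sample comparison uses only the one-dimensional Kolmogorov-Smirnov statistic (the two-dimensional variant of Fasano and Franceschini is not available in scipy and is omitted here).

```python
import numpy as np
from scipy.stats import ks_2samp

def gaussian_kde_fixed_h(sample, grid, h):
    """Kernel density estimate with a Gaussian kernel of fixed width h."""
    z = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (sample.size * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
# Placeholder WAI series (m); the real input would be the filtered DJF indices 1957-2002.
wai_ncep = rng.normal(60.0, 8.0, 4000)
wai_era40 = rng.normal(60.0, 8.0, 4000)

# One-dimensional two-sample Kolmogorov-Smirnov test on the WAI distributions.
stat, p_value = ks_2samp(wai_ncep, wai_era40)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")

# Kernel estimates of the two PDFs with a common smoothing parameter h.
grid = np.linspace(30.0, 100.0, 351)
h = 3.5
pdf_ncep = gaussian_kde_fixed_h(wai_ncep, grid, h)
pdf_era40 = gaussian_kde_fixed_h(wai_era40, grid, h)
print("max |pdf_NCEP - pdf_ERA40| =", np.abs(pdf_ncep - pdf_era40).max())
```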
Therefore, we can safely consider the results obtained with NCEP as representative for both reanalyses and highlight the main differences where necessary. The empirical joint PDF, constructed by means of two-dimensional Gaussian estimators (Figure 2), presents multiple, well defined, peaks distributed over the WAI-JSI space. A major peak (point A in Figure 2) corresponds to weak upper tropospheric jet (JSI \\(\\sim\\) 40 m s-1) and low-to-intermediate activity of the planetary waves (WAI \\(\\sim\\) 60 m). A second peak (point B) corresponds to intermediate values of JSI and low values of WAI (JSI \\(\\sim\\) 43 m s-1; WAI \\(\\sim\\) 55 m). The third peak (point C) corresponds to similar intensities of the jet (JSI \\(\\sim\\) 45 m s-1) and very high values of WAI (\\(\\sim\\) 70 m). The fourth peak (point D), corresponding to intense sub-tropical jet (JSI \\(\\sim\\) 50 m s-1), features relatively weak activity of planetary waves (WAI \\(\\sim\\) 55 m). These features are consistent with the theoretical framework sketched in Figure 1, where three different regions can be separated, characterized by low, intermediate and high intensities of the tropospheric jet and by a different number of equilibrium amplitudes of the planetary waves. While the 2d joint PDF essentially provides a qualitative view on the properties of the system, a more stringent statistical interpretation of such analogy can be highlighted by considering the distribution of WAI obtained by fixing the range of JSI variability. We split the entire WAI-JSI space into three sectors, characterized by low (38 m s-1 \\(<\\) JSI \\(<\\) 41 m s-1), intermediate (42 m s-1 \\(<\\) JSI \\(<\\) 47 m s-1) and high (48 m s-1 \\(<\\) JSI \\(<\\) 52 m s-1) intensities of the jet, each sector comprising about 1/6, 1/3, and 1/6 of the total sample population, respectively. The empirical distributions of WAI (Figure 2b) are meant to be a statistically robust representation of the atmospheric planetary waves in each sub range of the JSI values. Well-distinct peaks are observed in the intermediate range, while for weak or strong JSI unimodal distributions appear. Notice that strong oceanic tropical forcing (El Nino) [_Philander, 1990_] implies zonal elongation and strengthening of the sub-tropical jet with respect to normal conditions, resulting in strong JSI (more than 48 m/s), and locating the El Nino years in the upper region of the phase-space (Figure 2a). Statistical robustness in the properties of the PDFs is required in order to avoid artificial results [_Stephenson et al., 2004_]. Since we are testing the hypothesis of having a specific number of peaks in each of the considered JSI sub-ranges, we estimate the optimal kernel width \\(h_{0}\\) by generating an ensemble of surrogate datasets (1000 members) and then choosing the value of \\(h\\) that maximizes the trade-off between having as many surrogate distributions with the correct number of peaks and as few surrogate distributions with the wrong number of peaks. The surrogate datasets are generated with a bootstrap Monte Carlo experiment in which 45 winters are selected randomly with replacement from the reanalysis period lasting from 1958 to 2002, for both JSI and WAI at the same time. 
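A minimal sketch of the bootstrap selection of the kernel width described above is given below; the data containers, grids and peak-counting details are assumptions made for illustration and do not reproduce the authors' actual code.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelmax

def n_peaks(sample, h, grid):
    """Number of local maxima of a Gaussian-kernel PDF with (approximate) width h."""
    pdf = gaussian_kde(sample, bw_method=h / sample.std(ddof=1))(grid)
    return len(argrelmax(pdf)[0])

def surrogate_peak_fraction(wai_by_winter, jsi_by_winter, jsi_range, h,
                            expected_peaks, n_surr=1000, seed=0):
    """Fraction of bootstrap surrogates (winters resampled with replacement)
    whose WAI distribution in the chosen JSI range has the expected number of peaks."""
    rng = np.random.default_rng(seed)
    n_win = len(wai_by_winter)
    grid = np.linspace(30.0, 100.0, 400)        # WAI grid in metres (assumed range)
    hits = 0
    for _ in range(n_surr):
        pick = rng.integers(0, n_win, size=n_win)        # sampling with replacement
        wai = np.concatenate([wai_by_winter[i] for i in pick])
        jsi = np.concatenate([jsi_by_winter[i] for i in pick])
        sel = (jsi >= jsi_range[0]) & (jsi <= jsi_range[1])
        hits += n_peaks(wai[sel], h, grid) == expected_peaks
    return hits / n_surr
```

Scanning \\(h\\) over a range of values and comparing these fractions across the three JSI sectors reproduces the kind of trade-off used to select \\(h_{0}\\).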
We stratify the data according to the values of JSI and, after spanning a whole range of values of \\(h\\) we find that the best trade-off is realized for \\(h=h_{0}=3.5\\,m\\) with a broad maximum ranging from \\(h=3\\,m\\) to \\(h=4\\,m\\), in close agreement with a recent study, where a different measure of the skill score was considered [_Christiansen, 2005_]. For this value of \\(h\\) we have that among the surrogate datasets extracted from NCEP (ERA40), in the low-JSI range 94% (68%) of the realizations have unimodal distribution of WAI; in the intermediate-JSI range 88% (82%) of the realizations have bimodal distribution and in the high-JSI range, 79% (55%) of the realizations have unimodal distribution. It is important to test null-hypothesis of unimodality for the intermediate range of JSI. A first test is performed by constructing a Gaussian distribution equivalent to the WAI pdf for intermediate JSI. A second test is performed by considering the unimodal distribution obtained by increasing the initial smoothing parameter in the WAI pdf until a unimodal distribution is attained. For both NCEP-ERA40 reanalysis, such marginal kernel width h\\({}_{\\text{u}}\\) is 5 m. From these two unimodal distributions, we extract a set of 10000 WAI surrogate winter time series and then, computing each time the pdf with the kernel width h\\({}_{0}\\), we count the fraction of the computed WAI pdfs having a dip larger than that obtained with the original data. For the test with the Gaussian-equivalent distribution, we have that for both NCEP and ERA40 datasets, the bimodality is statistically significant at the 99.9% level. For the test with the marginal unimodal distribution, the null-hypothesis can be rejected with over 97% (91%) confidence level for the NCEP (ERA40) dataset. These results are only weakly sensitive to the choice of the kernel width. Since our 1D pdf results from selecting days with inter-mediate JSI values, it is not the result of a continuous time-series. So, it is intrinsically impossible to take into account time lagged correlation as done in Christiansen (2005) when assessing the significance of the 1-D pdf properties. Therefore, the statistical confidence of this test may be overestimated. The position and height of the peaks of wave activity corresponding to each class of intensity of the jet are safely characterized in a statistical sense. In particular, the two peaks observed for intermediate values of JSI are well separated (\\(\\sim\\) 20 m). Moreover, the two reanalyses agree in the position of the peaks, despite the different degree of statistical confidence. Our results appear robust since similar and consistent results - although with different degree of statistical confidence - are obtained with all the values of \\(h\\) ranging from 2 m to 5 m. Using the density PDFs given in Figure 2b to stratify the data, the time-mean maps of the zonal anomalies for all days corresponding to the peaks of the bimodal distribution and of the unimodal distributions have been computed. For peaks A and D (Figure 3 i-iii), we select the days characterized by a range of WAI values associated to a population which is higher than half the height of the corresponding distribution. For peaks B and C, we consider the days falling in a range of WAI values that starts at the dip value of the bimodal distribution and extends to twice the distance from the associated relative maximum. 
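For completeness, a minimal sketch of the dip-based significance test described earlier in this section follows; the unimodal null sampler, the grid and the names are illustrative assumptions, and the statistic used here is the depth of the dip between the two main peaks of the kernel estimate.

```python
import numpy as np
from scipy.stats import gaussian_kde

def dip_depth(pdf):
    """Depth of the dip between the two highest local maxima (0 if unimodal)."""
    peaks = [i for i in range(1, len(pdf) - 1)
             if pdf[i] > pdf[i - 1] and pdf[i] > pdf[i + 1]]
    if len(peaks) < 2:
        return 0.0
    i1, i2 = sorted(sorted(peaks, key=lambda i: pdf[i])[-2:])
    return min(pdf[i1], pdf[i2]) - pdf[i1:i2 + 1].min()

def dip_significance(wai_obs, sample_null, h0, n_surr=10000, seed=1):
    """Fraction of unimodal-null surrogates with a dip at least as deep as observed."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(wai_obs.min() - 10.0, wai_obs.max() + 10.0, 400)
    kde = lambda x: gaussian_kde(x, bw_method=h0 / x.std(ddof=1))(grid)
    d_obs = dip_depth(kde(wai_obs))
    deeper = sum(dip_depth(kde(sample_null(rng, wai_obs.size))) >= d_obs
                 for _ in range(n_surr))
    return deeper / n_surr
```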
The difference map between the two intermediate JSI patterns B and C (Figure 3-ii) indicates a more amplified wavenumber three component for the pattern C, with higher geopotential height centers over the northern Pacific and Alaska, the Northern Sea, and the Siberian land. Previous analyses [_Hansen and Sutera, 1995; Christiansen 2005_], which did not stratify the data with the jet strength, highlighted a predominance of the wavenumber two in the difference map. The eddy field corresponding to the low JSI (pattern A) shows the lowest ridge over the Rockies and the deepest Greenland trough, while for the high JSI (pattern D) the highest ridge over the Rockies is observed. ## 4 Conclusions. Our analysis indicates that in the Northern Hemisphere the statistics of the planetary atmospheric waves can be characterized in terms of the sub-tropical jet, consistently with the physical framework proposing the nonlinear modification of the topographic resonance due to the nonlinear wave self-interaction [_Benzi et al., 1986_] as the basic mechanism for the low-frequency variability of the atmosphere. Our physically-based approach is somewhat complementary to the dynamical system-based approach presented in Christiansen [2005]. We have shown, using the available NCEP and ERA40 global reanalyses, that for intermediate jet strength an indicator of the planetary waves presents a bimodal behavior, while for higher or lower values of the jet strength the estimated pdf is unimodal. Thus, the interpretation of the interaction between the tropics and the mid-latitudes should be addressed not only by considering the wave trains emanating from the tropics [_Hoskins and Ambrizzi, 1993_], but also in the perspective of the role of the sub-tropical jet in defining the statistical properties of the planetary waves. The zonal mean circulation (Hadley circulation) and the tropical oceanic forcings (e.g. ENSO) play a relevant role in this view. In this perspective, the strongest El Nino years could be located in the upper branch of a hysteresis cycle, where the system admits a unique solution. A similar conclusion has been obtained by Molteni et al. [2005]. ## References * Baur (1951) Baur, F. (1951), Extended range weather forecasting. _Compendium of Meteorology_, Amer. Meteorol. Soc., 814-833. * Benzi and Speranza (1989) Benzi, R., Speranza, A. (1989), Statistical properties of low frequency variability in the Northern Hemisphere. _J. Climate, 2_, 367-379. * Benzi et al. (1986) Benzi, R., Malguzzi, P., Speranza, A., Sutera, A. (1986), The statistical properties of general atmospheric circulation: observational evidence and a minimal theory of bimodality. _Quart. J. Roy. Met. Soc., 112_, 661-674. * Charney and Devore (1979) Charney, J. G., Devore, J.C. (1979) Multiple flow equilibria in the atmosphere and blocking. _J. Atmos. Sci., 36_, 1205-1216. * Christiansen (2005) Christiansen, B. (2005) On the bimodality of planetary-scale atmospheric wave amplitude index. _J. Atmos. Sci._, in press. * Corti et al. (1999) Corti, S., Molteni, F., Palmer, T.N. (1999) Signature of recent climate change in frequencies of natural atmospheric circulation regimes. _Nature, 398_, 799-802. * Dell'Aquila et al. (2005) Dell'Aquila, A., Lucarini, V., Ruti, P.M., Calmanti, S. (2005) Hayashi Spectra of the Northern Hemisphere Mid-latitude Atmospheric Variability in the NCEP-NCAR and ECMWF Reanalyses. _Climate Dynamics_, DOI: 10.1007/s00382-005-0048-x. * Dole (1983) Dole, R. M. 
(1983) Persistent anomalies of the extratropical Northern Hemisphere wintertime circulation. _Large-Scale Dynamical Processes in the Atmosphere_, B. J. Hoskins and R. P. Pearce, eds., Academic Press, NY, 95-109. * Fasano and Franceschini (1987) Fasano, G., Franceschini, A. (1987) A multidimensional version of the Kolmogorov-Smirnov test. _Mon. Not. R. Astr. Soc., 225_, 155-170. * Hansen and Sutera (1986) Hansen, A.R., Sutera, A. (1986) On the probability density distribution of Planetary-Scale Atmospheric Wave amplitude. _J. Atmos. Sci., 43_, 3250-3265. * Hansen and Sutera (1995) Hansen, A.R., Sutera, A. (1995) The probability density distribution of Planetary-Scale Atmospheric Wave amplitude Revisited. _J. Atmos. Sci., 52_, 2463-2472. * Held and Hou (1980) Held, I.M., Hou, A.Y. (1980) Nonlinear axially symmetric circulations in a nearly inviscid atmosphere. _J. Atmos. Sci., 37_, 515-533. * Hoskins and Ambrizzi (1993) Hoskins, B.J., Ambrizzi, T. (1993) Rossby wave propagation on a realistic longitudinally varying flow. _J. Atmos. Sci., 50_, 1661-1671. * Kistler et al. (2001) Kistler, R., et al. (2001) The NCEP-NCAR 50-year reanalysis: Monthly means CD-ROM and documentation. _Bull. Am. Meteorol. Soc., 82_, 247-267. * Malguzzi and Speranza (1981) Malguzzi, P., Speranza, A. (1981) Local Multiple Equilibria and Regional Atmospheric Blocking. _J. Atmos. Sci., 9_, 1939-1948. * Malguzzi et al. (1997) Malguzzi, P., Speranza, A., Sutera, A., Caballero, R. (1997) Nonlinear amplification of stationary Rossby waves near resonance, Part II. _J. Atmos. Sci., 54_, 2441-2451. * Mo and Ghil (1988) Mo, K., Ghil, M. (1988) Cluster analysis of multiple planetary flow regimes. _J. Geophys. Res., 93_, 10927-10952. * Molteni et al. (2005) Molteni, F., F. Kucharski and S. Corti (2005) On the predictability of flow-regime properties on interannual to interdecadal timescales. In: "Predictability of weather and climate", T.N. Palmer and R. Hagedorn, Eds., _Cambridge University Press_, in press. * Nitsche et al. (1994) Nitsche, G., J. M. Wallace, and C. Kooperberg (1994) Is there evidence of multiple equilibria in planetary wave amplitude statistics? _J. Atmos. Sci., 51_, 314-322. * Philander (1990) Philander, S.G. (1990) _El Nino, La Nina, and the Southern Oscillation_. San Diego, Academic Press. * Rex (1950) Rex, D.F. (1950) Blocking action in the middle troposphere and its effect upon regional climate. Part 2: The climatology of blocking action. _Tellus, 2_, 275-301. * Silverman (1986) Silverman, B.W. (1986) _Density Estimation for Statistics and Data Analysis_. Chapman & Hall. * Simmons and Gibson (2000) Simmons, A. J., Gibson, J.K. (2000) The ERA-40 Project Plan, ERA-40 Project Report Series No. 1, ECMWF, 62 pp. * Stephenson et al. (2004) Stephenson, D.B., Hannachi, A., O'Neill, A. (2004) On the existence of multiple climate regimes. _Quart. J. Roy. Met. Soc., 130_, 583-605. ## Acknowledgments. The authors wish to thank A. Sutera and two anonymous referees for useful suggestions. NCEP data have been provided by the NOAA-CIRES Climate Diagnostics Center (http://www.cdc.noaa.gov/). The ECMWF ERA-40 data have been obtained from the ECMWF data server. Figure 1: Qualitative behavior of the amplitude of the orographic wave (WAI) as a function of an indicator of the jet strength (JSI). Stable equilibria are indicated by the large dots. Figure 2: a) Two-dimensional joint pdf using the index of the planetary waves (WAI) and the index of the tropospheric jet strength (JSI). 
b) One-dimensional pdf of the planetary waves index (WAI) for low JSI (red), intermediate (blue) and for high JSI (black). NCEP dataset, 1958-2002 winters. Units: WAI [m], JSI [m/s].
Understanding the atmospheric low-frequency variability is of crucial importance in fields such as climate studies, climate change detection, and extended-range weather forecasting. The Northern Hemisphere climate features the planetary waves as a relevant ingredient of the atmospheric variability. Several observations and theoretical arguments seem to support the idea that the winter planetary wave indicator obeys non-Gaussian statistics and may present a multimodal probability density function, thus characterizing the low-frequency portion of the climate system. We show that the upper tropospheric jet strength is a critical parameter in determining whether the planetary wave indicator exhibits a uni- or bimodal behavior, and we determine the relevant threshold value of the jet. These results are obtained by considering the data of the NCEP-NCAR and ECMWF reanalyses for the overlapping period. Our results agree with the non-linear orographic theory, which explains the statistical non-normality of the low-frequency variability of the atmosphere and its possible bimodality.
Write a summary of the passage below.
arxiv-format/0508354v8.md
# The Effects of Metallicity and Grain Size on Gravitational Instabilities in Protoplanetary Disks Kai Cai, Richard H. Durisen, Scott Michael, Aaron C. Boley Astronomy Department, Indiana University, Bloomington, IN 47405 [email protected] Annie C. Mejia Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195-1580 [email protected] Megan K. Pickett Department of Chemistry and Physics, Purdue University Calumet, 2200 169th St., Hammond, IN 46323 [email protected] Paola D'Alessio Centro de Radioastronomia y de Astrofisica, Apartado Postal 72-3, 58089 Morelia, Michoacan, Mexico [email protected] ## 1 Introduction The past decade has seen the discovery of over 150 exoplanets ([http://www.obspm.fr/planets](http://www.obspm.fr/planets)). One statistical trend that has emerged from these data is that the probability of finding a gas giant planet around a star with current techniques increases with the host star's metallicity (Santos et al., 2001; Fischer & Valenti, 2005). As shown by Fischer & Valenti (2005), the high metal content of planet host stars seems to be primordial. Therefore, this trend, if real (Sozzetti et al., 2005), indicates that short-period gas giant planets are more likely to occur in metal-rich than in metal-poor protoplanetary disks. The two contending theories for gas giant formation are core accretion plus gas capture (Pollack et al., 1996) and disk instability (Boss, 2002; Mayer et al., 2004). Calculations show that the metallicity relation can be explained within the framework of core accretion (e.g., Ida & Lin, 2004; Kornet et al., 2005). For disk instability, Boss (2002) finds that, in his three-dimensional hydrodynamics disk simulations with radiative cooling, clump formation by disk instability occurs for all metallicities over the range 0.1 to 10 Z\\({}_{\\odot}\\), due to rapid cooling by convection (Boss, 2004), and he attributes the abundance of short period gas giants around high metallicity stars to migration (Boss, 2005), a mechanism also invoked by Ida & Lin (2004) to explain part of the metallicity correlation. By contrast, Mejia (2004), who uses a somewhat more sophisticated treatment of radiative boundary conditions, finds much longer cooling times and no fragmentation into dense clumps in her disk instability simulations. Here we report results of new disk calculations based on Mejia's methods in which the opacity is varied by using different metallicities and grain sizes. Even over a much narrower range of metallicities than considered by Boss (2002), we find that the strength of the GI's does vary noticeably and that disk fragmentation is not seen for any metallicity or grain size tested. Methods We conduct protoplanetary disk simulations using the Indiana University Hydrodynamics Group code (see Pickett et al., 1998, 2000, 2003; Mejia, 2004; Mejia et al., 2005), which solves the equations of hydrodynamics in conservative form on a cylindrical grid \\((r,\\phi,z)\\) to second order in space and time using an ideal gas equation of state. Self-gravity and shock mediation by artificial bulk viscosity are included. Reflection symmetry is assumed about the equatorial plane, and free outflow boundaries are used along the top, outer, and inner edges of the grid. We adopt the treatment of radiative physics detailed in Mejia (2004) with few modifications. Let \\(\\tau_{R}\\) be the optical depth, defined by using the Rosseland mean opacity measured vertically down from above. 
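As an illustration of this definition, the vertical optical depth can be accumulated cell by cell from the top of the grid downward; the sketch below assumes a particular array layout and is not the actual implementation in the Indiana University code.

```python
import numpy as np

def vertical_optical_depth(rho, kappa_R, dz):
    """Rosseland-mean optical depth measured vertically down from above.

    rho, kappa_R : density and Rosseland mean opacity on an (nr, nphi, nz) grid,
                   with the last z index at the top of the grid (assumption)
    dz           : vertical cell size (scalar or broadcastable array)
    """
    dtau = rho * kappa_R * dz
    # cumulative sum from the top of the grid downward along z
    return np.flip(np.cumsum(np.flip(dtau, axis=-1), axis=-1), axis=-1)
```

The resulting \\(\\tau_{R}\\) field is what decides, cell by cell, which of the two radiative treatments described next is applied.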
Energy flow in cells with \\(\\tau_{R}>2/3\\) is calculated in all three directions using flux-limited diffusion (Bodenheimer et al., 1990). Cells with \\(\\tau_{R}<2/3\\), in the disk atmosphere and in the outer disk, cool radiatively using an optically thin LTE emissivity. Atmosphere heating by high-altitude shocks and upward moving photons from the photosphere are included. In this paper, we also assume that an external envelope heated by the star (Natta, 1993; D'Alessio, Calvet & Hartmann, 1997) shines vertically down on the disk. This IR irradiation is characterized by a black body flux with a temperature \\(T_{irr}\\). The optically thick and thin regions are coupled, over one or two cells, by an Eddington-like grey atmosphere fit that defines the boundary flux for the diffusion approximation. The opacities and molecular weights for a solar composition are from D'Alessio et al. (2001), with a power-law grain size distribution of \\(n(a)\\sim a^{-3.5}\\) ranging from 0.005 \\(\\mu\\)m to a largest grain size \\(a_{max}\\) that can be varied. To model variations in metallicity Z, the mean opacities are multiplied by a factor \\(f_{\\kappa}=\\) Z/Z\\({}_{\\odot}\\), as was done by Boss (2002). Tests of our radiative scheme for a vertically stratified gas layer with a constant gravity, a constant input flux at the base, and a grey opacity show relaxation to an Eddington-like solution with the correct flux from the photospheric layers. ## 3 Simulations ### Initial Model and the Set of Simulations The initial axisymmetric model for all the calculations is the same as that used by Mejia et al. (2005). The central star is 0.5 \\(M_{\\odot}\\), and the nearly Keplerian disk of 0.07 \\(M_{\\odot}\\) has a surface density \\(\\Sigma(r)\\propto r^{-0.5}\\) from 2.3 AU to 40 AU. The initial grid has (256, 128, 32) cells in \\((r,\\phi,z)\\) above the midplane. When the disk expands at the onset of GI's, the grid is extended radially and vertically. The initial minimum value of the Toomre stability parameter \\(Q\\) is about 1.5, and so the disk is marginally unstable to GI's. The initial model is seeded with low-amplitude random density noise. We use \\(T_{irr}\\) = 15 K, which is lower than the 50 K assumed in Boss (2002) because our larger and less massive disk is mostly stabilized by \\(T_{irr}\\) = 50 K. In this paper, we present simulations with four metallicities Z = 1/4 Z\\({}_{\\odot}\\) (one-quarter solar metallicity), 1/2 Z\\({}_{\\odot}\\), Z\\({}_{\\odot}\\), and 2 Z\\({}_{\\odot}\\). The 1/4 Z\\({}_{\\odot}\\) simulation was started from the 1/2 Z\\({}_{\\odot}\\) disk after 13.0 outer rotation periods (ORPs) of evolution, to save computing resources. Here 1 ORP (about 250 yrs) is the initial rotation period at 33 AU. The varied metallicity cases use a maximum grain size \\(a_{max}\\) = 1\\(\\mu\\)m in the dust opacities. An additional simulation with \\(a_{max}\\) = 1 mm and Z = Z\\({}_{\\odot}\\) is conducted to explore the effects of grain growth. ### Results The current calculations resemble those presented in Mejia (2004) and Mejia et al. (2005). The disks remain fairly axisymmetric until a burst phase of rapid growth in non-axisymmetric structure. Subsequently, the disks gradually transition into a quasi-steady asymptotic phase, where heating and cooling are in rough balance, and average quantities change slowly (see also Lodato & Rice, 2005). Table 1 summarizes some of the results. 
In the table, Duration refers to the simulation length measured in ORPs, \\(t_{1}\\) is the time in ORPs at which the burst phase begins, \\(t_{2}\\) is the approximate time in ORPs when the simulation enters the asymptotic state, \\(\\langle A\\rangle\\) is a time-averaged integrated Fourier amplitude for all non-axisymmetric structure (see below), \\(t_{cool}\\) is the final global cooling time obtained by dividing the final total internal energy of the disk by the final total luminosity, and Thin% is the percentage of disk volume that is optically thin during the asymptotic phase. One noticeable effect is that the onset of the burst phase (\\(t_{1}\\)) is delayed for higher metallicity and larger grain size (Table 1), as expected due to higher opacity and therefore slower cooling. Note that, over the bulk of our large cool disk, increasing \\(a_{max}\\) increases the opacity. Although the time to reach the asymptotic phase is relatively insensitive to grain size and metallicity, the overall final \\(t_{cool}\\) listed in Table 1 illustrates that the correlation between cooling time and opacity carries over to late times. During the asymptotic phase, in all cases, the Toomre \\(Q\\) values remain roughly constant with time, with values ranging between 1.3 to 1.8 for \\(r\\) = 10 to 40 AU, and the mass inflow rates peak near 15 AU at \\(\\sim 10^{-6}M_{\\odot}\\)/yr, with negligible difference between 1/2 Z\\({}_{\\odot}\\) and 2 Z\\({}_{\\odot}\\) to the accuracy that we can measure these inflows (Mejia et al., 2005). Although there are some regions of superadiabatic gradients, the rapid overall convective cooling reported by Boss (2002, 2004) does not occur. We do see upward and downward motions, which we attribute to hydraulic jumps (Boley & Durisen2006). Whether or not some of these motions are actually thermal convection, they do not result in rapid cooling for our disks. In Figure 1, which shows midplane densities at 15 ORPs, the spiral structure appears stronger for 1/4 Z\\({}_{\\odot}\\) than for 2 Z\\({}_{\\odot}\\). In order to quantify differences in GI strength, we compute integrated Fourier amplitudes (Imamura et al., 2000) \\[A_{m}=\\frac{\\int\\rho_{m}rdrdz}{\\int\\rho_{0}rdrdz},\\] where \\(\\rho_{0}\\) is the axisymmetric component of the density and \\(\\rho_{m}\\) is the amplitude of the \\(\\cos(m\\phi)\\) component. Although variable, after \\(\\sim 14\\) ORPs, the \\(A_{m}\\)'s for most \\(m\\)'s are greater for 1/4 Z\\({}_{\\odot}\\) than for higher Z's. To measure total nonaxisymmetry, we sum the \\(A_{m}\\)'s and average this sum over 14.5 to 15.5 ORPs. As shown in Table 1, this global measure \\(\\langle A\\rangle\\) is greatest for 1/4 Z\\({}_{\\odot}\\) and generally decreases with increasing metallicity and grain size. Figure 2 plots the cumulative energy loss due to cooling computed for only half the disk as a function of time. The upper curves show energy loss from the disk interior after compensating for energy input by residual irradiation and by the glowing disk upper atmosphere; the lower curves show net energy loss from optically thin regions after accounting for heating due to envelope irradiation. Due to our restricted vertical resolution and use of the Eddington atmosphere fit over one or two vertical cells, the \"thick\" curves effectively include most of the photospheric layers for most columns through the disk. 
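Returning to the integrated Fourier amplitudes defined above, a minimal sketch (assuming the density is stored on the cylindrical \\((r,\\phi,z)\\) grid with hypothetical array names; not the authors' analysis code) of how the \\(A_{m}\\) could be evaluated:

```python
import numpy as np

def fourier_amplitudes(rho, r, dr, dz, m_max=8):
    """A_m = int(rho_m r dr dz) / int(rho_0 r dr dz) for m = 1..m_max.

    rho : density on an (nr, nphi, nz) grid
    r   : cell-centre radii, shape (nr,)
    """
    nphi = rho.shape[1]
    coeff = np.fft.rfft(rho, axis=1) / nphi          # azimuthal Fourier coefficients
    rho_0 = coeff[:, 0, :].real                      # axisymmetric component
    weight = r[:, None] * dr * dz                    # r dr dz volume element
    norm = np.sum(rho_0 * weight)
    return {m: np.sum(2.0 * np.abs(coeff[:, m, :]) * weight) / norm
            for m in range(1, m_max + 1)}
```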
The \"thin\" curves tally additional cooling from extended layers above the photospheric cells, usually with \\(\\tau_{R}<<1\\), plus the parts of the outer disk that are optically thin all the way to the midplane. The initial cooling rates for the optically thick regions plus photosphere clearly differ. In fact, the initial slopes of the \"thick\" curves give \\(t_{cool}\\sim\\) Z/Z\\({}_{\\odot}\\) ORPs. However, the initial disks are far from radiatively relaxed, and so there are transients. Remarkably, by the asymptotic phase, all the disk interior-plus-photosphere curves converge to similar energy loss rates. During these late times, the differences between the total cooling rates are dominated by the optically thin regions, which are larger for the lower metallicity cases, as indicated by the Thin% entry in Table 1. The overall asymptotic phase \\(t_{cool}\\)'s in Table 1, based on summing the thick and thin loss rates, are longer for higher metallicity and larger grain size. Altogether, the evidence in Table 1 and Figures 1 and 2 shows that higher opacity leads to slower cooling and that slower cooling produces lower GI amplitudes. We remind the reader that we detect these differences over a much narrower range of metallicities (1/4 to 2 Z\\({}_{\\odot}\\)) than considered by Boss (2002) (0.1 to 10 Z\\({}_{\\odot}\\)). As in Mejia (2004), except for brief transients during the burst phases of some runs, these disks do not form dense clumps, in apparent disagreement with Boss (2002). To investigate whether the disk evolution depends on spatial resolution in the asymptotic phase (Boss, 2000, 2005; Pickett et al., 2003), both the \\(1/4\\) Z\\({}_{\\odot}\\) and \\(2\\) Z\\({}_{\\odot}\\) simulations are extended for another 2 ORP's with quadrupled azimuthal resolution (512 zones), and the disks do not fragment into dense clumps. This is consistent with the analytic arguments in Rafikov (2005) that an unstable disk and fast radiative cooling are incompatible constraints for realistic disks at 10 AU (see Boss 2005 for a different perspective). Indeed, if \\(t_{cool}\\) listed in Table 1 is a good measure of local cooling times in these disks, we do not expect fragmentation. Gammie (2001) shows that fragmentation occurs only if the local \\(t_{cool}\\) is less than about half the local disk orbit period \\(P_{rot}\\)(see also Rice et al., 2003; Mejia et al., 2005), except possibly near sharp opacity edges (Johnson & Gammie, 2003). We only find locallized cooling times shorter than 0.5 \\(P_{rot}\\) in the asymptotic phase of the 2 Z\\({}_{\\odot}\\) case, and then only in the 30 to 40 AU region, which is optically thin. This occurs because, even though \\(t_{cool}\\sim\\) Z in optically thick regions (higher optical depth), \\(t_{cool}\\sim\\) Z\\({}^{-1}\\) in thin ones (more emitters). As a result, this disk displays the steepest drop of local \\(t_{cool}\\) with \\(r\\). The short local \\(t_{cool}\\)'s appear to be highly variable and transient. The continuation of this simulation for 2 ORPs at higher azimuthal resolution (512 zones) does not show evidence for fragmentation into clumps. It could prove important to push our simulations to higher Z in the future. ## 4 Discussion Our results show that GI strength decreases as metallicity increases and, contrary to Boss (2002), that global radiative cooling is too slow for fragmentation into dense clumps. 
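As a minimal illustration of the fragmentation criterion quoted above (local cooling time shorter than about half the local orbit period), one could flag the relevant radii as follows; the Keplerian approximation and variable names are assumptions made for the sketch.

```python
import numpy as np

G = 6.674e-8        # cgs
M_SUN = 1.989e33    # g

def fragmentation_prone(r_cm, t_cool_s, m_star=0.5 * M_SUN):
    """True where t_cool < 0.5 P_rot, the approximate Gammie (2001) threshold."""
    omega = np.sqrt(G * m_star / r_cm**3)            # Keplerian angular frequency
    return t_cool_s < 0.5 * (2.0 * np.pi / omega)
```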
In the asymptotic phase, cooling rates for the disk interior plus photospheric layers converge for all Z, but the total cooling, including the optically thin regions, is higher for lower Z. Thus, the optically thin upper atmosphere and outer disk play a role in the cooling rate of an evolved disk. In fact, the fractional volume of the optically thin regions becomes very large at late times (see Thin% in Table 1). Note also that the optically thick region fractional volume, \\(1-{\\rm Thin}\\%\\), varies roughly as Z. The greater surface area of the disk photosphere at higher Z tends to compensate for the higher opacity and leads to convergence of the cooling rates for the parts of the disk contained within the photospheric layers. In this respect, we confirm Boss's conclusion that the outcome of the radiative evolution is somewhat insensitive to metallicity. However, the important difference is that we do not see fragmentation into dense clumps, presumably because our cooling rates are much lower than in Boss (2002). For the 1mm case, the optically thin regions have a much smaller volume (Table 1) and contribute little to cooling. Outside the inner few AU, bigger grains make the disk more opaque to longer wavelengths, and \\(t_{cool}\\) is thus considerably larger, even initially. Our results argue against direct formation of gas giants by disk instability in two ways - the global radiative cooling times seem too long for fragmentation to occur and GI's are stronger overall for lower metallicity. Nevertheless, it is still possible that GI's play an important role in gas giant planet formation. Durisen et al. (2005) suggest that dense gas rings produced by GI's will enhance the growth rate of solid cores by drawing solids toward their centers (Haghighipour & Boss, 2003) and thereby accelerating core accretion. Such rings are indeed produced in the inner disks of all our calculations regardless of metallicity or grain size, and they appear to be still growing when the calculations end. In the weaker GI environments of high metallicity, there is less self-gravitating turbulence to interfere with the radial drift of solids (Durisen et al., 2005). In this way, rings may provide a natural shelter and gathering place for growing embryos and cores. The apparent disagreement between our results and those of Boss (2002, 2004) could be due to any number of differences in techniques and assumptions, such as artificial viscosity, opacities, equations of state, initial disk models and perturbations, grid shapes and resolution, and radiative boundary conditions, including the way that we handle irradiation. We are now collaborating with Boss in an effort to pinpoint which of these is the principal cause (K. Cai et al., in preparation). Preliminary results suggest that it is the radiative boundary conditions. We are therefore developing alternative techniques for disk radiative transfer that we hope are more reliable and accurate. We thank A.P. Boss and an anonymous referee for useful comments. This work was supported in part by NASA Origins of Solar Systems grants Nos. NAG5-11964 and NNG05GN11G, by NASA Planetary Geology and Geophysics grant No. NAG5-10262, and by a Shared University Research grant from IBM, Inc. to Indiana University. 
## References * () Bodenheimer, P., Yorke, H.W., Rozyczka, M., & Tohline, J.E., 1990, ApJ, 355, 651 * () Boley, A.C., & Durisen, R.H., 2006, ApJ, in press (astro-ph 0510305) * () Boss, A.P., 2000, ApJ, 536, L101 * () Boss, A.P., 2002, ApJ, 567, L149 * () Boss, A.P., 2004, ApJ, 610, 456 * () Boss, A.P., 2005, ApJ, 629, 535 * () D'Alessio, P., Calvet, N., & Hartmann, L., 1997, ApJ, 474, 397 * () D'Alessio, P., Calvet, N., & Hartmann, L., 2001, ApJ, 553, 321 * () Durisen, R.H., Cai, K., Mejia, A.C., & Pickett, M.K., 2005, Icarus, 173, 417 * () Fischer, D.A., & Valenti, J., ApJ, 622, 1102 * () Gammie, C.F., 2001, ApJ, 553, 174 * () Haghighipour, N., & Boss, A.P., 2003, ApJ, 583, 996 * () Ida, S., & Lin, D.N.C., ApJ, 616, 567 * () Imamura, J.N., Durisen, R.H., & Pickett, B.K., 2000, ApJ, 528, 946 * () Johnson, B.M., & Gammie, C.F., 2003, ApJ, 597, 131 * () Kornet, K., Bodenheimer, P., Rozyczka, M., & Stepinski, T.F., 2005, A&A, 430, 1133 * () Lodato, G., & Rice, W.K.M., MNRAS, 358, 1489 * () Mayer, L., Quinn, T., Wadsley, J., & Stadel, J., 2004, ApJ, 609, 1045 * () Mejia, A.C., Ph.D. dissertation, Indiana University * () Mejia, A.C., Durisen, R.H., Pickett, M.K., & Cai, K., 2005, ApJ, 619, 1098 * () Natta, A., 1993, ApJ, 412, 761 * () Pickett, B.K., Cassen, P.M., Durisen, R.H., & Link, R. 1998, ApJ, 504, 468 * () Pickett, B.K., Cassen, P.M., Durisen, R.H., & Link, R. 2000, ApJ, 529, 1034 * ()Pickett, B.K., Mejia, A.C., Durisen, R.H., Cassen, P.M., Berry, D.K., & Link, R.P. 2003, ApJ, 590, 1060 * () Pollack, J.B., Hubickyj, O., Bodenheimer, P. Lissauer, J.J., Podolak, M., & Greenzwieg, Y., 1996, Icarus, 124, 62 * () Rafikov, R.R. 2005, ApJ, 621, L69 * () Rice, W.K.M., Armitage, P.J., Bate, M.R., & Bonnell, I.A., 2003, MNRAS, 339, 1025 * () Santos. N. C., Israelian, G., & Mayor, M., 2001, A&A, 373, 1019 * () Sozzetti, A., Latham, D. W., Torres, G., Stefanik, R. P., Boss, A. P., Carney, B. W., & Laird, J. B., in Proceedings of the Gaia Symposium \"The Three-Dimensional Universe with Gaia\" (ESA SP-576). Editors: C. Turon, K.S. O'Flaherty, M.A.C. Perryman., 309 \\begin{table} \\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \\hline \\hline Case & \\(f_{\\kappa}\\) & \\(a_{max}\\) & Durationa & \\(t_{1}\\)a & \\(t_{2}\\)a & \\(\\langle A\\rangle\\) & \\(t_{cool}\\)a & Thin\\% \\\\ \\hline 1/4 Z\\({}_{\\odot}\\) & 1/4 & 1 \\(\\mu\\)m & 3.8b & N/A & N/A & 1.29 & 2.1 & 99\\% \\\\ 1/2 Z\\({}_{\\odot}\\) & 1/2 & 1 \\(\\mu\\)m & 15.6 & 4.0 & 10 & 1.09 & 2.9 & 98\\% \\\\ Z\\({}_{\\odot}\\) & 1.0 & 1 \\(\\mu\\)m & 15.7 & 5.0 & 10 & 1.10 & 3.2 & 94\\% \\\\ 2 Z\\({}_{\\odot}\\) & 2.0 & 1 \\(\\mu\\)m & 16.5 & 5.0 & 10 & 0.72 & 3.7 & 86\\% \\\\ 1mm & 1.0 & 1 mm & 17.2 & 7.0 & 11 & 0.88 & 4.5 & 44\\% \\\\ \\hline \\end{tabular} \\end{table} Table 1: Simulation ResultsFigure 1: Midplane density maps at 15 ORPs for the 1/4 Z\\({}_{\\odot}\\) (left panel) and 2 Z\\({}_{\\odot}\\) (right panel) simulations. Each square is 113 AU on a side. Densities are displayed on a logarithmic scale running from light grey to black (print version) or dark blue to dark red (online version), as densities range from about 4.8\\(\\times 10^{-16}\\) to 4.8\\(\\times 10^{-11}\\) g cm\\({}^{-3}\\), respectively, except that both scales saturate to white at even higher densities. Figure 2: Cumulative total energy loss as a function of time due to radiative cooling in optically thick (upper set, labelled “thick”) and optically thin regions (lower set, labelled “thin”). Both of these are net global cooling after heating by irradiation is subtracted. 
The curves labeled by a metallicity value all use \\(a_{max}=1\\)\\(\\mu\\)m. The curves labeled “1mm” are for a calculation with \\(a_{max}=1\\) mm and solar metallicity. Note that the 1/4 Z\\({}_{\\odot}\\) run starts from the 1/2 Z\\({}_{\\odot}\\) simulation at about 13 ORPs. A color version of this figure appears online.
Observational studies show that the probability of finding gas giant planets around a star increases with the star's metallicity. Our latest simulations of disks undergoing gravitational instabilities (GI's) with realistic radiative cooling indicate that protoplanetary disks with lower metallicity generally cool faster and thus show stronger overall GI-activity. More importantly, the global cooling times in our simulations are too long for disk fragmentation to occur, and the disks do not fragment into dense protoplanetary clumps. Our results suggest that direct gas giant planet formation via disk instabilities is unlikely to be the mechanism that produced most observed planets. Nevertheless, GI's may still play an important role in a hybrid scenario, compatible with the observed metallicity trend, where structure created by GI's accelerates planet formation by core accretion.
accretion, accretion disks -- hydrodynamics -- instabilities -- planetary systems: formation -- planetary systems: protoplanetary disks
Give a concise overview of the text below.
arxiv-format/0509015v1.md
# Backbending phenomena in light nuclei at \\(A\\sim 60\\) mass region S. U. El-Kameesy [email protected] Department of Physics, Faculty of Science, Ain-Shams University, Cairo, Egypt. H. H. Alharbi [email protected] National Center for Mathematics and Physics, KACST, P.O. Box 6086, Riyadh 11442, Saudi Arabia, H. A. Alhendi [email protected] Department of Physics and Astronomy, College of Science, King Saud University, P.O. Box 2455, Riyadh 11454, Saudi Arabia November 6, 2021 ## I Introduction Investigations of the ground state bands of nuclei in the \\(A\\sim 60\\) mass region have recently become a particularly interesting subject in nuclear structure studies [1; 2; 3; 4; 5]. These nuclei exhibit a range of interesting features, including oblate and prolate deformations as well as rapid variations in shape as a function of both spin and particle number. The sudden disappearance of E2 strength at certain spins indicates a shape change and requires the inclusion of the upper pf configuration [6]. The shell model predictions have allowed calculations in the full fp model space. These calculations have shown that the collective properties of rotor-like energies, backbending and large B(E2) values can be reproduced. Hara _et al._ [5] studied the backbending mechanism of \\({}^{48}\\)Cr within the projected shell model (PSM) [7], which has been successful in describing well-deformed heavy nuclei and those of the transitional region [8]; it was concluded that the backbending in \\({}^{48}\\)Cr is due to a band crossing involving an excited band built on simultaneously broken pairs of neutrons and protons in the intruder subshell \\(f_{7/2}\\). This result differs from that of Tanaka _et al._ [9], based on the Cranked Hartree-Fock-Bogoliubov (CHFB) approach, which claims that the backbending in \\({}^{48}\\)Cr is not due to level crossing. The application of the generator coordinate method (GCM) has shown that the backbending in \\({}^{48}\\)Cr can be interpreted as due to crossing between the deformed and spherical bands [7]. Accordingly, while the backbending phenomena in medium heavy nuclei are well described and commonly understood as a band crossing phenomenon involving strong pairing correlation [10], the origin of backbending in medium light nuclei has been debated. Additionally, the role of the pairing force in the backbending phenomenon is not clearly outlined. The interest in the \\(1f_{7/2}\\) nuclei has been extended to levels above the \\(1f_{7/2}\\) band termination, in particular in connection with a possible building up of superdeformation [11], where the quality of SM calculations is probably not as good because of the possible contributions from the SD shell and the \\(1g_{9/2}\\) orbital [2]. Cranking model analysis of \\({}^{80}\\)Br energy levels reveals a signature inversion at a spin of \\(12\\hbar\\) and a probable neutron alignment at \\(\\hbar\\omega\\approx 0.7\\) MeV. The results are discussed within the framework of the systematics of similar bands in the lighter Br isotopes and the cranked-shell model [12]. The present study has been initiated because of the above contradictions among different models in describing the backbending phenomenon in medium light nuclei. Our study is based on applying a modified version of the exponential model with pairing attenuation [13]. It is hoped that such work provides strong evidence that the pairing force contribution still plays an effective role in the backbending mechanism in this mass region at \\(A\\sim 60\\). 
In the following section we briefly present the model and in the next section the model is applied to some even-even medium light nuclei. Finally, the last section contains our conclusion. ## II Model description Sood and Jain [14] have previously developed an exponential model based on the exponential dependence of the nuclear moment of inertia on pairing correlation [15]. They gave the following relation: \\[E\\left(I\\right)=\\frac{\\hbar^{2}}{2\\varphi_{0}}I\\left(I+1\\right)Exp\\left[\\Delta_{0}\\left(1-\\frac{I}{I_{c}}\\right)^{\\frac{1}{2}}\\right] \\tag{1}\\] Excellent results have been obtained by means of this approach in describing the ground state bands in deformed nuclei up to the point where backbending occurs. They selected the value \\(18\\hbar\\) for \\(I_{C}\\) as an input cutoff, which corresponds to the point where the rotational frequency \\(\\omega\\) in the \\(\\varphi-\\omega^{2}\\) plots reaches a minimum value and the pairing correlations disappear completely. Zhou and Zheng [17] have demonstrated that \\(I_{C}=85\\hbar\\) is a suitable choice in their calculations concerning superdeformed bands near \\(A\\sim 190\\), since the pairing correlation in that region is still strong even at very high rotational frequency. For medium light nuclei, \\(I_{C}\\) can take values smaller than \\(18\\hbar\\) because the backbending phenomenon in this region (\\(A\\approx 60\\)) lies at spin \\(I\\approx 10\\hbar\\) [4]. These works led us to use suitable \\(I_{C}\\) values to represent both the variation of the moment of inertia and the pairing correlation and to give the model the ability to describe well the \\(\\varphi-\\omega^{2}\\) plot regions, in particular the forward and down-bending regions, which lie after the backbending region. The modified version of the exponential model with pairing attenuation has the following form [13; 16]: \\[E\\left(I\\right)=\\frac{\\hbar^{2}}{2\\varphi_{0}}I\\left(I+1\\right)Exp\\left[\\Delta_{0}\\left(1-\\frac{I}{I_{c}}\\right)^{\\nu}\\right] \\tag{2}\\] where \\(\\varphi_{0}\\), \\(\\Delta_{0}\\) and \\(\\nu\\) are the free parameters of the model, which are adjusted to give a least-squares fit to the experimental data. This approach is supported by Ma and Rasmussen's suggestion that there is an exponential dependence of the moment of inertia on the parameter \\(\\nu\\) for a wide range of \\(\\nu\\) values [18]. ## III Application to some even-even medium light nuclei The anomalous behavior, i.e. backbending, of several medium light even-even nuclei (Zn, Ge, Se, Kr, Sr) has been studied using our improved modified version of the exponential model with pairing attenuation. The parameters of the model (Table 1) were determined by means of a least-squares fitting procedure involving the experimentally known energy levels [19]. The plots of the calculated data of \\(2\\varphi_{I}/\\hbar^{2}\\) versus \\((\\hbar\\omega)^{2}\\) for these isotopes are given in Figure 1, where the experimental data are also presented. 
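A minimal numerical sketch of Eq. (2) and of the least-squares determination of \\((\\varphi_{0},\\Delta_{0},\\nu)\\) for a fixed input \\(I_{c}\\) is given below; the parameterization in terms of the tabulated \\(2\\varphi_{0}/\\hbar^{2}\\), the initial guess and the fitting routine are illustrative assumptions, not the original fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def yrast_energy(I, twophi0_hbar2, delta0, nu, I_c):
    """Modified exponential model with pairing attenuation, Eq. (2).
    twophi0_hbar2 is the tabulated 2*phi0/hbar^2 (MeV^-1); valid for I <= I_c."""
    return I * (I + 1.0) / twophi0_hbar2 * np.exp(delta0 * (1.0 - I / I_c) ** nu)

def fit_band(spins, energies, I_c, p0=(20.0, 1.5, 0.3)):
    """Least-squares fit of (2*phi0/hbar^2, Delta0, nu) to an experimental yrast band,
    with I_c treated as a fixed input as described in the text."""
    model = lambda I, a, d, n: yrast_energy(I, a, d, n, I_c)
    popt, _ = curve_fit(model, np.asarray(spins, float),
                        np.asarray(energies, float), p0=p0)
    return popt
```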
From the excitation energies \\(E\\left(I\\right)\\) of the yrast bands we deduce the moment of inertia and the squared rotational frequency \\(\\omega^{2}\\) by using the well-known relations \\[\\frac{2\\varphi}{\\hbar^{2}}=\\frac{4I-2}{E\\left(I\\right)-E\\left(I-2\\right)}\\;, \\tag{3}\\] and \\[\\left(\\hbar\\omega\\right)^{2}=\\left(I^{2}-I+1\\right)\\left[\\frac{E\\left(I\\right)-E\\left(I-2\\right)}{2I-1}\\right]^{2} \\tag{4}\\] In Figure 1 the experimental data show clear evidence of the backbending phenomenon in all the presented nuclei at \\(I=8-12\\hbar\\). It is clear from the same figure that the predictions of the applied improved exponential model reproduce very well the backbending phenomenon in those nuclei and that its performance improves as \\(A\\) increases. This result may give an indication that the pairing force contribution to the backbending phenomenon increases as \\(A\\) increases in the mass region under investigation. Another noticeable success of the model is shown in Figure 1, concerning \\({}^{68}\\)Ge, \\({}^{72}\\)Se, \\({}^{78}\\)Kr and \\({}^{80}\\)Sr, where the forward and down-bending regions are very well described by its calculations. ## IV Conclusion The present results of the improved exponential model with pairing attenuation give a firm confirmation that the backbending in medium light nuclei at low spins (\\(I=8-12\\hbar\\)) can be interpreted as due to the pairing force, which supports the band crossing mechanism, in analogy with the earlier calculations [5] based on the projected shell model (PSM) and the generator coordinate method (GCM). Furthermore, our simple modified formula is able to describe well the forward and down-bending regions of the \\(\\varphi-\\omega^{2}\\) plots. \\begin{table} \\begin{tabular}{c c c c c} \\hline \\hline **Nucleus** & \\(2\\phi_{0}/\\hbar^{2}\\) & \\(\\Delta_{0}\\) & \\(\\nu\\) & \\(\\mathbf{I_{c}}\\) \\\\ \\hline \\({}^{90}\\)Cr & 39.8033 & 1.66542 & 0.678977 & 26 \\\\ \\({}^{62}\\)Zn & 16.1483 & 1.30409 & 0.313542 & 18 \\\\ \\({}^{64}\\)Ge & 15.5511 & 1.38726 & 0.234353 & 18 \\\\ \\({}^{68}\\)Ge & 25.516 & 1.90454 & 0.401219 & 20 \\\\ \\({}^{74}\\)Kr & 47.5087 & 1.03889 & 0.428975 & 40 \\\\ \\({}^{78}\\)Kr & 24.5699 & 1.31551 & 0.214932 & 80 \\\\ \\({}^{80}\\)Kr & 35.0581 & 1.53869 & 0.452602 & 20 \\\\ \\({}^{72}\\)Se & 38.7645 & 1.45564 & 0.375074 & 30 \\\\ \\({}^{74}\\)Se & 40.4335 & 1.26539 & 0.446761 & 28 \\\\ \\({}^{76}\\)Se & 40.7577 & 1.60714 & 0.180245 & 50 \\\\ \\({}^{78}\\)Se & 33.3453 & 1.50423 & 0.466925 & 20 \\\\ \\({}^{78}\\)Sr & 40.4515 & 0.68491 & 0.125873 & 80 \\\\ \\({}^{80}\\)Sr & 43.9731 & 1.03228 & 0.146307 & 80 \\\\ \\({}^{82}\\)Sr & 48.9001 & 1.25575 & 0.173081 & 80 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: The fitting parameters of the present model. Figure 1: Calculated and observed moment of inertia \\(2\\varphi/\\hbar^{2}\\) vs. \\((\\hbar\\omega)^{2}\\) for yrast levels of some light nuclei. The dots represent experimental values. ## References * (1) J.A. Cameron _et al._, Phys. Lett. B **235**, 239 (1990). * (2) F. Brandolini _et al._, Nucl. Phys. A **642**, 387 (1998). * (3) E. Caurier, A.P. Zuker, A. Poves, and G. Martinez-Pinedo, Phys. Rev. C **50**, 225 (1994); E. Caurier, F. Nowacki and A. Poves, Phys. Rev. Lett. **95**, 042502 (2005). * (4) V. Velazquez, J.G. Hirsch, Y. Sun, Nucl. Phys. A **686**, 129 (2001). * (5) K. Hara, Y. Sun, and T. Mizusaki, Phys. Rev. Lett. **83**, 1922 (1999). * (6) J.A. Cameron _et al._, Phys. Rev. C **58**, 808 (1998). * (7) K. Hara, Y. 
Sun, Int. J. Mod. Phys. E **4**, 637 (1995); G.-L. Long and Y. Sun, Phys. Rev. C **63**, 021305 (2001). * (8) J. A. Sheikh, K. Hara, Phys. Rev. Lett. **82**, 3968 (1999). * (9) T. Tanaka, K. Iwasawa, and F. Sakata, Phys. Rev. C **58**, 2765 (1998). * (10) R.A. Sorenson, Nucl. Phys. A **269**, 301 (1976). * (11) S.M. Lenzi _et al._, Phys. Rev. C **56**, 1313 (1997). * (12) I. Ray _et al._, Nucl. Phys. A **678**, 258 (2000). * (13) H. A. Alhendi, H. H. Alharbi, S. U. El-Kameesy, \"_Improved Exponential model with pairing attenuation and the backbending phenomenon_\", nucl-th/0409065. (Submitted) * (14) P. C. Sood and A. K. Jain, Phys Rev. C **18**, 1906 (1978). * (15) A. Bohr and R. Mottelson, Nuclear structure. Voll II, Benjamin, W.A. Inc., New York, Amesterdam (1975). * (16) H. H. Alharbi, H. A. Alhendi, and S. U. El-Kameesy, J. Physics G, (to appear) * (17) Shan-Gui Zhou and Chunkai Zheng, Phys. Rev. C **55**, 2324 (1997). * (18) C.W. Ma and J.O. Rasmussen, Phys. Rev. C **9**, 1083 (1974). * (19) J.K. Tuli, Evaluated nuclear structure data file, Nucl. Instr. Meth. in Phys. Res. A **369**, 506 (1996).
Recent studies of the backbending phenomenon in medium light nuclei near \\(A\\sim 60\\) have greatly expanded our interest in how the single particle orbits are nonlinearly affected by the collective motion. As a consequence we have applied a modified version of the exponential model with the inclusion of pairing correlation to describe the energy spectra of the ground state bands and/or the backbending phenomenon in the mass region at \\(A\\sim 60\\). A firm conclusion is obtained concerning the validity of the proposed modified model in describing the backbending phenomenon in this region. Comparison with different theoretical descriptions is discussed. pacs: 21.10.-k, 21.60.+v, 21.90.+f
Provide a brief summary of the text.
arxiv-format/0509314v1.md
# Evidence for the Onset of Deconfinement from Longitudinal Momentum Distributions? Observation of the Softest Point of the Equation of State Marcus Bleicher Institut für Theoretische Physik, J. W. Goethe Universität, 60438 Frankfurt am Main, Germany Over the last years, a wealth of detailed data in the \\(20A-160A\\) GeV energy regime has become available. The systematic study of these data revealed surprising (non-monotonous) structures in various observables around \\(30A\\) GeV beam energy. The most notable irregular structures in that energy regime include * the sharp maximum in the K\\({}^{+}/\\pi^{+}\\) ratio [1; 2], * a step in the transverse momentum excitation function (as seen through \\(\\langle m_{\\perp}\\rangle-m_{0}\\)) [2; 3], * an apparent change in the pion per participant ratio [2] and * increased ratio fluctuations (due to missing data at low energies it is unknown if this is a local maximum or an ongoing increase of the fluctuations) [4]. It has been speculated that these observations hint at the onset of deconfinement already at \\(30A\\) GeV beam energy. Indeed, increased strangeness production [5] and enhanced fluctuations have long been predicted as a sign of QGP formation [6; 7; 8; 9; 10; 11] within different frameworks and observables. The suggestion of an enhanced strangeness to entropy ratio (\\(\\sim K/\\pi\\)) as an indicator for the onset of QGP formation was especially advocated in [12]. The high and approximately constant \\(K^{\\pm}\\) inverse slopes of the \\(m_{T}\\) spectra above \\(\\sim 30A\\) GeV - the 'step' - were also found to be consistent with the assumption of a parton \\(\\leftrightarrow\\) hadron phase transition at low SPS energies [13; 14]. Surprisingly, transport simulations (supplemented by recent lattice QCD (lQCD) calculations) have also suggested that partonic degrees of freedom might already lead to visible effects at \\(\\sim 30A\\) GeV [15; 16; 17]. Finally, the comparison of the thermodynamic parameters \\(T\\) and \\(\\mu_{B}\\) extracted from the transport models in the central overlap region [18] with the experimental systematics on chemical freeze-out configurations [19; 20; 21] in the \\(T-\\mu_{B}\\) plane also suggests that a first glimpse of a deconfined state might be possible around \\(10A-30A\\) GeV. In this letter, we explore whether similar irregularities are also present in the excitation function of longitudinal observables, namely rapidity distributions. Here we will employ Landau's hydrodynamical model [22; 23; 24; 25; 26; 27; 28]. This model entered the focus again after the most remarkable observation that the rapidity distributions at all investigated energies can be well described by a single Gaussian at each energy. The energy dependence of the width can also be reasonably described by the same model. For recent applications of Landau's model to relativistic hadron-hadron and nucleus-nucleus interactions the reader is referred to [29; 30; 31; 32; 33] (and Refs. therein). The main physics assumptions of Landau's picture are as follows: The collision of two Lorentz-contracted nuclei leads to full thermalization in a volume of size \\(V/\\sqrt{s_{\\rm NN}}\\). This justifies the use of thermodynamics and establishes the system size and energy dependence. Usually, a simple equation of state \\(p=c_{s}^{2}\\epsilon\\) with \\(c_{s}^{2}=1/3\\) (\\(c_{s}\\) denotes the speed of sound) is assumed. For simplicity, chemical potentials are not taken into account. 
From these assumptions follows a universal formula for the distribution of the produced entropy, determined mainly by the initial Lorentz contraction, and a Gaussian rapidity spectrum for newly produced particles. Under the condition that \\(c_{s}\\) is independent of temperature, the rapidity density is given by [25; 26]: \\[\\frac{dN}{dy}=\\frac{Ks_{\\rm NN}^{1/4}}{\\sqrt{2\\pi\\sigma_{y}^{2}}}\\,\\exp\\left(-\\frac{y^{2}}{2\\sigma_{y}^{2}}\\right) \\tag{1}\\] with \\[\\sigma_{y}^{2}=\\frac{8}{3}\\frac{c_{s}^{2}}{1-c_{s}^{4}}\\ln(\\sqrt{s_{\\rm NN}}/2m_{p})\\quad, \\tag{2}\\] where \\(K\\) is a normalisation factor and \\(m_{p}\\) is the proton mass. The model relates the observed particle multiplicity and distribution in a simple and direct way to the parameters of the QCD matter under consideration. Let us now analyze the available experimental data on rapidity distributions of negatively charged pions in terms of the Landau model. Fig. 1 shows the measured root mean square \\(\\sigma_{y}\\) of the rapidity distribution of negatively charged pions in central Pb+Pb (Au+Au) reactions as a function of the beam rapidity. The dotted line indicates the Landau model predictions with the commonly used constant sound velocity \\(c_{s}^{2}=1/3\\). The full line shows a linear fit through the data points, while the data points [3; 33; 34; 35] are depicted by full symbols. At first glance the energy dependence looks structureless. The data seem to follow a linear dependence on the beam rapidity \\(y_{p}\\) without any irregularities. However, the general trend of the rapidity widths is also well reproduced by Landau's model with an equation of state with a fixed speed of sound. Nevertheless, there seem to be systematic deviations. At low AGS energies and at RHIC, the experimental points are generally underpredicted by Eq. (2), while in the SPS energy regime Landau's model overpredicts the widths of the rapidity distributions. Exactly these deviations from the simple Landau picture allow us to gain information on the equation of state of the matter produced in the early stage of the reaction. By inverting Eq. (2) we can express the speed of sound \\(c_{s}^{2}\\) in the medium as a function of the measured width of the rapidity distribution: \\[c_{s}^{2}=-\\frac{4}{3}\\frac{\\ln(\\sqrt{s_{\\rm NN}}/2m_{p})}{\\sigma_{y}^{2}}+\\sqrt{\\left[\\frac{4}{3}\\frac{\\ln(\\sqrt{s_{\\rm NN}}/2m_{p})}{\\sigma_{y}^{2}}\\right]^{2}+1}\\quad. \\tag{3}\\] Let us now investigate the energy dependence of the sound velocities extracted from the data. Figure 2 shows the speed of sound as a function of beam energy for central Pb+Pb (Au+Au) reactions as obtained from the data using Eq. (3); a short numerical sketch of this extraction is given below, after the summary. The sound velocities exhibit a clear minimum (usually called the softest point) around a beam energy of \\(30A\\) GeV. A localized softening of the equation of state is a long predicted signal for the mixed phase at the transition energy from hadronic to partonic matter [36; 37; 38]. Therefore, we conclude that the measured data on the rapidity widths of negatively charged pions are indeed compatible with the assumption of the onset of deconfinement at the lower SPS energy range. However, presently we cannot rule out that an increased resonance contribution may also be the cause of the softening [39]. In conclusion, we have explored the excitation functions of the rapidity widths of negatively charged pions in Pb+Pb (Au+Au) collisions. 
* The rapidity spectra of pions produced in central nucleus-nucleus reactions at all investigated energies can be well described by single Gaussians. * The energy dependence of the width of the pion rapidity distribution follows the prediction of Landau's hydrodynamical model if a variation of the sound velocity is taken into account. * The speed of sound excitation function extracted from the data has a pronounced minimum (softest point) at \\(E_{\\rm beam}=30A\\) GeV. * This softest point might be due to the formation of a mixed phase indicating the onset of deconfinement at this energy. Figure 2: Speed of sound as a function of beam energy for central Pb+Pb (Au+Au) reactions as extracted from the data using Eq. (3). The statistical errors (not shown) are smaller than 3%. Figure 1: The root mean square \\(\\sigma_{y}\\) of the rapidity distributions of negatively charged pions in central Pb+Pb (Au+Au) reactions as a function of the beam rapidity \\(y_{p}\\). The dotted line indicates the Landau model prediction with \\(c_{s}^{2}=1/3\\), while the full line shows a linear fit through the data points. Data (full symbols) are taken from [3; 33; 34; 35]. The statistical errors given by the experiments are smaller than the symbol sizes. Systematic errors are not available. Further explorations of this energy domain is needed and can be done at the future FAIR facility and by CERN-SPS and BNL-RHIC experiments. ## Acknowledgements The author thanks C. Blume and M. Gazdzicki for fruitful and stimulating discussions. This work was supported by GSI, DFG and BMBF. This work used computational resources provided by the Center for Scientific Computing at Frankfurt (CSC). ## References * (1) S. V. Afanasiev _et al._ [The NA49 Collaboration], Phys. Rev. C **66** (2002) 054902 [arXiv:nucl-ex/0205002]. * (2) M. Gazdzicki _et al._ [NA49 Collaboration], J. Phys. G **30** (2004) S701 [arXiv:nucl-ex/0403023]. * (3) C. Blume, J. Phys. G: Nucl. Part. Phys. 31, S57 (2005) * (4) C. Roland [NA49 Collaboration], J. Phys. G **31** (2005) S1075. * (5) P. Koch, B. Muller and J. Rafelski, Phys. Rept. **142** (1986) 167. * (6) M. Bleicher, S. Jeon and V. Koch, Phys. Rev. C **62** (2000) 061902 [arXiv:hep-ph/0006201]. * (7) E. V. Shuryak and M. A. Stephanov, Phys. Rev. C **63** (2001) 064903 [arXiv:hep-ph/0010100]. * (8) H. Heiselberg and A. D. Jackson, Phys. Rev. C **63** (2001) 064904 [arXiv:nucl-th/0006021]. * (9) B. Muller, Nucl. Phys. A **702** (2002) 281 [arXiv:nucl-th/0111008]. * (10) M. Gazdzicki, M. I. Gorenstein and S. Mrowczynski, Phys. Lett. B **585**, 115 (2004) [arXiv:hep-ph/0304052]. * (11) M. I. Gorenstein, M. Gazdzicki and O. S. Zozulya, Phys. Lett. B **585**, 237 (2004) [arXiv:hep-ph/0309142]. * (12) M. Gazdzicki and M. I. Gorenstein, Acta Phys. Polon. B **30**, 2705 (1999). * (13) M. I. Gorenstein, M. Gazdzicki and K. A. Bugaev, Phys. Lett. B **567**, 175 (2003) [arXiv:hep-ph/0303041]. * (14) Y. Hama, F. Grassi, O. Socolowski, T. Kodama, M. Gazdzicki and M. Gorenstein, Acta Phys. Polon. B **35** (2004) 179. * (15) H. Weber, C. Ernst, M. Bleicher _et al._, Phys. Lett. B **442**, 443 (1998). * (16) E. L. Bratkovskaya _et al._, Phys. Rev. Lett. **92**, 032302 (2004) * (17) E. L. Bratkovskaya _et al._, Phys. Rev. C **69**, 054907 (2004) * (18) L. V. Bravina _et al._, Phys. Rev. C **60**, 024904 (1999), Nucl. Phys. A **698**, 383 (2002). * (19) P. Braun-Munzinger and J. Stachel, Nucl. Phys. A **606**, 320 (1996). * (20) P. Braun-Munzinger and J. Stachel, Nucl. Phys. A **638**, 3 (1998). * (21) J. 
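A short numerical sketch of the extraction behind Figure 2, i.e. Eqs. (2) and (3) applied to a measured rapidity width at a given \\(\\sqrt{s_{\\rm NN}}\\) (illustrative only; the experimental \\(\\sigma_{y}\\) values are inputs taken from the cited data):

```python
import numpy as np

M_P = 0.938  # proton mass in GeV

def sigma_y_landau(sqrt_s_nn, cs2=1.0 / 3.0):
    """Landau-model rapidity width, Eq. (2); sqrt_s_nn in GeV."""
    L = np.log(sqrt_s_nn / (2.0 * M_P))
    return np.sqrt(8.0 / 3.0 * cs2 / (1.0 - cs2**2) * L)   # 1 - cs2**2 = 1 - c_s^4

def cs2_from_width(sqrt_s_nn, sigma_y):
    """Speed of sound squared from the measured width, Eq. (3)."""
    a = 4.0 / 3.0 * np.log(sqrt_s_nn / (2.0 * M_P)) / sigma_y**2
    return -a + np.sqrt(a**2 + 1.0)

# usage: cs2_from_width(sqrt_s_nn_values, measured_sigma_y_values)
```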
Cleymans and K. Redlich, Phys. Rev. C **60**, 054908 (1999). * (22) E. Fermi, Prog. Theor. Phys. **5**, 570 (1950). * (23) L. D. Landau, Izv. Akad. Nauk Ser. Fiz. **17**, 51 (1953). * (24) S. Z. Belenkij and L. D. Landau, Usp. Fiz. Nauk **56**, 309 (1955). * (25) E. V. Shuryak, Yad. Fiz. **16**, 395 (1972). * (26) P. Carruthers, Annals N.Y.Acad.Sci. 229, 91 (1974). * (27) P. Carruthers and M. Doung-van, Phys. Rev. D **8**, 859 (1973). * (28) P. Carruthers, LA-UR-81-2221 * (29) E. L. Feinberg, Z. Phys. C **38** (1988) 229. * (30) J. Stachel and P. Braun-Munzinger, Phys. Lett. B **216**, 1 (1989). * (31) P. Steinberg, arXiv:nucl-ex/0405022. * (32) M. Murray, arXiv:nucl-ex/0404007. * (33) G. Roland, Talk presented at Quark Matter 2004, see proceedings. * (34) J.Klay _et al._ [E895 Collaboration], Phys. Rev. C **68**, 054905 (2003) * (35) I.G. Bearden _et al._ [Brahms Collaboration], Phys. Rev. Lett. **94**, 162301 (2005) * (36) C. M. Hung and E. V. Shuryak, Phys. Rev. Lett. **75**, 4003 (1995) [arXiv:hep-ph/9412360]. * (37) D. H. Rischke, Y. Pursun, J. A. Maruhn, H. Stoecker and W. Greiner, Heavy Ion Phys. **1**, 309 (1995) [arXiv:nucl-th/9505014]. * (38) J. Brachmann, A. Dumitru, H. Stoecker and W. Greiner, Eur. Phys. J. A **8**, 549 (2000) [arXiv:nucl-th/9912014]. * (39) R. Hagedorn, Nuov. Cim. Suppl. 3, 147 (1965); J. Rafelski and R. Hagedorn, Bielefeld Symp., ed. H. Satz, pp. 253 (1980)
We analyze longitudinal pion spectra from \\(E_{\\rm lab}=2A\\) GeV to \\(\\sqrt{s_{\\rm NN}}=200\\) GeV within Landau's hydrodynamical model. From the measured data on the widths of the pion rapidity spectra, we extract the sound velocity \\(c_{s}^{2}\\) in the early stage of the reactions. It is found that the sound velocity has a local minimum (indicating a softest point in the equation of state, EoS) at \\(E_{\\rm beam}=30A\\) GeV. This softening of the EoS is compatible with the assumption of the formation of a mixed phase at the onset of deconfinement.
# A decision support system for ship identification based on the curvature scale space representation

## 1 Introduction

In recent years, there has been a sustained increase in the deployment of imaging sensors for surveillance and intelligence operations in naval scenarios, with sensors installed either on the ground or aboard naval platforms. This increase in the operational sensor base highlights the need to reduce to a minimum the load of visual tasks performed by the system operator, in order to decrease the cost and to increase the reliability and the availability of the system. A visual task of obvious importance is that of ship identification, which appears almost ubiquitously in naval surveillance operations. For years, this identification operation has been carried out by trained personnel, visually comparing the acquired silhouette to the reference ones stored in a database. All-visual identification is an error-prone operation requiring a painstaking effort, with operational and budgetary implications that could seriously hamper the increase of observation units at the pace required by current needs. In this context, the need becomes apparent for a system with a high degree of automation that can assist the operator during the identification process, reducing the time to complete the task while simultaneously improving the reliability of the system.

In this paper, we describe the characteristics of a computer-based ship identification assistant developed by us, and report the performance figures obtained on actual operational imagery. The input data for the system is a silhouette of the vessel to be identified, which has been acquired from an approximate side view of the object. This silhouette is preprocessed and compared to those stored in a database, retrieving a small number of potential matches ranked by their similarity to the target silhouette. This set of potential matches is presented to the system operator, who then makes the final ship identification. The system must cope with the large number of classes (\\(>1000\\), taking only military vessels into account) involved in this problem, and must be capable of operating on the imagery provided by different types of sensors (CCD, FLIR, image intensifier), acquired under changing illumination and atmospheric conditions at variable observation ranges. Moreover, the system must be tolerant of small variations in the observation angle, which lead to silhouettes that do not correspond to a perfect side view of the object. Finally, the system is required to produce its result in a limited amount of time (established at around 1 minute, for execution on a commercial PC platform) to ensure operational capacity.

To solve the above problem while meeting the necessary requirements, a system based on the use of the curvature scale space (CSS) has been devised. The CSS transform [1, 2], which is part of the MPEG-7 standard, provides a robust means to describe closed contours, and has been successfully used in several recognition tasks [3, 4], showing the ability to discriminate between a large number of classes while simultaneously dismissing slight variations in shape that are perceptually irrelevant but unavoidable in any operational scenario. The CSS representation describes the evolution of the locations of the zero crossings of the silhouette curvature at different scales, which are obtained by convolving the silhouette with Gaussians of different variances.
The smoothing operation induced by Gaussian convolution reduces both the sizes and the depths of the silhouette concavities and convexities. This reduction makes pairs of neighboring zero crossings approach each other, until they collapse once a certain degree of Gaussian smoothing is applied. Representing the evolution of the silhouette zero crossing locations with varying Gaussian smoothing yields a lobed figure, which is known as the CSS image. For a thorough mathematical treatment of the CSS representation, refer to [2, 3]. The (x, y) coordinates of each lobe maximum in the CSS image represent, respectively, the location on the silhouette and the Gaussian kernel variance at which a collapse of neighboring zero crossings has occurred, and constitute useful features for classification in the standard CSS approach.

Despite the power and robustness of this method, it has certain characteristics that render it inadequate for direct application to the problem to be solved. This has led us to incorporate certain modifications into the standard CSS method, which, in our environment, have led to a significant increase in performance. Most notably, the CSS representation has been found to be unstable for a substantial number of vessel shapes. In these cases, slight variations in shape between the target and the model silhouettes have been found to induce abrupt changes such as lobe splitting in the corresponding CSS representations, thus precluding its use for reliable classification. To overcome this problem, the direct use of curvature zero crossings was abandoned in favor of curvature extrema (maxima/minima), which have been found to be significantly more robust. Also, the use of local curvature was replaced with the more robust concept of lobe concavity, leading to a significant gain in performance. These modifications to the standard approach, together with some additional changes, are detailed in Section 2. Section 3 provides an outline of the complete algorithm, whereas Section 4 shows the classification results on operational imagery, which prove the high performance and robustness of the developed method. Finally, some conclusions are drawn in Section 5.

## 2 Drawbacks of the Standard CSS Representation and Proposed Modifications: The CCSS Representation

In this section, we describe a number of problems that preclude the direct application of the CSS representation to our problem, together with the proposed solutions. This has led to the implementation of a modified representation, which will be referred to in this paper as the Concavity-Convexity Scale Space (CCSS) representation. The major modifications are the tracking of curvature extrema instead of zero crossings, the choice of the lobe concavity concept instead of local curvature, the application of concavity-convexity thresholding to remove the effect of shallow concavities, and the use of deck projection instead of arc length to describe feature location.

### Instability of the CSS representation: the lobe splitting problem. Enhanced robustness by use of curvature extrema

A basic requirement for the silhouette representation is that it has to remain stable under slight variations in shape, such as those commonly found between the silhouette of a target and that of its associated model. However, it has been found that the above requirement is not fulfilled for the CSS representation of a substantial number of vessel silhouettes.
In those cases, small variations in shape, which are not perceptually relevant, can induce significant differences in the evolution of the zero crossings, which end up merging with different neighbors in the target and model silhouettes. This yields CSS representations which are widely different in shape, despite the remarkable similarity of target and model in visual terms. In Fig. 1, a typical example is presented, where the visual similarity between the preprocessed silhouettes of the warship image and the corresponding model can be appreciated. In principle, it would be expected that these similar shapes would result in CSS representations having a high degree of resemblance. The resulting CSS images are shown in Fig. 2, where a notable difference in the disposition of lobes A and B becomes evident.

The cause of this divergent behavior can be easily traced back by examining the evolution of the zero crossing locations in both shapes. Fig. 3 shows the target and model silhouettes at three representative stages of the evolution, corresponding to convolution with Gaussians of increasing variance \\(\\sigma_{A}\\), \\(\\sigma_{B}\\) and \\(\\sigma_{C}\\). The locations of the curvature zero crossings responsible for the generation of lobes A, B and C are also marked in each drawing. For the first selected stage in the evolution (variance \\(\\sigma_{A}\\)), six curvature zero crossings (labeled from A to F) are found in the deck region of both silhouettes, delimiting a sequence of three concavities interleaved with two convexities. The visual appearance of both shapes is remarkably similar. At the next stage of the evolution (variance \\(\\sigma_{B}\\)), the E- and F-zero crossings have joined together in both the target and the model silhouettes, giving rise to lobe C in both the target and the model CSS images. The appearance of both shapes at this point in the evolution is structurally identical, although with somewhat less prominent concavities and convexities in the target silhouette. Further evolution of the target silhouette causes zero crossings B and C to collapse, giving rise to lobe B in the target CSS image. Finally, zero crossings A and D merge, forming lobe A in the target CSS image. The evolution of the model silhouette at large smoothing factors is exactly the opposite, causing the collapse, respectively, of zero crossings A and B, and of C and D, leading to the formation of two separate lobes.

Fig. 2: Target and model CSS images with the lobe splitting problem

Fig. 3: Target and model silhouettes at three representative stages of their evolution

A considerable proportion of the training cases under analysis were affected to various degrees by this problem, pinpointing the need to seek a more stable silhouette representation. In this context, it is important to note that curvature extrema are known to be more stable than curvature inflection points (zero crossings) [5, 6], as they explicitly mark concavities or convexities in the shape. This sequence of concavities and convexities is generally preserved between a target and its corresponding model silhouette, even in cases where the lobe splitting problem appears in the CSS representation, as can be observed in Fig. 2 and Fig. 3. Hence, tracking of curvature extrema, instead of zero crossings, is proposed as a means to improve the silhouette representation.
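As a rough illustration of the quantities involved (a sketch under our own simplifying assumptions, using a toy contour and standard numpy/scipy routines rather than the system's actual code), the following computes the curvature of a Gaussian-smoothed closed contour and locates both its zero crossings and its extrema:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature(x, y, sigma):
    """Curvature of a closed contour (x, y) after Gaussian smoothing with std 'sigma'."""
    xs = gaussian_filter1d(x, sigma, mode="wrap")
    ys = gaussian_filter1d(y, sigma, mode="wrap")
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def zero_crossings(k):
    """Indices where the curvature changes sign (inflection points tracked by CSS)."""
    return np.where(np.sign(k[:-1]) * np.sign(k[1:]) < 0)[0]

def extrema(k):
    """Indices of local curvature maxima/minima (the features tracked instead)."""
    left, right = np.roll(k, 1), np.roll(k, -1)
    return np.where(((k > left) & (k > right)) | ((k < left) & (k < right)))[0]

# Toy closed contour (a bumpy ellipse standing in for a silhouette).
t = np.linspace(0, 2 * np.pi, 512, endpoint=False)
x, y = 3 * np.cos(t), np.sin(t) + 0.15 * np.sin(7 * t)
for s in (1.0, 4.0, 16.0):   # increasing smoothing (Gaussian std), i.e. coarser scales
    k = curvature(x, y, s)
    print(s, len(zero_crossings(k)), len(extrema(k)))
```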
This alternative representation also provides additional information concerning the curvature sign between neighboring zero crossings, which was not explicitly taken into account in the standard CSS representation, and which can be used to advantage in the identification task. Fig. 4 shows the improvement in stability obtained with the new representation, based on curvature extrema.

Fig. 4: CSS representation (black), and extrema CSS representation: maximum (white) and minimum CSS (dark gray)

### Model-to-target variations of the silhouette's arc length. Enhanced stability by deck projection

The arc length of the ship's closed silhouette is normalized before comparison in both target and model. As the bottom part of the silhouette is usually featureless, the relative weight of the deck arc length will increase according to the level of detail of the deck features. Due to this fact, a feature at a certain deck position will appear at different locations (it will be shifted) in the maximum and minimum CSS images depending on the level of detail contained in the silhouette. In Fig. 5 this fact can be appreciated: the target-extracted silhouette is more detailed than the model one. In Fig. 6, panels a-b, the CSS images are shown for both cases, clearly revealing a shift in the corresponding curves. Fig. 7 displays the target and model silhouettes at an intermediate stage of the evolution, showing the differences in the normalized arc length location of the main antenna on the deck. A solution to this problem can be obtained simply by projecting the locations of the curvature extrema onto horizontal coordinates of the deck, which are independent of arc length and, consequently, independent of the silhouette level of detail. In this sense, curvature extrema should be easier to compare and the results should be more accurate. Target and model extrema CSS images with deck projection are displayed in Fig. 6, panels c-d, showing a significant reduction in the problematic shift.

Fig. 5: Original target image, associated extracted silhouette and pre-processed model image

### Locality of the curvature measure. Improved robustness by means of lobe concavity

Curvature is inherently a local measure. Small contour variations, especially at high smoothing degrees, can displace a curvature extremum almost freely along a section of its arc, which makes it a poor way of describing lobe characteristics. In order to overcome this problem, we have replaced the value of the curvature extremum, as a means to describe a single concavity/convexity, with another parameter (C in Fig. 8) related to the distance of the curve to the line defined by the lobe-delimiting zero crossings (\\(Z_{0}\\), \\(Z_{1}\\)). In particular, we describe the arc by means of both the location and the distance to the zero crossing line of the arc point that lies at maximum distance from this line (see Fig. 8). These descriptors correlate better with the macroscopic appearance of the arc, and have proven to be more robust against small variations in shape. A variation of these features, in which the mean instead of the maximum deviation is considered, has already been proposed in [7] as a means to filter out shallow concavities. The improvement in identification performance obtained when replacing curvature by concavity has been experimentally established, yielding a remarkable increase in the correct identification ratio.
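As an illustration of this concavity measure, the following sketch (our own simplified rendering, not the authors' code) computes, for the arc between two neighboring zero crossings, the point farthest from the chord joining them and its signed distance; the sign is taken from the curvature at that point so that concavities and convexities can be distinguished.

```python
import numpy as np

def lobe_concavity(arc_pts, z0, z1, curvature_at):
    """Location and signed depth of the arc delimited by zero crossings z0 and z1.

    arc_pts      : (n, 2) array of contour points between the two zero crossings
    z0, z1       : (2,) endpoints of the chord (the zero-crossing line)
    curvature_at : callable giving the curvature at an arc point index
    """
    chord = z1 - z0
    chord = chord / np.linalg.norm(chord)
    # Perpendicular distance of every arc point to the chord.
    rel = arc_pts - z0
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
    i_max = int(np.argmax(dist))
    depth = dist[i_max]
    # Sign the depth with the curvature so concavities and convexities differ.
    if curvature_at(i_max) < 0:
        depth = -depth
    return i_max, depth

# Toy usage: a shallow circular arc over a unit chord (values illustrative only).
t = np.linspace(0, np.pi, 50)
arc = np.stack([0.5 - 0.5 * np.cos(t), 0.2 * np.sin(t)], axis=1)
print(lobe_concavity(arc, arc[0], arc[-1], lambda i: 1.0))  # -> (index near mid-arc, ~0.2)
```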
Using concavity instead of curvature in the synthesis of CSS images gives rise to our proposed representation: Concavity-Convexity Scale Space (CCSS) images. Curves in this representation are more stable than those in CSS images, solving the problem of curvature extrema displacement mentioned previously. Fig. 9 visually compares CSS and CCSS images for a selected warship, highlighting the structural differences between both types of representations.

Fig. 6: Comparison between target (gray) and model (black) in the extrema CSS image, with and without deck projection

Fig. 7: Curve convolution and relative position of the selected maximum

### The effect of shallow concavities

This problem was previously analyzed in [7]. The present work proposes a new approach in accordance with the characteristics of our own specific environment. Here, the problem appears in its most acute form in long straight regions of the warship silhouette. In Fig. 10 the silhouettes of two problematic vessels are shown. Both silhouettes are identical, except for a minute 1-pixel step located on the straight section of the deck of the second warship. In Fig. 11, traditional CSS representations for the warships shown in Fig. 10 are depicted. The slight variation in the deck mentioned above results in the appearance of a spurious lobe of considerable size, which obviously hinders any further attempt at comparison. If extrema CCSS images are displayed for both ships, some spurious curves are also obtained, as can be observed in Fig. 12. These curves have proven to be a cause of misclassification and need to be filtered out. This can be achieved by requiring that, to be relevant, a CCSS curve must be associated with a sufficiently important concavity (convexity) in the corresponding warship silhouette. Accordingly, CCSS points with an associated lobe concavity below an empirically determined threshold are filtered out prior to classification. In this context it is worth mentioning that the use of lobe concavity, with its enhanced stability over that of curvature values, has significantly contributed to the reliability and robustness of this filtering step. In Fig. 13 the concavity-convexity filtered images corresponding to both ships are presented. As can be seen, the filtering process has completely eliminated the curves corresponding to the spurious lobe, while the rest of the CCSS structure remains almost unaltered. After filtering, both images look remarkably similar.

Fig. 10: Comparison between two warships affected by the problem of shallow concavities

Fig. 11: Traditional CSS images for the warships of Fig. 10

## 3 Description of the Method

### Silhouette preprocessing

Prior to computing the concavity-convexity scale space representation (CCSS), both the target and the model silhouettes undergo a preprocessing procedure. The preprocessing of the model silhouette can be carried out off-line, prior to the identification operation. The goal of the preprocessing stage is to increase the stability of the CCSS representation by filtering out noise-derived shape artifacts or deck objects (such as small antennas) that will usually not be distinguishable in the target at the operational observation distances and which, in any case, may often be subject to modifications during the vessel's lifetime. The preprocessing stage consists of the sequential application of three steps.
Firstly, a morphological filtering is applied with a threefold aim: to remove bright elongated objects from the deck (small antennas, noise-induced bumps, etc.), to fill in small streaks on the deck, and to eliminate holes in the shape prior to contour extraction. The first operation is carried out by means of a morphological opening, which has been modified to avoid the creation of isolated regions. To achieve this, a morphological reconstruction method has been used, similar to that described in [8]. The second operation is performed using a classical morphological closing. The third operation is solved by means of a 4-connected background extraction followed by the determination of the complementary region. Fig. 14 summarizes the preprocessing stage by displaying the silhouettes of the raw target and model as well as the resulting ones obtained after applying the morphological filtering. As can be observed, the filtering process noticeably increases the perceptual similarity between both shapes.

Figure 12: Extrema CCSS images for original and 1 pixel-modified silhouettes

Figure 13: Extrema CCSS images for original and 1 pixel-modified silhouettes with concavity-convexity thresholding

Once the morphological filtering has been applied, the contour is extracted using a classical contour following algorithm with backtracking [9]. Finally, the locations of the bow and the stern are determined, and the silhouette is normalized with respect to arc length.
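A minimal sketch of this preprocessing chain, using generic morphological operators from scipy as an illustrative stand-in for the system's actual implementation (structuring-element sizes are placeholders, and the reconstruction-based opening of [8] is replaced here by a plain opening), might look as follows.

```python
import numpy as np
from scipy import ndimage

def preprocess_mask(mask):
    """Clean a binary ship mask before contour extraction.

    mask : 2-D boolean array, True on the ship region.
    """
    # 1) Opening to remove thin bright protrusions (small antennas, noise bumps);
    #    the paper uses a reconstruction-based variant to avoid isolated regions.
    opened = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    # 2) Closing to fill in small streaks on the deck.
    closed = ndimage.binary_closing(opened, structure=np.ones((3, 3)))
    # 3) Remove interior holes so that a single closed contour can be followed
    #    (the paper does this via 4-connected background extraction).
    filled = ndimage.binary_fill_holes(closed)
    return filled
```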
### Concavity-Convexity Scale Space (CCSS)

Concavity-convexity curves are computed by applying a two-pass method at each smoothing scale. In the first pass, the curvature at each silhouette point is computed and the curvature zero crossings are located. In the second pass, the line defined by each pair of neighboring zero crossings is determined. This pair of points delimits a silhouette arc corresponding to a single concavity or convexity. The distance between each point of the silhouette arc and the line defined by the zero crossing pair is computed, retaining the location and the distance of the point farthest from the line. This distance value is signed according to the curvature at that point, thus enabling the distinction between concavities and convexities.

### Matching process and computation of the model-to-target assignment cost

The aim of the matching process is to obtain an optimal assignment of the CCSS points of the target image to those of a model. Under this assignment, a matching cost can be computed, numerically describing the similarity of the target CCSS curves to those of a certain model. Applying this procedure to all models in the database, a matching cost can be associated with each target-model pair, which can then be sorted in terms of similarity to the target. The models ranked among the top ones are then presented to the operator, who makes the final decision. The matching process is performed in two steps. In the first step, the curves corresponding to the model and the target are relatively shifted to optimize their correspondence. In the second step, all potential assignments of target and model curves are systematically explored, computing the cost of each assignment. The assignment with the minimum cost is the one kept. A more detailed description of these steps is presented below.

**i) Shift correction**

Sometimes, visibility and noise perturbations can induce slight inaccuracies in the determination of the bow and stern. This can cause a shift in the CCSS curves, which subsequently distorts the computation of the matching cost between target and model. Hence, this shift should be corrected before proceeding with the matching process. Obviously, this shift must be checked to be homogeneous for all curves of the target image, to avoid sweeping out real target-to-model horizontal differences in the CCSS curves. First, the maximum CCSS curves, for both the model and the target CCSS images, are analyzed following a line-by-line approach. Each point in the target CCSS image is associated with a point in the same line of the model CCSS image, where the criterion is that point-to-point horizontal distances are to be minimized. This process is repeated for each point in the target CCSS image, obtaining an average distance to the model curves.

Fig. 14: Target and model silhouettes before and after morphological filtering

To improve the reliability of the obtained shift correction, we repeat the process for the minimum CCSS images, obtaining a second distance average. These two distances are averaged and the result is applied to correct for the shift between target and model curves. Application of the shift correction procedure to non-corresponding target and model curves will generally lead to large differences between the two computed distances, meaningless average shift corrections and, finally, penalization of these incorrect associations.

**ii) Matching cost calculation**

Once the horizontal distance has been corrected, the core of the matching process can be applied. CCSS images are considered at this point as a set of rows of points of interest. The global matching cost is obtained from partial point-to-point row matching costs, where each step is repeated both for the maximum and for the minimum CCSS images. For each row in the target and model CCSS images, two lists of relevant maximum (minimum) points are extracted to be compared. Concavity values have proven to be essential for the matching cost estimation; therefore a pair (x, c) is stored for each maximum (minimum) point in the current row. The comparison cost is then obtained by using an exhaustive method to explore the solution tree: the Recursive Matching Matrix (RMM). The set of relevant row points for the target image will be denoted \\(I\\), while the set of relevant row points for the model will be referred to as \\(M\\). Therefore, I(i, x) represents the horizontal position of the i\\({}^{\\text{th}}\\) extremum in the current row and I(i, c) represents the concavity value of the i\\({}^{\\text{th}}\\) extremum in the current row. Similar considerations apply to M. To exhaustively search the space of potential matches, the RMM must have a number of columns that exceeds or equals the number of rows. Let us denote by \\(|\\text{I}|\\) and \\(|\\text{M}|\\), respectively, the cardinality of the sets I and M. If \\(|\\text{I}|>|\\text{M}|\\), the RMM will be (\\(|\\text{M}|\\) x \\(|\\text{I}|\\)); otherwise, the RMM will be (\\(|\\text{I}|\\) x \\(|\\text{M}|\\)). Without loss of generality, let us suppose that the first case holds (the case \\(|\\text{M}|\\geq|\\text{I}|\\) can be treated similarly). Each cell (i, j) of the RMM stores the result of evaluating the following expression: \\[\\text{RMM}(i,j)=\\alpha\\,|I(i,x)-M(j,x)|+(1-\\alpha)\\,|I(i,c)-M(j,c)|, \\tag{1}\\] where \\(\\alpha\\) represents a scale factor between the distance and concavity terms. A value of \\(\\alpha=0.2\\) has demonstrated good empirical results.
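Since the pseudocode of Fig. 15 is not reproduced here, the sketch below is only our reading of the row-matching step: an exhaustive search over assignments of the shorter point list into the longer one, assumed here to preserve left-to-right order, scoring each pairing with the cell cost of Eq. (1) and keeping the minimum total. Function and variable names are ours, not the authors'.

```python
from functools import lru_cache

ALPHA = 0.2  # scale factor between distance and concavity, as in Eq. (1)

def cell_cost(p, q):
    """Cost of pairing target point p = (x, c) with model point q = (x, c), Eq. (1)."""
    return ALPHA * abs(p[0] - q[0]) + (1 - ALPHA) * abs(p[1] - q[1])

def row_matching_cost(I, M):
    """Minimum total cost of assigning every point of the shorter list to a distinct
    point of the longer list, preserving left-to-right order (our assumption)."""
    short, long_ = (I, M) if len(I) <= len(M) else (M, I)

    @lru_cache(maxsize=None)
    def best(i, j):
        if i == len(short):                       # all short-list points assigned
            return 0.0
        if len(long_) - j < len(short) - i:       # not enough candidates left
            return float("inf")
        paired = cell_cost(short[i], long_[j]) + best(i + 1, j + 1)
        skipped = best(i, j + 1)                  # leave long_[j] unmatched
        return min(paired, skipped)

    return best(0, 0)

# Toy usage with (x, c) pairs; values are illustrative only.
target_row = ((0.10, 5.0), (0.55, 2.0))
model_row = ((0.12, 4.5), (0.30, 0.5), (0.57, 2.2))
print(row_matching_cost(target_row, model_row))
```

The penalty term of Eq. (2) below, which accounts for curves without a counterpart, would then be added on top of this optimum.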
Figure 15: Pseudocode for Recursive Matching Matrix cost calculation

The algorithm in Fig. 15 provides the optimum cost for row matching. A penalty term still needs to be added to the cost to take into account those curves in the target that have no correspondence in the model, and vice versa: \\[\\mathit{OPTCost}_{f}=\\mathit{OPTCost}+|M-N|\\cdot\\sigma, \\tag{2}\\] where \\(\\mathit{OPTCost}\\) is the cost obtained by applying the algorithm described in Fig. 15, \\(M\\) and \\(N\\) are the numbers of relevant points in the two rows being compared, and \\(\\sigma\\) is a gain factor. Empirically, good results have been obtained with a value of \\(\\sigma=70\\alpha\\), where \\(\\alpha\\) is the scale factor between distance and concavity mentioned previously. By repeating this procedure for each row of the current CCSS images and then adding up the resulting costs, the total matching cost is obtained for the maximum-CCSS image. The minimum-CCSS image matching cost is obtained in a similar way. By adding the maximum and minimum partial costs, a global target-model cost is obtained.

### Solution set sorting

Once the global target-to-model cost has been obtained for each model in the search space, the models are sorted in terms of their similarity to the target to be identified. The models ranked first in this sorted list are displayed to the operator, who makes the final identification decision.

## 4 Experimental Results

The proposed method has been applied using a search database composed of 1129 vessel silhouettes generated from scanned line drawings of world warships [10]. The developed decision support system prototype allows the user to extract a vessel silhouette from a real image and present it as input data to the search engine. The output of the system is the set of 1129 silhouettes, sorted in terms of the probability of matching the original image. A set of 50 warship images acquired in real operational conditions has been used to test the system. The silhouettes corresponding to each of these images were extracted and injected into the system, which sorted the 1129-vessel database in terms of similarity to the input silhouette. Table 1 lists the frequencies at which the matching model appears in a given position of the sorted database. As can be seen, the correct warship models appear with a probability close to 80% in the first two positions of the sorted database. All input silhouettes were correctly identified within the first six positions of the database.

| | 1st pos. | 2nd pos. | 3rd pos. | 4th pos. | 5th pos. | 6th pos. | Other |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Relative Frequency | 30 | 9 | 3 | 4 | 3 | 1 | 0 |
| Relative Cumulative Frequency | 30 | 39 | 42 | 46 | 49 | 50 | 50 |
| Absolute Frequency | 0.6 | 0.18 | 0.06 | 0.08 | 0.06 | 0.02 | 0.0 |
| Absolute Cumulative Frequency | 0.6 | 0.78 | 0.84 | 0.92 | 0.98 | 1.0 | 1.0 |

Table 1: Frequencies table showing system results for the 50-warship test set

Finally, Fig. 16 shows four examples of warship silhouette identification and the four most similar results for each of the warships. Each column corresponds to a different target image, where the top rows show the original image and the extracted silhouette, while the bottom rows display the four silhouettes of the models that are most likely to match the original vessel. The correct warships appear highlighted in the 1st, 1st, 2nd and 3rd row, respectively, from left to right. In the last two cases, the identification could be considered to yield optimum results, as only models corresponding to the same class are ranked above the matching one. This fact can be visually checked in Fig. 16.

Fig. 16: Original images, extracted silhouettes and top-ranked results for warships from the test set

Fig. 17: Warship identification example for an infrared image

Fig. 18: Viewpoint angle effects in the CCSS method

The type of sensor used to capture the warship images has empirically been shown to have little relevance as long as the silhouette is visually recognizable. Test cases included some infrared (IR) and image-intensified (II) images, with which results of similar quality were achieved. Fig. 17 shows an IR test case and its four best results. The correct warship appears in the 2\\({}^{\\text{nd}}\\) place of the 1129-warship sorted output set. The first result corresponds to a ship of the same class. One of the most important properties inherited from the curvature scale space method is noise tolerance. This feature is essential in silhouette identification. Different system operators or silhouette extraction algorithms will extract different silhouettes, which can theoretically be modelled as a noise factor on the respective ideal warship boundary. The quality of the results obtained in this respect supports the soundness of the approach. The method has proven to be affected by large changes in the viewpoint angle. Best performance is obtained with an image capture viewpoint angle near 90\\({}^{\\text{o}}\\) with respect to the ship direction. Nevertheless, the system has demonstrated optimal behaviour for angle variations within \\(\\pm\\) 10\\({}^{\\text{o}}\\) with regard to the profile view. Fig. 18 shows the differences between the rotated and the ideal extracted silhouette, as well as their respective CCSS images. This rotation, which is larger than 10\\({}^{\\text{o}}\\), causes in this case the misidentification of the warship.

## 5 Conclusions

A new system to support the identification of military vessels has been reported. The system is based on an evolution of the Curvature Scale Space (CSS) method, which has led to a significant increase in the performance and robustness of the system. This novel approach, the Concavity-Convexity Scale Space (CCSS) representation, overcomes problems such as lobe splitting, model-to-target silhouette arc length variations, shallow concavities and curvature artifacts that preclude the direct use of the CSS representation to solve this problem.
This CCSS representation has been embedded in a complete identification system comprising the steps of silhouette preprocessing, computation of the CCSS representation, matching, and model database sorting. The system has been applied to operational imagery acquired with sensors operating in different spectral ranges, proving the high performance and robustness of the developed method.

## 6 Acknowledgements

The authors gratefully acknowledge the financial support provided by the Subdireccion General de Tecnologia y Centros (SDGTECEN) of the Spanish Ministry of Defence.

## References

* [1] A. P. Witkin, "Scale-space filtering", _Proc. IJCAI_, Karlsruhe, Germany, 1983.
* [2] F. Mokhtarian and A. K. Mackworth, "A theory of multi-scale curvature based shape representation for planar curves", IEEE Transactions on Pattern Analysis and Machine Intelligence, **vol. 8**, no. 1, pp. 34-43, 1986.
* [3] F. Mokhtarian, "Silhouette-based isolated object recognition through curvature scale space", IEEE Transactions on Pattern Analysis and Machine Intelligence, **vol. 17**, no. 5, pp. 539-544, 1995.
* [4] F. Mokhtarian, S. Abbasi and J. Kittler, "Robust and efficient shape indexing through curvature scale-space", _British Machine Vision Conference_, 1998.
* [5] H. Asada and M. Brady, "The curvature primal sketch", IEEE Transactions on Pattern Analysis and Machine Intelligence, **vol. 8**, pp. 2-14, 1986.
* [6] J. Sporring, X. Zabulis, P. E. Trahanias and S. Orphanoudakis, "Shape similarity by piecewise linear alignment", _Proceedings of the Fourth Asian Conference on Computer Vision (ACCV'00)_, Taipei, Taiwan, January 2000, pp. 306-311.
* [7] S. Abbasi, F. Mokhtarian and J. Kittler, "Enhancing CSS-based shape retrieval for objects with shallow concavities", Image and Vision Computing, **vol. 18**, pp. 199-211, 2000.
* [8] A. Banerji and J. Goutsias, "Detection of minelike targets using grayscale morphological image reconstruction", _SPIE vol. 2496_, 1995.
* [9] W. K. Pratt, _Digital Image Processing_, 2nd Ed., John Wiley & Sons, New York, 1991.
* [10] _Weyers warships of the world 2005/2007_, Bernard & Graefe, Bonn, 2005.
In this paper, a decision support system for ship identification is presented. The system receives as input a silhouette of the vessel to be identified, previously extracted from a side view of the object. This view could have been acquired with imaging sensors operating at different spectral ranges (CCD, FLIR, image intensifier). The input silhouette is preprocessed and compared to those stored in a database, retrieving a small number of potential matches ranked by their similarity to the target silhouette. This set of potential matches is presented to the system operator, who makes the final ship identification. The system makes use of an evolved version of the Curvature Scale Space (CSS) representation. In the proposed approach, it is curvature extrema, instead of zero crossings, that are tracked during silhouette evolution, hence improving robustness and enabling the system to cope successfully with cases where the standard CSS representation is found to be unstable. Also, the use of local curvature is replaced with the more robust concept of lobe concavity, with significant additional gains in performance. Experimental results on actual operational imagery prove the excellent performance and robustness of the developed method.

\\({}^{1}\\)Centro de Investigacion y Desarrollo de la Armada, Madrid, Spain

\\({}^{2}\\)SENER Ingenieria y Sistemas, S.A., Tres Cantos, Madrid, Spain

Keywords: CSS, Curvature Scale Space, CCSS, Concavity-Convexity Scale Space, pattern recognition, decision support system, silhouette classification, ship.
# Tail asymptotics for monotone-separable networks

Marc Lelarge

Department of Mathematics, University of California, Berkeley, CA 94720

[email protected]

November 6, 2021

## 1. Introduction

Consider the \\(GI/GI/1\\) single server queue: we denote \\(X_{n}=\\sigma_{n}-\\tau_{n}\\), where \\(\\{\\sigma_{n}\\}\\) and \\(\\{\\tau_{n}\\}\\) are independent and identically distributed (i.i.d.) non-negative random variables, \\(\\sigma_{n}\\) is the amount of service of customer \\(n\\) and \\(\\tau_{n}\\) is the inter-arrival time between customers \\(n\\) and \\(n+1\\). Assume that \\(\\mathbb{E}[X_{1}]<0\\); then the supremum of the random walk \\(S_{n}=X_{1}+\\cdots+X_{n}\\) defined by \\(M:=\\sup_{n\\geq 1}S_{n}\\) is finite almost surely and has the same distribution as the stationary workload of the single server queue. If we assume moreover that \\(\\mathbb{E}[\\exp(\\epsilon X_{1})]<\\infty\\) for some \\(\\epsilon>0\\), then the following asymptotics is standard: \\[\\lim_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}(M>x)=-\\theta^{*},\\quad\\text{where }\\theta^{*}=\\sup\\big{\\{}\\theta>0,\\,\\log\\mathbb{E}\\left[e^{\\theta X}\\right]<0 \\big{\\}}. \\tag{1.1}\\] Motivated by queueing applications, this case has been extensively studied in the literature and much finer estimates are available; see the works of Iglehart [10] and Pakes [12].

The main goal of this paper is to derive results analogous to (1.1) for networks. In the context of a network, we consider the maximal dater \\(Z\\), which is the time to empty the network when stopping further arrivals. Clearly, in the single server queue, the maximal dater corresponds to the workload. In the case of queues in tandem, it corresponds to the end to end delay. Our Theorem 2.2 gives the logarithmic tail asymptotics for the maximal dater of a monotone separable network. The main difficulty in our task is the absence of a closed-form formula for \\(Z\\). The proof of the theorem will proceed by deriving upper and lower bounds for monotone separable networks. This class, which was introduced by Baccelli and Foss in [3], contains several classical queueing network models such as generalized Jackson networks, max-plus networks, polling systems and multiserver queues. In this paper, we choose to put a particular emphasis on tandem queues, which fall in the class of open Jackson networks and in the class of open (max,plus) systems, both of which belong to the class of monotone separable networks. This case serves as a pedagogical example to apply our main theorem under various stochastic assumptions and it enables us to link our results with existing asymptotic results from the queueing literature.

The paper is structured as follows. In Section 2, we give the precise definition of a monotone separable network and its associated maximal dater. We then give the main result of this paper in Section 2.2. The case of queues in tandem is dealt with in great detail in Section 3. In particular, we show that a kind of phase transition is possible when the service times at both stations are dependent. We also link our result to the literature. Finally, technical proofs are deferred to Section 4.

## 2. Tail asymptotics for monotone-separable networks

In this paper, we consider open stochastic networks with a single input process \\(N\\), which is a marked point process with points \\(\\{T_{n}\\}\\) corresponding to exogenous arrival times and marks \\(\\{\\zeta_{n}\\}\\) describing the service times and routing decisions.
More precisely a stochastic network is described by the following framework (introduced by Baccelli and Foss [3]) * The network has a single input point process \\(N\\), with points \\(\\{T_{n}\\}\\); for all \\(m\\leq n\\in\\mathbb{Z}\\), let \\(N_{[m,n]}\\) be the \\([m,n]\\)-restriction of \\(N\\), namely the point process with points \\(\\{T_{\\ell}\\}_{m\\leq\\ell\\leq n}\\). * The network has a.s. finite activity for all finite restrictions of \\(N\\): for all \\(m\\leq n\\in\\mathbb{Z}\\), let \\(X_{[m,n]}(N)\\) be the time of last activity in the network, when this one starts empty and is fed by \\(N_{[m,n]}\\). We assume that for all finite \\(m\\) and \\(n\\) as above, \\(X_{[m,n]}(N)\\) is finite. We assume that there exists a set of functions \\(\\{f_{\\ell}\\}\\), \\(f_{\\ell}:\\mathbb{R}^{\\ell}\\times K^{\\ell}\\to\\mathbb{R}\\), such that: \\[X_{[m,n]}(N)=f_{n-m+1}\\{(T_{\\ell},\\zeta_{\\ell}),\\,m\\leq\\ell\\leq n\\}, \\tag{2.1}\\] for all \\(n,m\\) and \\(N=\\{T_{n}\\}\\), where the sequence \\(\\{\\zeta_{n}\\}\\) is that describing service times and routing decisions. **Example**.: Consider a \\(G/G/1/\\infty\\to./G/1/\\infty\\) tandem queue. Denote by \\(\\{\\sigma_{n}^{(i)}\\}\\) the sequence of service times in station \\(i=1,2\\) and \\(N=\\{T_{n}\\}\\) the sequence of arrival times at the first station. With the notation introduced above, we have \\(\\zeta_{n}=(\\sigma_{n}^{(1)},\\sigma_{n}^{(2)})\\) and the time of last activity is given by, \\[X_{[m,n]}(N)=\\sup_{m\\leq k\\leq n}\\left\\{T_{k}+\\sup_{k\\leq i\\leq n}\\sum_{j=k}^{ i}\\sigma_{j}^{(1)}+\\sum_{j=i}^{n}\\sigma_{j}^{(2)}\\right\\}. \\tag{2.2}\\] We refer to the Appendix for an explicit derivation of Equation (2.2). \\(X_{[m,n]}(N)\\) is simply the last departure time from the network, when only customers \\(m,m+1,\\ldots,n\\) enter the network. We say that a network described as above is monotone-separable if the functions \\(f_{n}\\) are such that the following properties hold for all input point process \\(N\\): 1. **Causality:** for all \\(m\\leq n\\), \\[X_{[m,n]}(N)\\geq T_{n};\\]2. **External monotonicity:** for all \\(m\\leq n\\), \\[X_{[m,n]}(N^{\\prime})\\geq X_{[m,n]}(N),\\] whenever \\(N^{\\prime}:=\\{T_{n}^{\\prime}\\}\\) is such that \\(T_{n}^{\\prime}\\geq T_{n}\\) for all \\(n\\); 3. **Homogeneity:** for all \\(c\\in\\mathbb{R}\\) and for all \\(m\\leq n\\) \\[X_{[m,n]}(N+c)=X_{[m,n]}(N)+c,\\] where \\(N+c\\) is the point process with points \\(\\{T_{n}+c\\}\\); 4. **Separability:** for all \\(m\\leq\\ell<n\\), if \\(X_{[m,\\ell]}(N)\\leq T_{\\ell+1}\\), then \\[X_{[m,n]}(N)=X_{[\\ell+1,n]}(N).\\] **Remark 2.1**.: Clearly, tandem queues belong to the class of monotone-separable networks. ### Stability and stationary maximal daters In this section, we introduce stochastic assumptions ensuring the stability of the network. More general results can be found in Baccelli and Foss [3] and we refer to it for the statements given in this section without proof. By definition, for \\(m\\leq n\\), the \\([m,n]\\) maximal dater is \\[Z_{[m,n]}(N):=X_{[m,n]}(N)-T_{n}.\\] Note that \\(Z_{[m,n]}(N)\\) is a function of \\(\\{\\zeta_{l}\\}_{m\\leq\\ell\\leq n}\\) and \\(\\{\\tau_{l}\\}_{m\\leq\\ell\\leq n}\\) only, where \\(\\tau_{n}=T_{n+1}-T_{n}\\). In particular, \\(Z_{n}:=Z_{[n,n]}(N)\\) is not a function of \\(N\\) (which makes the notation consistent). 
**Lemma 2.1**.: _[_3_]_ **Internal monotonicity of \\(X\\) and \\(Z\\)** Under the above conditions, the variables \\(X_{[m,n]}\\) and \\(Z_{[m,n]}\\) satisfy the internal monotonicity property: for all \\(N\\), \\(m\\leq n\\),_ \\[X_{[m-1,n]}(N) \\geq X_{[m,n]}(N),\\] \\[Z_{[m-1,n]}(N) \\geq Z_{[m,n]}(N).\\] In particular, the sequence \\(\\{Z_{[-n,0]}(N)\\}\\) is non-decreasing in \\(n\\). We define the _stationary maximal dater_ as \\[Z:=Z_{(-\\infty,0]}(N)=\\lim_{n\\to\\infty}Z_{[-n,0]}(N)\\leq\\infty.\\] **Example.** In the case of the tandem queues, the stationary maximal dater is given by: \\[Z=\\sup_{p\\leq q\\leq 0}\\left\\{\\sum_{k=p}^{q}\\sigma_{k}^{(1)}+\\sum_{k=q}^{0} \\sigma_{k}^{(2)}-(T_{0}-T_{p})\\right\\}, \\tag{2.3}\\] and \\(Z\\) is the stationary end to end delay of the network. **Lemma 2.2**.: _[_3_]_ **Subadditive property of \\(Z\\)** Under the above conditions, \\(\\{Z_{[m,n]}(N)\\}\\) satisfies the following subadditive property: for all \\(m\\leq\\ell<n\\), for all \\(N\\),_ \\[Z_{[m,n]}(N)\\leq Z_{[m,\\ell]}(N)+Z_{[\\ell+1,n]}(N).\\] We assume that the sequence \\(\\{\\tau_{n},\\zeta_{n}\\}_{n}\\) is a sequence of i.i.d. random variables. The following integrability assumptions are also assumed to hold (recall that \\(Z_{n}=Z_{[n,n]}(N)\\) does not depend on \\(N\\)): \\[\\mathbb{E}[\\tau_{n}]:=a<\\infty,\\quad\\mathbb{E}[Z_{n}]<\\infty.\\]Denote by \\(N^{0}=\\{T^{0}_{n}\\}\\) the degenerate input process with \\(T^{0}_{n}=0\\) for all \\(n\\). This degenerate point process plays a crucial role for the derivation of the stability condition. The following lemma follows from Lemma 2.2 in which we take as input point process \\(N^{0}\\) (note that the constant \\(\\gamma\\) defined below is denoted \\(\\gamma(0)\\) in [3] to emphasize the fact that the input point process is \\(N^{0}\\)). **Lemma 2.3**.: _[_3_]_ _Under the foregoing stochastic assumption, there exists a non-negative constant \\(\\gamma\\) such that_ \\[\\lim_{n\\to\\infty}\\frac{Z_{[-n,0]}(N^{0})}{n}=\\lim_{n\\to\\infty} \\frac{\\mathbb{E}\\left[Z_{[-n,0]}(N^{0})\\right]}{n}=\\gamma\\:a.s.\\] The main result on the stability region is the following: **Theorem 2.1**.: _[_3_]_ _Under the foregoing stochastic assumptions, either \\(Z=\\infty\\) a.s. or \\(Z<\\infty\\) a.s._ * _If_ \\(\\gamma<a\\)_, then_ \\(Z<\\infty\\) _a.s._ * _If_ \\(Z<\\infty\\) _a.s., then_ \\(\\gamma\\leq a\\)_._ A proof is given in Section 4.1, where we derive an upper bound and a lower bound that will be used for the study of large deviations. **Example.** In the case of tandem queues, the constant \\(\\gamma\\) is easy to compute. We have \\[\\lim_{n\\to\\infty}\\sup_{-n\\leq q\\leq 0}\\frac{\\sum_{k=-n}^{q}\\sigma^{(1)}_{k}+ \\sum_{k=q}^{0}\\sigma^{(2)}_{k}}{n}=\\max\\left(\\mathbb{E}[\\sigma^{(1)}_{1}], \\mathbb{E}[\\sigma^{(2)}_{1}]\\right).\\] Hence Theorem 2.1 gives the standard stability condition: \\(\\max\\left(\\mathbb{E}[\\sigma^{(1)}_{1}],\\mathbb{E}[\\sigma^{(2)}_{1}]\\right)< \\mathbb{E}[\\tau_{1}]\\). ### Moment generating function and tail asymptotics In the rest of the paper, we will make the following assumptions: * Assumption **(AA)** on the arrival process into the network \\(\\{T_{n}\\}\\): \\(\\{T_{n}\\}\\) is a renewal process independent of the service time and routing sequences \\(\\{\\zeta_{n}\\}\\). * Assumption **(AZ)**: the sequence \\(\\{\\zeta_{n}\\}\\) is a sequence of i.i.d. random variables, such that the random variable \\(Z_{0}\\) is light-tailed, i.e. 
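To make the tandem example concrete, the following sketch (our own illustration; the exponential distributions and parameter values are placeholders chosen to satisfy the stability condition) evaluates the maximal dater \\(Z_{[-n,0]}(N)\\) directly from Eq. (2.3) for simulated i.i.d. inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def tandem_maximal_dater(s1, s2, T):
    """Maximal dater Z_{[-n,0]}(N) for two queues in tandem, evaluated from Eq. (2.3).

    s1, s2 : service times at stations 1 and 2 for customers -n, ..., 0
    T      : arrival times T_{-n}, ..., T_0 (same indexing as s1 and s2)
    """
    n = len(s1)
    c1 = np.concatenate([[0.0], np.cumsum(s1)])   # c1[k] = s1[0] + ... + s1[k-1]
    suf2 = np.cumsum(s2[::-1])[::-1]              # suf2[q] = s2[q] + ... + s2[n-1]
    best = -np.inf
    for q in range(n):
        # maximize over p <= q of  (c1[q+1] - c1[p]) + suf2[q] - (T[-1] - T[p])
        p_term = np.max(T[:q + 1] - c1[:q + 1])
        best = max(best, c1[q + 1] + suf2[q] - T[-1] + p_term)
    return best

# Placeholder parameters chosen so that max(E[s1], E[s2]) < E[tau] (stability).
n = 500
s1 = rng.exponential(0.5, size=n)
s2 = rng.exponential(0.6, size=n)
T = np.concatenate([[0.0], np.cumsum(rng.exponential(1.0, size=n - 1))])
print(tandem_maximal_dater(s1, s2, T))   # one sample of the end-to-end delay Z
```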
for \\(\\theta\\) in a neighborhood of \\(0\\), \\[\\mathbb{E}[e^{\\theta Z_{0}}]<+\\infty.\\] * Stability: \\(\\gamma<a=\\mathbb{E}[T_{1}-T_{0}]\\) see Theorem 2.1. The subadditive property of \\(Z\\) directly implies the following property (which is proved in Lemma 4.1): for any monotone separable network that satisfies assumption **(AZ)**, the following limit \\[\\Lambda_{Z}(\\theta) = \\lim_{n\\to\\infty}\\frac{1}{n}\\log\\mathbb{E}\\left[e^{\\theta Z_{[1,n ]}(N^{0})}\\right], \\tag{2.4}\\] exists in \\(\\mathbb{R}\\cup\\{+\\infty\\}\\) for all \\(\\theta\\). Note that the subadditive property of \\(Z\\) is valid regardless of the point process \\(N\\) (see Lemma 2.2). Like in the study of the stability of the network, it turns out that the right quantity to look at is \\(Z_{[m,n]}(N^{0})\\) where \\(N^{0}\\) is the degenerate input point process with all its point equal to \\(0\\). We also define: \\[\\Lambda_{T}(\\theta) = \\log\\mathbb{E}\\left[e^{\\theta(T_{1}-T_{0})}\\right].\\] **Theorem 2.2**.: _Under previous assumptions, the tail asymptotics of the stationary maximal dater is given by,_ \\[\\lim_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}(Z>x)=-\\theta^{*}<0,\\] _where \\(\\theta^{*}=\\sup\\left\\{\\theta>0,\\,\\Lambda_{T}(-\\theta)+\\Lambda_{Z}(\\theta)<0\\right\\}\\)._ It is relatively easy to see that under our light-tailed assumption the stationary maximal dater \\(Z\\) will be light-tailed (see Corollary 3 in [4]). The main contribution of Theorem 2.2 is to give an explicit way of computing the rate of decay of the tail distribution of \\(Z\\). We refer the interested reader to [11] for more details on the computation of \\(\\Lambda_{Z}\\) in the case of (max,plus)-linear networks. In Section 3, we continue the study of our example and deal with the case of queues in tandem under various stochastic assumptions. This case of study allows us to show a phase transition phenomena and to compare our theorem with results of the literature. Note that in the context of heavy-tailed asymptotics, the moment generating function is infinite for all \\(\\theta>0\\). There is no general result for the tail asymptotics of the maximal dater of a monotone separable network. However the methodology derived by Baccelli and Foss [4] for subexponential distributions allows to get exact asymptotics for (max,plus)-linear networks [6] and generalized Jackson networks [5]. ## 3. A Case of Study: Queues in tandem ### The impact of dependence We continue our example and consider a stable \\(G/G/1/\\infty\\to./G/1/\\infty\\) tandem queue where \\(\\{\\sigma_{n}^{(i)}\\}\\) is the sequence of service times in station \\(i=1,2\\) and \\(\\{\\tau_{n}\\}\\) is the sequence of inter-arrival times at the first station. We assume that the sequences \\(\\{(\\sigma_{n}^{(1)},\\sigma_{n}^{(2)}\\}\\) and \\(\\{\\tau_{n}\\}_{n}\\) are sequences of i.i.d. random variables such that \\(\\gamma=\\max\\left(\\mathbb{E}[\\sigma_{1}^{(1)}],\\mathbb{E}[\\sigma_{1}^{(2)}] \\right)<\\mathbb{E}[\\tau_{1}]\\). We consider two cases: * case 1: the sequences \\(\\{\\sigma_{n}^{(1)}\\}\\), \\(\\{\\sigma_{n}^{(2)}\\}\\), \\(\\{\\tau_{n}\\}\\) are independent. * case 2: the sequences \\(\\{\\sigma_{n}^{(1)}\\}\\) and \\(\\{\\tau_{n}\\}\\) are independent and we have \\(\\sigma_{n}^{(2)}=\\sigma_{n}^{(1)}\\). We denote \\(\\Lambda_{i}(\\theta)=\\log\\mathbb{E}[\\exp(\\theta\\sigma_{1}^{(i)})]\\) and \\(\\delta=\\sup\\{\\theta\\geq 0,\\,\\mathbb{E}\\left[e^{\\theta\\sigma_{1}^{(1)}}\\right]<\\infty\\}\\). 
A direct application of Theorem 2.2 gives an extension of the results of Ganesh [9]: **Corollary 3.1**.: _The tail asymptotics of the stationary end to end delay for two queues in tandem is given by_ \\[\\lim_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}(Z>x)=-\\theta^{*},\\] _where_ * _in case 1:_ \\(\\theta^{*}=\\min(\\theta^{1},\\theta^{2})\\) _with_ \\(\\theta^{i}=\\sup\\{\\theta>0,\\,\\Lambda_{i}(\\theta)+\\Lambda_{T}(-\\theta)<0\\}\\)_;_ * _in case 2:_ \\(\\theta^{*}=\\min\\left(\\theta^{1},\\frac{\\delta}{2}\\right)\\)_._ In case 1, \\(\\theta^{i}\\) is the rate of exponential decay for the tail distribution of the stationary workload of a single server queue with interarrival \\(\\tau_{n}\\) and service time \\(\\sigma_{n}^{(i)}\\) and we have \\(\\theta^{*}=\\min(\\theta^{1},\\theta^{2})\\). It is well-known that the stability of such a network is constraint by the \"slowest\" component. Here we see that in a large deviations regime, the \"bad\" behavior of the network is due toa \"bottleneck\" component (which is not necessarily the same as the \"slowest\" component in average). Note that in the particular case where the random variables \\(\\sigma_{n}^{(1)},\\sigma_{n}^{(2)},\\tau_{n}\\) are exponentially distributed with mean \\(1/\\mu^{1},1/\\mu^{2},a\\), we have \\(\\theta^{i}=\\mu^{i}-a^{-1}\\), and in this case the \"slowest\" component in average is also the \"bottleneck\" component in the large deviations regime. In the case where the service times are the same at both stations, Corollary 3.1 shows that the tail behavior of the random variable \\(\\sigma_{1}^{(1)}\\) described by \\(\\delta\\) matters. To simplify and to get a parametric model, assume that the arrival process is Poisson with intensity \\(\\lambda:=a^{-1}\\) and the service times are exponentially distributed with mean \\(1/\\mu\\). Then depending on the intensity of the arrival process \\(\\lambda\\), two situations may occur: \\[\\lambda\\leq\\mu/2 \\Rightarrow \\theta^{*}=\\mu/2,\\] \\[\\lambda>\\mu/2 \\Rightarrow \\theta^{*}=\\mu-\\lambda.\\] In words, we have 1. if \\(\\lambda<\\mu/2\\), then the tail asymptotics of the end-to-end delay is the same as the total service requirement of a single customer; 2. if \\(\\lambda>\\mu/2\\), then the tail asymptotics of the end-to-end delay is the same as in the independent case. This shows that the behavior of tandems differs from that of a single server queue. In particular Anantharam [1] shows that for \\(GI/GI/1\\) queues, the build-up of large delays can happen in one of two ways: * If the service times have exponential tails, then it involves a large number of customers (whose inter-arrival and service times differ from their mean values). * If the service times do not have exponential tails, then large delays are caused by the arrival of a single customer with large service requirement. We see that the first behavior is still valid for queues in tandem when the service times are independent at each station or if the intensity of the arrival process is sufficiently large. In contrast, when the service times are the same at both station, we see that a single customer can create large delays in the network even under the assumption of exponential service times (if the intensity of arrivals is sufficiently small). Note that this phenomena is rather simple and results intrinsically from the fact that the network considered is of dimension greater than \\(2\\) (i.e. one cannot get such a phenomena with a single server queue). 
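As a numerical sanity check of this corollary (illustrative only; the exponential example and the bisection tolerance are our own choices), one can locate \\(\\theta^{*}\\) from the cumulant condition \\(\\Lambda_{i}(\\theta)+\\Lambda_{T}(-\\theta)<0\\) and compare it with the closed form \\(\\mu^{i}-a^{-1}\\) quoted above.

```python
import math

def theta_star(service_rate, arrival_rate, theta_max):
    """sup{theta > 0 : Lambda_i(theta) + Lambda_T(-theta) < 0} by bisection,
    for exponential service (rate service_rate) and Poisson arrivals (rate arrival_rate)."""
    def cumulant_sum(theta):
        # log E[exp(theta*sigma)] + log E[exp(-theta*tau)] for exponential variables
        return (math.log(service_rate / (service_rate - theta))
                + math.log(arrival_rate / (arrival_rate + theta)))
    lo, hi = 0.0, theta_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cumulant_sum(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

mu, lam = 2.0, 0.8                           # service rate and arrival intensity (placeholders)
t1 = theta_star(mu, lam, mu - 1e-9)
print(t1)                                    # case 1, per station: should be close to mu - lam
print(min(t1, mu / 2))                       # case 2: min(theta^1, delta/2), with delta = mu here
```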
Proof.: Recall that we have \\[Z_{[1,n]}(N^{0})=\\sup_{1\\leq k\\leq n}\\sum_{i=1}^{k}\\sigma_{i}^{(1)}+\\sum_{i=k}^ {n}\\sigma_{i}^{(2)}.\\] In case 1, we have \\[\\log\\mathbb{E}\\left[e^{\\theta Z_{[1,n]}(N^{0})}\\right] \\leq \\log\\left(\\sum_{k=1}^{n}e^{k\\Lambda_{1}(\\theta)+(n-k)\\Lambda_{2}( \\theta)}\\right)\\] \\[\\leq \\log n+n\\max\\left(\\Lambda_{1}(\\theta),\\Lambda_{2}(\\theta)\\right).\\] Hence we have \\(\\Lambda_{Z}(\\theta)=\\max\\left(\\Lambda_{1}(\\theta),\\Lambda_{2}(\\theta)\\right)\\) and the corollary follows. In case 2, we have \\(Z_{[1,n]}(N^{0})=\\sum_{i=1}^{n}\\sigma_{i}^{(1)}+\\max_{i}\\sigma_{i}^{(1)}=\\max_{i }\\left(2\\sigma_{i}^{(1)}+\\sum_{j\ eq i}\\sigma_{j}^{(1)}\\right)\\), hence we have \\[\\log\\mathbb{E}\\left[e^{\\theta Z_{[1,n]}(N^{0})}\\right] \\geq \\max\\left(n\\Lambda_{1}(\\theta),\\Lambda_{1}(2\\theta)\\right)\\text{ and,}\\] \\[\\log\\mathbb{E}\\left[e^{\\theta Z_{[1,n]}(N^{0})}\\right] \\leq (n-1)\\Lambda_{1}(\\theta)+\\log n+\\Lambda_{1}(2\\theta).\\] It follows that \\[\\Lambda_{Z}(\\theta)=\\left\\{\\begin{array}{ll}\\Lambda_{1}(\\theta)&,\\;\\theta< \\eta/2\\\\ \\infty&,\\;\\theta>\\eta/2\\end{array}\\right.\\] and the corollary follows. ### Comparison with the literature In the context of two queues in tandem, if we define \\[Y_{n}=\\sup_{-n\\leq q\\leq 0}\\sum_{k=-n}^{q}\\sigma_{k}^{(1)}+\\sum_{k=q}^{0}\\sigma _{k}^{(2)}-(T_{0}-T_{-n}),\\] then we have in view of (2.3), \\(Z=\\sup_{n}Y_{n}\\). The supremum of a stochastic process has been extensively studied in queueing theory but we do not know of any general results that would allow to derive Corollary 3.1. To end this section and to make the connection with the existing literature, we state the following result **Corollary 3.2**.: _Consider the system of queues in tandem descried above. Under assumptions of Theorem 2.2 and if_ 1. _the sequence_ \\(\\{Y_{n}/n\\}\\) _satisfies a large deviation principle (LDP) with a good rate function I;_ 2. _there exists_ \\(\\epsilon>0\\) _such that_ \\(\\Lambda_{Z}(\\theta^{*}+\\epsilon)<\\infty\\)_,_ _where \\(\\theta^{*}\\) is defined as in Theorem 2.2. Then we have_ \\[\\lim_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}(Z>x)=-\\theta^{*}=-\\inf_{\\alpha>0} \\frac{I(\\alpha)}{\\alpha}. \\tag{3.1}\\] This kind of result has been extensively studied in the queueing literature (we refer to the work of Duffy, Lewis and Sullivan [8]). However, we see that considering the moment generating function instead of the rate function allows us to get a more general result than (3.1) since we do not require the assumption (2) on the tail. Indeed this assumption ensures that the tail asymptotics of \\(\\mathbb{P}(Y_{n}>nc)\\) for a single \\(n\\) value cannot dominate those of \\(\\mathbb{P}(Z>x)\\). In this case, equation (3.1) has a nice interpretation: the natural drift of the process \\(Y_{n}\\) is \\(\\mu n\\), where \\(\\mu<0\\). The quantity \\(I(\\alpha)\\) can be seen as the cost for changing the drift of this process to \\(\\alpha>0\\). Now in order to reach level \\(x\\), this drift has to last for a time \\(x/\\alpha\\). Hence the total cost for reaching level \\(x\\) with drift \\(\\alpha\\) is \\(xI(\\alpha)/\\alpha\\) and the process naturally choose the drift with the minimal associated cost. As already discussed, this heuristic is valid only if an assumption as (2) holds. Note also that in our framework, we do not assume any LDP to hold for the sequence \\(\\{Y_{n}/n\\}\\). 
In particular, as shown by Corollary 3.1, the computation of the moment generating function \(\Lambda_{Z}\) is much easier than deriving an LDP for \(\{Y_{n}/n\}\). Lastly, we should stress that for general monotone separable networks, the maximal dater \(Z\) cannot be expressed as the supremum of a simple stochastic process, in which case the derivation of the tail asymptotics of \(Z\) requires new techniques. Proof.: We have only to show that \(\theta^{*}=\inf_{\alpha>0}\frac{I(\alpha)}{\alpha}\). Thanks to Varadhan's Integral Lemma (see Theorem 4.3.1 in [7]), we have \[\lim_{n\to\infty}\frac{1}{n}\log\mathbb{E}\left[e^{\theta Y_{n}}\right]=\Lambda(\theta)=\sup_{x}\{\theta x-I(x)\},\] for \(\theta<\theta^{*}+\epsilon\), where \(\Lambda(\theta)=\Lambda_{Z}(\theta)+\Lambda_{T}(-\theta)\). Then, the corollary follows from the following observations for \(\theta>0\), \[\theta<\inf_{\alpha>0}\frac{I(\alpha)}{\alpha} \Leftrightarrow \theta\alpha-I(\alpha)<0,\,\forall\alpha\] \[\Leftrightarrow \sup_{\alpha}\{\theta\alpha-I(\alpha)\}=\Lambda(\theta)<0.\] ## 4. Proof of the tail asymptotics ### Upper \(G/G/1/\infty\) queue and lower bound for the maximal dater The material of this subsection is not new and may be found in various references (that are given in what follows). For the sake of completeness, we include all the proofs. We now derive upper and lower bounds for the stationary maximal dater \(Z\). These bounds allow us to prove Theorem 2.1 and will be the main tools for the study of large deviations. We first derive a lower bound that can also be found in the textbook [2] (see the proof of Theorem 2.11.3). **Proposition 4.1**.: _We have the following lower bound_ \[Z\geq\sup_{n\geq 0}\left(Z_{[-n,0]}(N^{0})+T_{-n}-T_{0}\right).\] Proof.: For \(n\) fixed, let \(N^{n}\) be the point process with points \(T_{j}^{n}=T_{-n}-T_{0}\) for all \(j\). Then \[Z_{[-n,0]} = X_{[-n,0]}(N)-T_{0}\geq X_{[-n,0]}(N^{n})\] \[= X_{[-n,0]}(N^{0})+T_{-n}-T_{0}=Z_{[-n,0]}(N^{0})+T_{-n}-T_{0},\] where we used external monotonicity in the first inequality and homogeneity between the first and second line. Proof of Theorem 2.1, part (b).: Suppose that \(\gamma>a\); then we have \[\liminf_{n\to\infty}\frac{Z_{[-n,0]}(N)}{n}\geq\gamma-a>0,\] which concludes the proof of part (b). We assume now that \(\gamma<a\). We pick an integer \(L\geq 1\) such that \[\mathbb{E}\left[Z_{[-L,-1]}(N^{0})\right]<La, \tag{4.1}\] which is possible in view of Lemma 2.3. Without loss of generality, we assume that \(T_{0}=0\). Part (a) of Theorem 2.1 follows from the following proposition (that can be found in [4]): **Proposition 4.2**.: _The stationary maximal dater \(Z\) is bounded from above by the stationary response time \(\hat{R}\) in the \(G/G/1/\infty\) queue with service times_ \[\hat{s}_{n}:=Z_{[L(n-1)+1,Ln]}(N^{0})\] _and inter-arrival times \(\hat{\tau}_{n}:=T_{Ln}-T_{L(n-1)}\), where \(L\) is the integer defined in (4.1). Since \(\mathbb{E}[\hat{s}_{1}]<\mathbb{E}[\hat{\tau}_{1}]=La\), this queue is stable. With the convention \(\sum_{0}^{-1}=0\), we have_ \[Z\leq\hat{s}_{0}+\sup_{k\geq 0}\sum_{i=-k}^{-1}\left(\hat{s}_{i}-\hat{\tau}_{i+1}\right).\] Proof.: To an input process \(N\), we associate the following upper bound process, \(N^{+}=\{T_{n}^{+}\}\), where \(T_{n}^{+}=T_{kL}\) if \(n=(k-1)L+1,\ldots,kL\).
Note that \\(T_{n}^{+}\\geq T_{n}\\) for all \\(n\\). Then for all \\(n\\), since we assumed \\(T_{0}=0\\), we have thanks to the external monotonicity, \\[X_{[-n,0]}(N)=Z_{[-n,0]}(N)\\leq X_{[-n,0]}(N^{+})=Z_{[-n,0]}(N^{+}). \\tag{4.2}\\] We show that for all \\(k\\geq 1\\), \\[Z_{[-kL+1,0]}(N^{+})\\leq\\hat{s}_{0}+\\sup_{-k+1\\leq i\\leq 0}\\sum_{j=-i}^{-1}( \\hat{s}_{j}-\\hat{\\tau}_{j+1}). \\tag{4.3}\\] This inequality will follow from the two next lemmas **Lemma 4.1**.: _Assume \\(T_{0}=0\\). For any \\(m<n\\leq 0\\),_ \\[Z_{[m,0]}(N)\\leq Z_{[n,0]}(N)+(Z_{[m,n-1]}(N)-\\tau_{n-1})^{+}.\\] Proof.: Assume first that \\(Z_{[m,n-1]}(N)-\\tau_{n-1}\\leq 0\\), which is exactly \\(X_{[m,n-1]}(N)\\leq T_{n}\\). Then by the separability property, we have \\[Z_{[m,0]}(N)=X_{[m,0]}(N)=X_{[n,0]}(N)=Z_{[n,0]}(N).\\] Assume now that \\(Z_{[m,n-1]}(N)-\\tau_{n-1}>0\\). Let \\(N^{\\prime}=\\{T_{j}^{\\prime}\\}\\) be the input process defined as follows \\[\\forall j\\leq n-1,\\quad T_{j}^{\\prime} = T_{j},\\] \\[\\forall j\\geq n,\\quad T_{j}^{\\prime} = T_{j}+Z_{[m,n-1]}(N)-\\tau_{n-1}.\\] Then we have \\(N^{\\prime}\\geq N\\) and \\(X_{[m,n-1]}(N^{\\prime})\\leq T_{n}^{\\prime}\\), hence by the external monotonicity, the separability and the homogeneity properties, we have \\[Z_{[m,0]}(N) = X_{[m,0]}(N)\\leq X_{[m,0]}(N^{\\prime})\\] \\[= X_{[n,0]}(N^{\\prime})=X_{[n,0]}(N)+Z_{[m,n-1]}(N)-\\tau_{n-1}\\] \\[= Z_{[n,0]}(N)+Z_{[m,n-1]}(N)-\\tau_{n-1}.\\] From this lemma we derive directly **Lemma 4.2**.: _Assume \\(T_{0}=0\\). For any \\(n<0\\),_ \\[Z_{[n,0]}(N)\\leq\\sup_{n\\leq k\\leq 0}\\left(\\sum_{i=k}^{-1}(Z_{i}-\\tau_{i+1}) \\right)+Z_{0},\\] _with the convention \\(\\sum_{0}^{-1}=0\\)_Applying Lemma 4.2 to \\(Z_{[-kL+1,0]}(N^{+})\\) gives (4.3). We now return to the proof of Proposition 4.2. We have \\[Z = \\lim_{k\\to\\infty}Z_{[-kL+1,0]}\\] \\[= \\sup_{k\\geq 0}Z_{[-kL+1,0]}(N)\\] \\[\\leq \\sup_{k\\geq 0}Z_{[-kL+1,0]}(N^{+})\\quad\\mbox{thanks to \\eqref{eq:2.2}}\\] \\[\\leq \\sup_{k\\geq 0}\\left(\\hat{s}_{0}+\\sup_{-k+1\\leq i\\leq 0}\\sum_{j=-i} ^{-1}(\\hat{s}_{j}-\\hat{\\tau}_{j+1})\\right)=\\hat{R},\\quad\\mbox{thanks to \\eqref{eq:2.2}}.\\] from Lemma 4.2. ### Moment generating function **Lemma 4.1**.: _The function \\(\\Lambda_{Z}(.)\\) defined by (2.4) is a proper convex function with \\(\\Lambda_{Z}(\\theta)<\\infty\\) for all \\(\\theta<\\eta\\) and \\(\\Lambda_{Z}(\\theta)=\\infty\\) for all \\(\\theta>\\eta\\), where \\(\\eta=\\sup\\left\\{\\theta,\\,\\mathbb{E}[\\exp(\\theta Z_{0})]<\\infty\\right\\}\\)._ Proof.: Let \\[\\Lambda_{Z,n}(\\theta) = \\log\\mathbb{E}\\left[e^{\\theta\\frac{Z_{[1,n]}(N^{0})}{n}}\\right].\\] Thanks to the subadditive property of \\(Z\\), we have, \\[Z_{[1,n+m]}(N^{0})\\leq Z_{[1,n]}(N^{0})+Z_{[n+1,n+m]}(N^{0}),\\] and \\(Z_{[1,n]}(N^{0})\\) and \\(Z_{[n+1,n+m]}(N^{0})\\) are independent. Hence for \\(\\theta\\geq 0\\), we have, \\[\\Lambda_{Z,n+m}((n+m)\\theta)\\leq\\Lambda_{Z,n}(n\\theta)+\\Lambda_{Z,m}(m\\theta).\\] Hence we can define for any \\(\\theta\\geq 0\\), \\[\\Lambda_{Z}(\\theta) = \\lim_{n\\to\\infty}\\frac{1}{n}\\log\\mathbb{E}\\left[e^{\\theta Z_{[1, n]}(N^{0})}\\right]=\\lim_{n\\to\\infty}\\frac{\\Lambda_{Z,n}(n\\theta)}{n}=\\inf_{n \\geq 1}\\frac{\\Lambda_{Z,n}(n\\theta)}{n},\\] as an extended real number. The fact that \\(\\Lambda_{Z}\\) is a proper convex function follows from Lemma 2.3.9 of [7]. 
The last fact follows from, \\[\\Lambda_{Z}(\\theta)\\leq\\log\\mathbb{E}\\left[e^{\\theta Z_{1}}\\right]\\mbox{ and, }\\log\\mathbb{E}\\left[e^{\\theta Z_{1}}\\right]\\leq\\Lambda_{Z,n}(n\\theta)\\mbox{ for }\\theta\\geq 0\\mbox{ and all }n\\geq 1.\\] We define \\[\\Lambda(\\theta)=\\Lambda_{T}(-\\theta)+\\Lambda_{Z}(\\theta)\\mbox{ and }\\Lambda_{n}( \\theta)=\\Lambda_{T}(-\\theta)+\\Lambda_{Z,n}(\\theta).\\] Note that \\(\\Lambda_{Z}(.)\\) and \\(\\Lambda_{T}(.)\\) are proper convex functions, hence \\(\\Lambda(.)\\) is a well defined convex function. Recall that \\(\\theta^{*}\\) is defined as follows: \\[\\theta^{*}=\\sup\\{\\theta>0,\\,\\Lambda(\\theta)<0\\}.\\] The following lemma is used repeatedly in what follows, **Lemma 4.2**.: _Under the foregoing assumptions, we have \\(\\theta^{*}>0\\) and_ \\[\\Lambda(\\theta)<0 \\text{if}\\quad\\theta\\in(0,\\theta^{*}),\\] \\[\\Lambda(\\theta)>0 \\text{if}\\quad\\theta>\\theta^{*}.\\] Proof.: Let \\[\\theta_{n}=\\sup\\{\\theta>0,\\,\\Lambda_{n}(n\\theta)<0\\}. \\tag{4.4}\\] We fix \\(n\\) such that \\(\\mathbb{E}[Z_{[1,n]}(N^{0})]\\leq na\\), which is possible in view of the stability condition. We first show that \\(\\theta_{n}>0\\) and \\[\\Lambda_{n}(n\\theta)<0 \\text{if}\\quad\\theta\\in(0,\\theta_{n}), \\tag{4.6}\\] \\[\\Lambda_{n}(n\\theta)>0 \\text{if}\\quad\\theta>\\theta_{n} \\tag{4.5}\\] The function \\(\\theta\\mapsto\\Lambda_{n}(n\\theta)\\) is convex, continuous and differentiable on \\([0,\\eta)\\). Hence we have \\[\\Lambda_{n}(n\\delta)=\\delta\\left(\\mathbb{E}[Z_{[1,n]}(N^{0})]-a\\right)+o( \\delta),\\] which is less than zero for sufficiently small \\(\\delta>0\\). Hence, the set over which the supremum in the definition of \\(\\theta_{n}\\) is taken is not empty and \\(\\theta_{n}>0\\). Now (4.5) and (4.6) follow from the definition of \\(\\theta_{n}\\), the convexity of \\(\\theta\\mapsto\\Lambda_{n}(n\\theta)\\) and the fact that \\(\\Lambda_{n}(0)=0\\). We now show that \\(\\theta_{n}\\to\\theta^{*}\\) as \\(n\\to\\infty\\). We have for \\(\\theta\\geq 0\\) \\[\\lim_{n\\to\\infty}\\frac{\\Lambda_{n}(n\\theta)}{n}=\\inf_{n\\geq 1}\\frac{\\Lambda_{n} (n\\theta)}{n}=\\Lambda(\\theta).\\] Hence for \\(\\theta\\geq 0\\), we have \\(\\frac{\\Lambda_{n}(n\\theta)}{n}\\geq\\Lambda(\\theta)\\) and \\[\\forall\\theta\\in(0,\\theta_{n}),\\quad\\Lambda(\\theta)\\leq\\frac{\\Lambda_{n}(n \\theta)}{n}<0.\\] This implies that \\(\\theta^{*}\\geq\\theta_{n}>0\\). If \\(\\theta^{*}<\\infty\\), we can choose \\(\\epsilon>0\\) such that \\(\\theta^{*}-\\epsilon>0\\) and then we have \\(\\Lambda_{n}(n(\\theta^{*}-\\epsilon))/n\\to\\Lambda(\\theta^{*}-\\epsilon)<0\\). Hence for sufficiently large \\(n\\), we have \\(\\frac{\\Lambda_{n}(n(\\theta^{*}-\\epsilon))}{n}<0\\), hence \\(\\theta^{*}-\\epsilon\\leq\\theta_{n}\\), and we proved that \\(\\theta_{n}\\to\\theta^{*}\\). \\(\\Lambda(.)\\) is a convex function and since \\(\\Lambda(0)=0\\), the lemma follows. If \\(\\theta^{*}=\\infty\\), we still have \\(\\theta_{n}\\to\\infty\\) (that will be needed in proof of Lemma 4.4) by the same argument as above with \\(\\theta^{*}-\\epsilon\\) replaced by any real number. ### Lower Bound **Lemma 4.3**.: _Under previous assumptions, we have_ \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}(Z>x)\\geq-\\theta^{*}.\\] Proof.: We have (see Proposition 4.1) \\[Z\\geq\\sup_{n}\\left\\{Z_{[-n,0]}(N^{0})+T_{-n}-T_{0}\\right\\}. 
\\tag{4.7}\\] We denote \\(Y_{n}=Z_{[-n,1]}(N^{0})+T_{-n}+T_{0}\\), the lemma will follow from the following fact: \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}(\\sup_{n}Y_{n}>x)\\geq-\\theta^{*}.\\]Note that we have \\[\\lim_{n\\to\\infty}\\frac{1}{n}\\log\\mathbb{E}\\left[e^{\\theta Y_{n}}\\right]=\\Lambda( \\theta).\\] In particular, we are in the setting of Gartner-Ellis theorem see Theorem 2.3.6 in [7] which will be the main tool of the proof. First note that we only need to consider the case \\(\\theta^{*}<\\infty\\). We consider first the case where there exists \\(\\theta>\\theta^{*}\\) such that \\(\\Lambda(\\theta)<\\infty\\). First note that the function \\(\\theta\\mapsto\\Lambda(\\theta)\\) is convex, hence the left-hand derivatives \\(\\Lambda^{\\prime}(\\theta-)\\) and the right-hand derivatives \\(\\Lambda^{\\prime}(\\theta+)\\) exist for all \\(\\theta>0\\). Moreover, we have \\(\\Lambda^{\\prime}(\\theta-)\\leq\\Lambda^{\\prime}(\\theta+)\\) and the function \\(\\theta\\mapsto\\frac{1}{2}(\\Lambda^{\\prime}(\\theta-)+\\Lambda^{\\prime}(\\theta+))\\) is non-decreasing, hence \\(\\Lambda^{\\prime}(\\theta)=\\Lambda^{\\prime}(\\theta-)=\\Lambda^{\\prime}(\\theta+)\\) except for \\(\\theta\\in\\Delta\\), where \\(\\Delta\\) is at most countable. Since \\(\\Lambda(\\theta)<\\infty\\) for \\(\\theta>\\theta^{*}\\), we have \\(\\Lambda(\\theta^{*})=0\\) and \\(\\Lambda^{\\prime}(\\theta^{*}+)>0\\). To prove this, assume that \\(\\Lambda^{\\prime}(\\theta^{*}+)=0\\). Take \\(\\theta<\\theta^{*}\\), thanks to Lemma 4.2, we have \\(\\Lambda(\\theta)<0\\). Choose \\(\\epsilon>0\\) such that \\(0<\\Lambda(\\theta^{*}+\\epsilon)<\\epsilon|\\Lambda(\\theta)|\\). We have \\[\\frac{\\Lambda(\\theta^{*}+\\epsilon)}{\\epsilon}<\\frac{-\\Lambda(\\theta)}{\\theta^{ *}-\\theta},\\] which contradicts the convexity of \\(\\Lambda(\\theta)\\). Hence, we can find \\(t\\leq\\theta^{*}+\\epsilon\\) such that \\[0<\\Lambda(t),\\quad t\ otin\\Delta.\\] Note that these conditions imply \\(t>\\theta^{*}\\) and \\(\\Lambda^{\\prime}(t)\\geq\\Lambda^{\\prime}(\\theta^{*}+)>0\\). Thanks to Gartner-Ellis theorem (Theorem 2.3.6 in [7]), we have \\[\\liminf_{n\\to\\infty}\\frac{1}{n}\\log\\mathbb{P}(Y_{n}>n\\alpha)\\geq-\\inf_{x\\in \\mathcal{F},\\,x>\\alpha}\\Lambda^{*}(x), \\tag{4.8}\\] where \\(\\mathcal{F}\\) is the set of exposed point of \\(\\Lambda^{*}\\) and \\(\\Lambda^{*}(x)=\\sup_{\\theta>0}(\\theta x-\\Lambda(\\theta))\\). Note that from the monotonicity of \\(\\theta x-\\Lambda(\\theta)\\) in \\(x\\) as \\(\\theta\\) is fixed, we deduce that \\(\\Lambda^{*}\\) is non-decreasing. Moreover take \\(\\alpha=\\Lambda^{\\prime}(t)\\), then \\(\\Lambda^{*}(\\alpha)=t\\alpha-\\Lambda(t)\\) and \\(\\alpha\\in\\mathcal{F}\\) by Lemma 2.3.9 of [7]. Given \\(x>0\\), define \\(n=\\lceil x/\\alpha\\rceil\\). We have \\[\\frac{1}{x}\\log\\mathbb{P}(\\sup_{n}Y_{n}>x)\\geq\\frac{1}{n\\alpha}\\log\\mathbb{P}( Y_{n}\\geq n\\alpha),\\] taking the limit in \\(x\\) and \\(n\\) (while \\(\\alpha=\\Lambda^{\\prime}(t)\\) is fixed) gives thanks to (4.8), \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}(\\sup_{n}Y_{n}>x)\\geq-\\frac{t \\alpha-\\Lambda(t)}{\\alpha}\\geq-t\\geq-\\theta^{*}-\\epsilon.\\] We consider now the case where for all \\(\\theta>\\theta^{*}\\), we have \\(\\Lambda(\\theta)=\\infty\\), i.e. \\(\\theta^{*}=\\eta\\) defined in Lemma 4.1. 
Take \\(K>0\\) and define \\(\\tilde{Z}^{K}_{[n,m]}=Z_{[n,m]}(N^{0})\\prod_{i=n}^{m}\\mathds{1}(Z_{i}\\leq K)\\) and \\(\\tilde{Z}^{K}=\\sup_{n\\geq 0}(\\tilde{Z}^{K}_{[-n,0]}+T_{-n})\\). By (4.7), we have \\(Z\\geq\\tilde{Z}^{K}\\). It is easy to see that the proof of Lemma 4.1 is still valid (note that the subadditive property carries over to \\(\\tilde{Z}^{K}_{[n,m]}\\)) and the following limit exists \\[\\tilde{\\Lambda}^{K}_{Z}(\\theta)=\\lim_{n\\to\\infty}\\frac{1}{n}\\log\\mathbb{E} \\left[e^{\\theta\\tilde{Z}^{K}_{[1,n]}}\\right]=\\inf_{n}\\frac{1}{n}\\log\\mathbb{E} \\left[e^{\\theta\\tilde{Z}^{K}_{[1,n]}}\\right].\\] Moreover thanks to the subadditive property of \\(Z\\), we have \\(\\mathbb{P}(\\tilde{Z}^{K}_{[1,n]}\\leq nK)=1\\), so that \\(\\tilde{\\Lambda}^{K}_{Z}(\\theta)\\leq\\theta K\\). Hence by the first part of the proof, we have \\[\\liminf_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}(\\tilde{Z}^{K}>x)\\geq-\\tilde{ \\theta}^{K},\\] with \\(\\tilde{\\theta}^{K}=\\sup\\{\\theta>0,\\,\\tilde{\\Lambda}^{K}_{Z}(\\theta)+\\Lambda_{T }(-\\theta)<0\\}\\). We now prove that \\(\\tilde{\\theta}^{K}\\to\\eta\\) as \\(K\\) tends to infinity which will conclude the proof. Note that for any fixed \\(\\theta\\geq 0\\), the function \\(\\tilde{\\Lambda}^{K}_{Z}(\\theta)\\) is nondecreasing in \\(K\\) and \\(\\lim_{K\\to\\infty}\\tilde{\\Lambda}_{Z}^{K}(\\theta)=\\tilde{\\Lambda}_{Z}(\\theta)\\leq \\Lambda_{Z}(\\theta)\\). This directly implies that \\(\\tilde{\\theta}^{K}\\geq\\eta\\). Take \\(\\theta>\\eta\\), so that \\(\\Lambda_{Z}(\\theta)=\\infty\\). If \\(\\tilde{\\Lambda}_{Z}(\\theta)<\\infty\\), then for all \\(K\\), we have \\(\\tilde{\\Lambda}_{Z}^{K}(\\theta)\\leq\\tilde{\\Lambda}_{Z}(\\theta)<\\infty\\). But, we have \\(\\tilde{\\Lambda}_{Z}^{K}(\\theta)=\\inf_{n}\\frac{1}{n}\\log\\mathbb{E}\\left[e^{ \\theta\\tilde{\\Lambda}_{[1,n]}^{K}}\\right]\\), so that there exists \\(n\\) such that \\[\\mathbb{E}\\left[e^{\\theta Z_{[1,n]}(N^{0})},\\,\\max(Z_{1},\\ldots,Z_{n})\\leq K \\right]\\leq e^{n(\\tilde{\\Lambda}_{Z}^{K}(\\theta)+1)}\\leq e^{n(\\tilde{\\Lambda}_ {Z}(\\theta)+1)},\\] but the left-hand side tends to infinity as \\(K\\to\\infty\\). Hence we proved that for all \\(\\theta>\\eta\\), we have \\(\\tilde{\\Lambda}_{Z}^{K}(\\theta)\\to\\infty\\) as \\(K\\to\\infty\\). This implies that \\(\\tilde{\\theta}^{K}\\to\\eta\\) as \\(K\\to\\infty\\). ### Upper bound **Lemma 4.4**.: _Under previous assumptions, we have_ \\[\\limsup_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}(Z>x)\\leq-\\theta^{*}.\\] Proof.: For \\(L\\) sufficiently large, we have with the convention \\(\\sum_{0}^{-1}=0\\) (see Proposition 4.2), \\[Z\\leq\\sup_{n\\geq 0}\\left(\\sum_{i=-n}^{-1}\\hat{s}_{i}(L)-\\hat{\\tau}_{i+1}(L) \\right)+\\hat{s}_{0}(L)=:V(L)+\\hat{s}_{0}(L).\\] We will show that under previous assumptions, we have \\[\\limsup_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}(V(L)+\\hat{s}_{0}(L)>x)\\leq-\\theta _{L}, \\tag{4.9}\\] where \\(\\theta_{L}\\) is defined as in (4.4) and the lemma will follow since \\(\\theta_{L}\\to\\theta^{*}\\) as \\(L\\) tends to infinity (see Lemma 4.2). 
First note that for all \\(\\theta\\in(0,\\theta_{L})\\), we have \\[\\max\\left\\{\\mathbb{E}\\left[e^{\\theta\\hat{s}_{0}(L)}\\right],\\mathbb{E}\\left[e^ {\\theta V(L)}\\right]\\right\\}<\\infty.\\] Hence for \\(\\theta\\in(0,\\theta_{L})\\), we have \\(\\mathbb{E}\\left[e^{\\theta(V(L)+\\hat{s}_{0}(L))}\\right]=\\mathbb{E}\\left[e^{ \\theta V(L)}\\right]\\mathbb{E}\\left[e^{\\theta\\hat{s}_{0}(L)}\\right]\\leq A\\) for some finite constant \\(A\\). Hence by Chernoff's inequality, \\[\\mathbb{P}\\left(V(L)+\\hat{s}_{0}(L)\\geq x\\right)\\leq e^{-\\theta x}\\mathbb{E} \\left[e^{\\theta(V(L)+\\hat{s}_{0}(L))}\\right]\\leq Ae^{-\\theta x}.\\] Since the above holds for all \\(0<\\theta<\\theta_{L}\\), we get \\[\\limsup_{x\\to\\infty}\\frac{1}{x}\\log\\mathbb{P}\\left(V(L)+\\hat{s}_{0}(L)\\geq x \\right)\\leq-\\theta_{L}.\\] ## Acknowledgment The author would like to thank Peter Friz for pointing out a mistake in an earlier version of this paper. ## Appendix: recursion for queues in tandem We consider a \\(G/G/1/\\infty\\to./G/1/\\infty\\) tandem queue, where \\(\\{\\sigma_{n}^{(i)}\\}\\) denotes the sequence of service times in station \\(i=1,2\\) and \\(N=\\{T_{n}\\}\\) is the sequence of arrival times at the first station. For \\(m\\leq k\\leq n\\), we denote by \\(D_{[m,n]}^{(i)}(k)\\) the departure time of customer \\(k\\) from station \\(i=1,2\\) when the network starts empty and is fed by \\(N_{[m,n]}\\). With the notations introduced in Section 2, we have \\(X_{[m,n]}(N)=D_{[m,n]}^{(2)}(n)\\). We now derive the recursion equations satisfied by the \\(D_{[m,n]}\\)'s, \\[D_{[m,n]}^{(1)}(m) = T_{m}+\\sigma_{m}^{(1)},\\] \\[D_{[m,n]}^{(2)}(m) = D_{[m,n]}^{(1)}(m)+\\sigma_{m}^{(2)}=T_{m}+\\sigma_{m}^{(1)}+ \\sigma_{m}^{(2)},\\] \\[D_{[m,n]}^{(1)}(k) = \\max\\left(D_{[m,n]}^{(1)}(k-1),T_{k}\\right)+\\sigma_{k}^{(1)},\\] \\[D_{[m,n]}^{(2)}(k) = \\max\\left(D_{[m,n]}^{(2)}(k-1),D_{[m,n]}^{(1)}(k)\\right)+\\sigma_{ k}^{(2)},\\] for \\(m<k\\leq n\\). From these equations, one can easily check that: \\[D_{[m,n]}^{(1)}(k) = \\sup_{m\\leq j\\leq k}\\left\\{T_{j}+\\sum_{i=j}^{k}\\sigma_{i}^{(1)} \\right\\},\\] \\[D_{[m,n]}^{(2)}(k) = \\sup_{m\\leq j\\leq k}\\left\\{T_{j}+\\sup_{j\\leq\\ell\\leq k}\\sum_{i=j} ^{\\ell}\\sigma_{i}^{(1)}+\\sum_{i=\\ell}^{k}\\sigma_{i}^{(2)}\\right\\},\\] and Equation (2.2) follows. ## References * [1] V. Anantharam. How large delays build up in a \\(GI/G/1\\) queue. _Queueing Systems Theory Appl._, 5(4):345-367, 1989. * [2] F. Baccelli and P. Bremaud. _Elements of Queueing Theory_. Springer-Verlag, 2003. * [3] F. Baccelli and S. Foss. On the saturation rule for the stability of queues. _Journal of Applied Probability_, 32:494-507, 1995. * [4] F. Baccelli and S. Foss. Moments and tails in monotone-separable stochastic networks. _Ann. Appl. Probab._, 14(2):612-650, 2004. * [5] F. Baccelli, S. Foss, and M. Lelarge. Tails in generalized Jackson networks with subexponential service-time distributions. _J. Appl. Probab._, 42(2):513-530, 2005. * [6] F. Baccelli, M. Lelarge, and S. Foss. Asymptotics of subexponential max plus networks: the stochastic event graph case. _Queueing Syst._, 46(1-2):75-96, 2004. * [7] A. Dembo and O. Zeitouni. _Large Deviations Techniques and Applications_. Springer-Verlag, 1998. * [8] K. Duffy, J. T. Lewis, and W. G. Sullivan. Logarithmic asymptotics for the supremum of a stochastic process. _Ann. Appl. Probab._, 13(2):430-445, 2003. * [9] A. Ganesh. Large deviations of the sojourn time for queues in series. 
_Annals of Operations Research_, 79:3-26, 1998. * [10] D. L. Iglehart. Extreme values in the \\(GI/G/1\\) queue. _Ann. Math. Statist._, 43:627-635, 1972. * [11] M. Lelarge. Tail asymptotics for discrete event systems. _VALUETOOLS_, 2006. * [12] A. G. Pakes. On the tails of waiting-time distributions. _J. Appl. Probability_, 12(3):555-564, 1975. Marc Lelarge1 ENS-INRIA 45 rue d'Ulm75005 Paris, France e-mail : [email protected]
A network belongs to the monotone separable class if its state variables are homogeneous and monotone functions of the epochs of the arrival process. This framework contains several classical queueing network models, including generalized Jackson networks, max-plus networks, polling systems, multiserver queues, and various classes of stochastic Petri nets. We use comparison relationships between networks of this class with i.i.d. driving sequences and the \(GI/GI/1/\infty\) queue to obtain the tail asymptotics of the stationary maximal dater under light-tailed assumptions for service times. The exponential rate of decay is given as a function of a logarithmic moment generating function. As an example, we give an explicit computation of this rate for queues in tandem under various stochastic assumptions. _MSC 2000 subject classifications._ 60F10, 60K25. _Key words._ large deviations, queueing networks
Condense the content of the following passage.
arxiv-format/0511092v2.md
# Quantum energies with worldline numerics H. Gies and K. Klingmuller Institute for Theoretical Physics, Philosophenweg 16, D-69120 Heidelberg h.gies@, [email protected] November 3, 2021 ## 1 Introduction Casimir energies and forces are geometry dependent. Determining the geometry dependence is a challenge both experimentally as well as theoretically. Computing the geometry dependence of Casimir energies can be viewed as a special case of the more general problem of evaluating the effects of quantum fluctuations in a background field \\(V(x)\\). For instance, the space(-time) dependence of the background field can then be used to model the Casimir geometry. A universal tool to deal with quantum fluctuations in background fields is given by the effective action \\(\\Gamma[V]\\) which is the generating functional for 1PI correlation functions for \\(V\\). In the present work, we consider a fluctuating real scalar quantum field \\(\\phi\\) interacting with the background potential according to \\(\\sim V(x)\\phi^{2}\\). For this system, the effective action can be evaluated from \\[\\Gamma[V]= -\\ln\\left(\\int{\\cal D}\\phi\\,{\\rm e}^{-\\frac{1}{2}\\int\\phi(-\\partial ^{2}+m^{2}+V)\\phi}\\right)\\] \\[= \\frac{1}{2}\\sum_{\\lambda}\\ \\ln(\\lambda^{2}+m^{2}). \\tag{1}\\] Here \\(\\hbar\\) and \\(c\\) are set to 1. The integral over the Gaussian fluctuations boils down to a sum over the spectrum \\(\\{\\lambda\\}\\) of quantum fluctuations. This spectrum consists of the eigenvalues of the fluctuation operator, \\[(-\\partial^{2}+V(x))\\,\\phi=\\lambda^{2}\\,\\phi. \\tag{2}\\] The relation to Casimir energies becomes most obvious by confining ourselves to time-independent potentials \\(V({\\bf x})\\). Then, the sum over the time-like component of thespectrum can be performed. In Euclidean spacetime, we use \\(-\\partial^{2}=-\\partial_{t}^{2}-\ abla^{2}\\), \\(-\\partial_{t}^{2}\\to p_{t}^{2}\\), and the summation/integration over \\(p_{t}\\) results in \\[E[V]\\equiv\\frac{\\Gamma[V]}{L_{t}}=\\frac{1}{2}\\sum\\omega, \\tag{3}\\] where \\(\\omega^{2}=\\lambda^{2}-p_{t}^{2}\\) denote the spatial (\\(p_{t}\\)-independent) part of the fluctuation spectrum. Here, we have defined the Casimir energy from the effective action by dividing out the extent \\(L_{t}\\) of the Euclidean time direction. The relation to a sum over \"ground-state energies\" \\(\\sim\\frac{1}{2}\\hbar\\omega\\) now becomes obvious. The general strategy for computing \\(E[V]\\) seems straightforward: determine the spectrum of quantum fluctuations and sum over the spectrum. However, this recipe is plagued by a number of profound problems: first, an analytic determination of the spectrum is possible only in very rare, mainly separable, cases. Second, a numerical determination of the spectrum is generally hopeless, since the spectrum can consist of discrete as well as continuous parts and is generically not bounded. Third, the sum over the spectrum is generally divergent, and thus regularization is required; particularly in numerical approaches, regularization can lead to severe stability problems. And finally, an unambiguous renormalization has to be performed, such that the physical parameters are uniquely fixed. For a solution of these problems, the technique of _worldline numerics_ has been developed [1] and has first been applied to Casimir systems in [2]. This technique is based on the string-inspired approach to quantum field theory [3]. 
In this formulation, the (quantum mechanical) problem of finding and summing the spectrum of an operator is mapped onto a Feynman path integral over closed worldlines. For the present scalar case, the effective action then reads \\[\\Gamma[V]=-\\frac{1}{2}\\int_{1/\\Lambda^{2}}^{\\infty}\\frac{\\mathrm{d}T}{T}\\, \\rme^{-m^{2}T}\\,\\mathcal{N}\\int_{x(T)=x(0)}\\mathcal{D}x\\,\\rme^{-\\int_{0}^{T} \\mathrm{d}\\tau\\left(\\frac{x^{2}}{4}+V(x(\\tau))\\right)}. \\tag{4}\\] Now, the effective action, or, more specifically, the Casimir energy, is obtained from an integral over an ensemble of closed worldlines in the given background \\(V(x)\\). This seemingly formal representation can be interpreted in an intuitive manner: A worldline can be viewed as the spacetime trajectory of a quantum fluctuation. The auxiliary integration parameter \\(T\\) is called the propertime and specifies a fictitious \"time\" which the fluctuating particle has at its disposal for traveling along the full trajectory. Larger values of the propertime thus correspond to worldlines with a larger extent in spacetime; hence, the propertime also corresponds to a smooth regulator scale, with, e.g., short propertimes being related to small-distance UV fluctuations.1 Footnote 1: For reasons of definiteness, we have therefore cut off the propertime integral at the lower bound at \\(1/\\Lambda^{2}\\) with the UV cutoff scale \\(\\Lambda\\). Most importantly from a technical viewpoint, the problem of finding and summing over the spectrum is replaced by one single step, namely taking the path integral. Moreover, for any given value of propertime \\(T\\) this path integral is finite. Possible UV divergencies can be analyzed with purely analytical means by studying the small-behavior of the propertime integral, and thus no numerical instabilities are introduced by the regularization procedure. The renormalization can be performed in the standard way, for instance, by analyzing the corresponding Feynman diagrams with the same propertime regulator and by fixing the counterterms with the aid of renormalization conditions accordingly. Further advanced methods which can deal with involved Casimir configurations have been developed during the past years, each with its own respective merits. In particular, we would like to mention the semiclassical approximation [4], a functional-integral approach using boundary auxiliary fields [5], and the optical approximation [6]. These methods are especially useful for analyzing particular geometries by purely or partly analytical means. Of course, a purely analytical evaluation of the path integral again is only possible for very rare cases, but a numerical evaluation is straightforwardly possible with Monte Carlo techniques and can be realized with conceptually simple algorithms, as described in the next section. Applications to Casimir geometries will be presented in Sect. 3 and conclusions are given in Sec. 4. ## 2 Worldline numerics for Casimir systems With the aid of the normalization of the path integral [1], we note that (4) can be written as \\[\\Gamma[V]=-\\frac{1}{2}\\frac{1}{(4\\pi)^{2}}\\int_{1/\\Lambda^{2}}^{\\infty}\\frac{ \\mathrm{d}T}{T^{3}}\\,\\rme^{-m^{2}T}\\,\\left(\\left\\langle\\rme^{-\\int_{0}^{T} \\mathrm{d}\\tau V(x(\\tau))}\\right\\rangle_{x}-1\\right), \\tag{5}\\] where the subtraction of \\(-1\\) ensures that \\(\\Gamma[V=0]=0\\). 
The expectation value in (5) has to be taken with respect to the worldline ensemble, \[\langle\ldots\rangle:=\left(\int_{x(T)=x(0)}\mathcal{D}x\,\ldots\rme^{-\frac{1}{4}\int_{0}^{T}\mathrm{d}\tau\dot{x}^{2}}\right)\left(\int_{x(T)=x(0)}\mathcal{D}x\,\,\rme^{-\frac{1}{4}\int_{0}^{T}\mathrm{d}\tau\dot{x}^{2}}\right)^{-1}. \tag{6}\] In the present work, we focus on the "ideal" Casimir effect induced by real scalar field fluctuations obeying Dirichlet boundary conditions; i.e., the boundary conditions are satisfied at infinitely thin surfaces. This situation can be modeled by choosing \(V(x)=g\int_{\Sigma}\mathrm{d}\sigma\,\delta^{(4)}(x-x_{\sigma})\), where \(\mathrm{d}\sigma\) denotes the integration measure over the surface \(\Sigma\), with \(x_{\sigma}\) being a vector pointing onto the surface. The Dirichlet boundary condition is then strictly imposed by sending the coupling \(g\) to infinity, \(g\to\infty\) [7, 8]. Moreover, we are finally aiming at Casimir forces between disconnected rigid surfaces, which can be derived from the Casimir interaction energy, \[E_{\mathrm{Casimir}}=E[V_{1}+V_{2}]-E[V_{1}]-E[V_{2}], \tag{7}\] where we subtract the Casimir energies of the single surfaces \(V_{1}\) and \(V_{2}\) from that of the combined configuration \(V_{1}+V_{2}\); the former do not contribute to the force. This definition, together with the Dirichlet boundary condition, leads to \[E_{\rm Casimir}=-\frac{1}{2}\frac{1}{(4\pi)^{2}}\int_{0}^{\infty}\frac{\rmd T}{T^{3}}\,{\rm e}^{-m^{2}T}\,\left\langle\Theta_{V}[x]\right\rangle_{x}, \tag{8}\] where \(\Theta_{V}[x]=1\) if a given worldline intersects both surfaces \(\Sigma=\Sigma_{1}+\Sigma_{2}\) represented by the background potentials \(V=V_{1}+V_{2}\), and \(\Theta_{V}[x]=0\) otherwise. This recipe has a simple interpretation: any worldline which intersects both surfaces corresponds to a quantum fluctuation that violates the Dirichlet boundary conditions. Its "removal" from the set of all fluctuations contributes "one unit" to the negative Casimir interaction energy. The worldline numerical algorithm is based on an approximation of the path integral by a sum over a finite ensemble of \(n_{\rm L}\) paths, each of which is characterized by \(N\) discrete points per loop (ppl). These points are obtained by a discretization of the propertime parameter on each loop, \(x_{i}=x(\tau_{i})\), \(i=1,\ldots,N\), with \((x_{i})_{\mu}\in\mathbb{R}\). For an efficient generation of the worldline ensemble which obeys the Gaussian velocity distribution required by (6), various algorithms are available; see [2, 9]. In summary, worldline numerics offers a number of advantages: first, the whole algorithm is independent of the background; no particular symmetry is required. Second, the numerical cost scales only linearly with the parameters \(n_{\rm L}\), \(N\), and the dimensionality of the problem. The numerically most expensive part of the calculation is a diagnostic routine that detects whether a given worldline intersects both surfaces or not, returning the value \(\Theta_{V}[x]=1\) or \(0\), respectively. Optimizing this diagnostic routine for a given geometry can lead to a significant reduction of numerical costs. Details of this optimization for the geometries considered below will be given elsewhere [10].
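To illustrate the structure of Eq. (8), the following Python sketch estimates the interaction energy per unit area for the textbook case of two parallel plates at \(z=0\) and \(z=a\). For this special geometry we have carried out the propertime and center-of-mass integrals of Eq. (8) analytically ourselves (for \(m=0\)), which leaves only the worldline average of the fourth power of the loop's \(z\)-extent; this reduction, together with all names and parameter values in the code, is our own simplification for illustration and not the general-purpose algorithm described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def z_extents(n_loops, n_points):
    """z-extents of discretized unit loops y(t), t in [0,1], y(0)=y(1),
    distributed with the velocity weight exp(-(1/4) * integral of ydot^2);
    per coordinate such a loop is sqrt(2) times a standard Brownian bridge."""
    dW = rng.normal(0.0, np.sqrt(1.0 / n_points), size=(n_loops, n_points))
    W = np.cumsum(dW, axis=1)
    t = np.arange(1, n_points + 1) / n_points
    y = np.sqrt(2.0) * (W - t * W[:, -1:])      # pin the loop: y(1) = y(0) = 0
    return y.max(axis=1) - y.min(axis=1)

a = 1.0                                          # plate separation
e = np.concatenate([z_extents(2000, 2000) for _ in range(10)])
# Parallel-plate reduction of Eq. (8) for m = 0:  E/A = -<e^4>/(192 pi^2 a^3),
# where e is the z-extent of a unit loop (our analytic simplification).
E_per_A = -np.mean(e**4) / (192.0 * np.pi**2 * a**3)
print("worldline estimate:", E_per_A)
print("exact Dirichlet result:", -np.pi**2 / (1440.0 * a**3))
```

The printed numbers agree up to Monte Carlo noise and a discretization bias that shrinks as the number of points per loop is increased.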
## 3 Application to Casimir geometries ### Sphere above plate The geometry of a sphere above a plate is the most relevant configuration as far as recent and current experiments are concerned [11]. Therefore, also worldline numerics has first been applied to this case [2]. Here we extend these studies, arriving at significantly improved results with much smaller error bars and for a wider range of parameters. In the following, we exclusively discuss the massless case, \\(m=0\\). It is interesting to compare our results to the proximity force approximation (PFA) [12] which is the standard tool for estimating the effects of departure from planar geometry for Casimir effects. In this approach, the curved surfaces are viewed as a superposition of infinitesimal parallel plates, and the interaction energy is then obtained by \\[E_{\\rm PFA}=\\int_{\\Sigma_{\\rm PFA}}E_{\\rm PP}(d)\\,\\rmd\\sigma. \\tag{9}\\] Here, \\(\\Sigma_{\\rm PFA}\\) denotes a \"suitable\" auxiliary surface in between the Casimir surfaces \\(\\Sigma_{1}\\) and \\(\\Sigma_{2}\\), and \\(d\\sigma\\) is the corresponding surface element of \\(\\Sigma_{\\rm PFA}\\). The distance \\(d\\) between two points on \\(\\Sigma_{1}\\) and \\(\\Sigma_{2}\\) has to be measured along the normal to \\(\\Sigma_{\\rm PFA}\\). Obviously, the definition of \\(E_{\\rm PFA}\\) is ambiguous, owing to possible different choices of \\(\\Sigma_{\\rm PFA}\\). The two extreme cases are \\(\\Sigma_{\\rm PFA}=\\Sigma_{1}\\) or \\(\\Sigma_{\\rm PFA}=\\Sigma_{2}\\). The difference in \\(E_{\\rm PFA}\\) for these two cases is considered to represent a rough error estimate of the PFA. In the above formula, \\(E_{\\rm PP}\\) denotes the classic parallel-plate result for the energy per unit area \\(A\\)[13], \\[\\frac{E_{\\rm PP}(a)}{A}=-c_{\\rm PP}\\frac{\\pi^{2}}{1440}\\frac{1}{a^{3}}, \\tag{10}\\] with \\(a\\) denoting the plate separations, and \\(c_{\\rm PP}=2\\) for an electromagnetic (EM) field or a complex scalar, and \\(c_{\\rm PP}=1\\) for the present case of a real scalar field fluctuation. For the configuration of a sphere above a plate, we can choose \\(\\Sigma_{\\rm PFA}\\) equal to the plate (plate-based PFA), or equal to the sphere (sphere-based PFA) as the extreme cases. The corresponding results of the integral of (9) can be given in closed form, see, e.g., [6]. In the limit of small distances \\(a\\) compared to the sphere radius \\(R\\), \\(a/R\\ll 1\\), both PFA's agree, \\[E_{\\rm PFA}(a/R\\ll 1)=-c_{\\rm PP}\\frac{\\pi^{3}}{1440}\\frac{R}{a^{2}}. \\tag{11}\\] It is useful to display the resulting Casimir energies normalized to this zeroth-order small-separation PFA limit as it is done in figure 1. The dashed and dot-dashed lines depict the plate-based and sphere-based PFA, respectively. For larger separations, both cases predict a decrease of the Casimir energy relative to the zeroth-order PFA in (11). Our numerical worldline estimate confirms the zeroth-order PFA in the limit of small \\(a/R\\). But in contrast to the PFA curves, worldline numerics predicts a relative increase of the Casimir energy in comparison with (11) for larger separations/smaller spheres. We conclude that the PFA should not at all be trusted beyond the zeroth order: the first-order correction does not even have the correct sign. As a most conservative estimate, we observe that the PFA deviates from our result by at least \\(1\\%\\) for \\(a/R>0.01\\), which confirms and strengthens the result of [2]. 
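For reference, Eq. (9) is easy to evaluate numerically for the plate-based prescription. The short Python sketch below integrates \(E_{\rm PP}(d)\) of Eq. (10) over the part of the plate lying directly beneath the sphere, with \(d\) measured along the plate normal; this is our own reading of the plate-based PFA (closed forms can be found in [6]), and it reproduces the behavior of the dashed curve: the ratio to the zeroth-order formula (11) tends to one for \(a/R\to 0\) and decreases for larger separations.

```python
import numpy as np
from scipy.integrate import quad

def E_pp_per_area(d, c_pp=1.0):
    """Parallel-plate energy per unit area, Eq. (10); c_pp = 1 for the Dirichlet scalar."""
    return -c_pp * np.pi**2 / (1440.0 * d**3)

def E_pfa_plate_based(a, R):
    """Plate-based PFA, Eq. (9): Sigma_PFA is the plate; d(r) is the normal
    distance from a plate point at radius r to the sphere of radius R whose
    closest point lies a distance a above the plate."""
    integrand = lambda r: 2.0 * np.pi * r * E_pp_per_area(a + R - np.sqrt(R**2 - r**2))
    return quad(integrand, 0.0, R, limit=200)[0]

def E_pfa_zeroth(a, R):
    """Zeroth-order small-separation limit, Eq. (11)."""
    return -np.pi**3 * R / (1440.0 * a**2)

R = 1.0
for a in (0.001, 0.01, 0.1, 1.0):
    print(f"a/R = {a/R}: E_PFA(plate-based)/E_PFA(zeroth order) = "
          f"{E_pfa_plate_based(a, R) / E_pfa_zeroth(a, R):.4f}")
```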
A more detailed investigation of the validity bounds of the PFA will be given elsewhere [10]. It is interesting to observe that the zeroth-order PFA (11) still seems to be a reasonable estimate up to \(a/R\simeq 0.1\), indicating that the true curvature effects compensate for the higher-order PFA corrections. Finally, we note that our results agree with the optical approximation [6] for \(a/R\lesssim 0.1\), confirming the absence of diffractive effects in this regime, which are neglected by the optical approximation. For even larger separations \(a\), we observe a monotonic increase of the Casimir energy relative to (11). In this regime, our results agree quantitatively with those obtained from the "KKR" multi-scattering map method presented by A. Wirzba at this QFEXT05 workshop [14]. Most importantly, we do not observe a Casimir-Polder law for large \(a/R\), which would manifest itself in an \((a/R)^{-2}\) decrease in figure 1 at the large-\((a/R)\) side. Since a Casimir-Polder law is expected for the electromagnetic case, our results for the Dirichlet scalar provide clear evidence for the fact that the relation between Casimir forces for the EM field and for the Dirichlet scalar is strongly geometry dependent. In fact, the latest results of Ref. [14] include an analytic proof that the energy ratio of Fig. 1 approaches a constant, \(180/\pi^{4}\simeq 1.848\), for large \(a/R\) for the Dirichlet scalar. The analysis of the sphere-plate configuration for the EM field therefore still remains an open problem. Note that this does not affect our conclusions about the PFA, since also the PFA does not treat the EM case or the Dirichlet scalar in a different manner.

Figure 1: Casimir energy for the sphere-plate configuration normalized to the zeroth-order PFA formula (11): the dashed and dot-dashed lines depict the plate-based and the sphere-based PFA estimates, respectively. The circle symbols display our worldline numerical result. The deviations from the PFA estimates characterize the relevance of Casimir curvature effects. Also shown is the result from the optical approximation [6], which, within its validity limits \(a/R\lesssim 0.1\), agrees well with our result. For larger \(a/R\), we find satisfactory agreement with the "KKR" multi-scattering map method presented at this workshop [14].

### Perpendicular plates A particularly inspiring geometry is given by a variant of the classic parallel-plate case: a semi-infinite plate perpendicular to an infinite plate, such that the edge of the semi-infinite plate has a minimal distance \(a\) to the infinite plate, see figure 2 (left panel). Whereas Casimir's parallel-plate case has only one nontrivial direction (the one normal to the plates), this perpendicular-plates case has two nontrivial directions but still only one dimensionful scale \(a\). This fixes the scale dependence of the energy per unit length unambiguously, \[\frac{E_{\perp}(a)}{L_{\rm T}}=-\gamma_{\perp}\frac{\pi^{2}}{1440}\frac{1}{a^{2}}, \tag{12}\] where \(L_{\rm T}\) denotes the extent of the system along the remaining trivial transversal direction. The unknown coefficient \(\gamma_{\perp}\) results from the effect of quantum fluctuations in this geometry and will be determined by worldline numerics. Let us first note that the PFA does not appear to be useful for the perpendicular-plates case, because the surfaces cannot reasonably be subdivided into infinitesimal surface elements facing each other from plate to plate.
For instance, choosing either of the plates as the integration surface in (9), the PFA would give a zero result. However, in the worldline picture, it is immediately clear that the interaction energy is nonzero, because there are many worldlines which intersect both plates, as sketched in figure 2 (left panel). As direct evidence, we plot the negative effective action density \(\mathcal{L}\) (effective Lagrangian) in figure 2 (right panel); the effective action is obtained from \(\Gamma=\int\mathrm{d}^{4}x\mathcal{L}\). Brighter areas denote a higher density of the centers of mass of those worldlines which intersect both plates. Integrating over the effective action density, we obtain the universal coefficient \[\gamma_{\perp}=0.87511\pm 0.00326, \tag{13}\] using \(n_{L}=40000\) worldlines with \(N=200000\) ppl generated by the _v loop_ algorithm [2].

Figure 2: Left panel: sketch of the perpendicular-plates configuration with (an artist's view of) a typical worldline that intersects both plates. Right panel: density plot of the effective action density \(\mathcal{L}\) for the perpendicular-plates case; the plot shows \(\ln(2(4\pi a^{2})^{2}|\mathcal{L}|)\). The positions of the perpendicular plates are indicated by solid lines for illustration.

## 4 Conclusions We have presented new results for interaction Casimir energies, giving rise to Casimir forces between rigid bodies, induced by a fluctuating real scalar field that obeys Dirichlet boundary conditions. We have used worldline numerics as a universal tool for dealing with quantum fluctuations in inhomogeneous backgrounds. For the experimentally relevant sphere-plate configuration, we have performed extensive numerical studies, confirming earlier findings [2] with a significantly higher precision and narrowing the validity bounds of the proximity force approximation even further. Moreover, our results for small spheres for the Dirichlet scalar show no sign of a Casimir-Polder law, as would be expected for the EM field. This provides clear evidence for a different role of Casimir curvature effects for these two different field theories, leaving the sphere-plate configuration with a fluctuating EM field as a pressing open problem. Furthermore, we have investigated a new geometry of two perpendicular plates which has so far been inaccessible to other approximation techniques. The configuration is representative of a whole new class of Casimir systems involving sharp edges, where diffractive portions of the fluctuating field will play a major role. It is a pleasure to thank Emilio Elizalde and his team for the organization of this workshop and for creating such a stimulating atmosphere. H.G. acknowledges useful discussions with G.V. Dunne, T. Emig, A. Scardicchio, O. Schroder, A. Wirzba, and H. Weigel. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under contract Gi 328/1-3 (Emmy-Noether program) and Gi 328/3-2. ## References * [1] H. Gies and K. Langfeld, Nucl. Phys. B **613**, 353 (2001); Int. J. Mod. Phys. A **17**, 966 (2002). * [2] H. Gies, K. Langfeld and L. Moyaerts, JHEP **0306**, 018 (2003); arXiv:hep-th/0311168. * [3] For a review, see C. Schubert, Phys. Rept. **355**, 73 (2001). * [4] M. Schaden and L. Spruch, Phys. Rev. A **58**, 935 (1998); Phys. Rev. Lett. **84**, 459 (2000). * [5] R. Golestanian and M. Kardar, Phys. Rev. A **58**, 1713 (1998); T. Emig, A. Hanke and M. Kardar, Phys. Rev. Lett. **87** (2001) 260402; T. Emig and R. Buscher, Nucl. Phys. B **696**, 468 (2004).
* [6] A. Scardicchio and R. L. Jaffe, Nucl. Phys. B **704**, 552 (2005); Phys. Rev. Lett. **92**, 070402 (2004). * [7] M. Bordag, D. Hennig and D. Robaschik, J. Phys. A **25**, 4483 (1992). * [8] N. Graham, R. L. Jaffe, V. Khemani, M. Quandt, M. Scandurra and H. Weigel, Nucl. Phys. B **645**, 49 (2002). * [9] H. Gies, J. Sanchez-Guillen and R. A. Vazquez, JHEP **0508**, 067 (2005). * [10] H. Gies and K. Klingmuller, arXiv:quant-ph/0601094. * [11] S. K. Lamoreaux, Phys. Rev. Lett. **78**, 5 (1997); U. Mohideen and A. Roy, Phys. Rev. Lett. **81**, 4549 (1998); H. B. Chan, V. A. Aksyuk, R. N. Kleiman, D. J. Bishop and F. Capasso, Science 291, 1941 (2001); R.S. Decca, D. Lopez, E. Fishbach, D.E. Krause, Phys. Rev. Lett., 91, (2003), 050402. * [12] B.V. Derjaguin, I.I. Abrikosova, E.M. Lifshitz, Q.Rev. **10**, 295 (1956); J. Blocki, J. Randrup, W.J. Swiatecki, C.F. Tsang, Ann. Phys. (N.Y.) **105**, 427 (1977). * [13] H. B. Casimir, Kon. Ned. Akad. Wetensch. Proc. **51**, 793 (1948). * [14] A. Wirzba, A. Bulgac and P. Magierski, Proceedings of the workshop QFEXT05 (2005), arXiv:quant-ph/0511057; A. Bulgac, P. Magierski and A. Wirzba, Phys. Rev. D 73, 025007 (2006).
We present new results for Casimir forces between rigid bodies which impose Dirichlet boundary conditions on a fluctuating scalar field. As a universal computational tool, we employ worldline numerics which builds on a combination of the string-inspired worldline approach with Monte-Carlo techniques. Worldline numerics is not only particularly powerful for inhomogeneous background configurations such as involved Casimir geometries, it also provides for an intuitive picture of quantum-fluctuation-induced phenomena. Results for the Casimir geometries of a sphere above a plate and a new perpendicular-plates configuration are presented.
Write a summary of the passage below.
arxiv-format/0511163v1.md
# Cosmological model with viscosity media (dark fluid) described by an effective equation of state Jie Ren [email protected] Department of Physics, Nankai University, Tianjin 300071, China Xin-He Meng [email protected] Department of Physics, Nankai University, Tianjin 300071, China CCAST (World Lab), P.O.Box 8730, Beijing 100080, China November 3, 2021 ## I Introduction The cosmological observations have provided increasing evidence that our universe is undergoing a late-time accelerated cosmic expansion [1]. In order to explain this accelerated expansion, cosmologists introduce a new fluid with a sufficiently negative pressure, called dark energy. According to the observational evidence, especially from the Type Ia Supernovae [2] and WMAP satellite missions [4], we live in a favored spatially flat universe consisting approximately of \(30\%\) dark matter and \(70\%\) dark energy. The simplest candidate for dark energy is the cosmological constant, but it suffers from a serious fine-tuning problem. Recently, a great variety of models have been proposed to describe the universe with dark energy, such as * Scalar fields: quintessence [5] and phantom [6], with model potentials ranging from power laws to exponentials and combinations of both. * Exotic equations of state: the Chaplygin gas [7], the generalized Chaplygin gas [8], a linear equation of state [9], and the Van der Waals equation of state [10]. * Modified gravity: the DGP model [12], Cardassian expansion [11], \(1/R\), \(R^{2}\), \(\ln R\) term corrections, etc. Maybe the mysterious dark energy does not exist, but we lack a full understanding of gravitational physics [13; 14; 15; 16; 17; 18; 19; 20]. * Viscosity: bulk viscosity in isotropic space [22], bulk and shear viscosity in anisotropic space. The perfect fluid is only an approximation of the cosmic medium. The observations also indicate that the universe medium is not a perfect fluid [21] and that viscosity enters the evolution of the universe [23; 24; 25]. We list only a part of the papers on this topic, as the relevant literature is too extensive. According to Ref. [26], it is possible to put some order in this somewhat chaotic situation by considering a particular feature of the dark energy, namely its equation of state (hereafter EOS); it is tempting to investigate the properties of cosmological models starting from the EOS directly and to test whether a given EOS is able to give rise to cosmological models reproducing the available dataset. The dark fluid [27] and the parameterized EOS [28] are studied in some recent papers. We hope the situation will be improved with the new generation of more precise observational data. The observational constraints indicate that the current EOS parameter \(w=p/\rho\) is around \(-1\) [2; 3], quite probably below \(-1\), which is called the phantom region and is even more mysterious for the cosmological evolution. In the standard model of cosmology, if \(w<-1\), the universe possesses a finite-time future singularity called the Big Rip [29; 30]. Several ideas have been proposed to prevent the Big Rip singularity, such as introducing quantum effect terms in the action [32]. Based on the motivations of time-dependent viscosity and modified gravity, a Hubble parameter dependent EOS is considered in Refs. [26; 33], in which the most general "inhomogeneous" EOS is given; however, analytical solutions of the scale factor are given there only for some less general cases. In this paper, we investigate a general
In this paper, we investigate a generaleffective equation of state \\[p=(\\gamma-1)\\rho+p_{0}+w_{H}H+w_{H2}H^{2}+w_{dH}\\dot{H},\\] and we show the following time-dependant bulk viscosity \\[\\zeta=\\zeta_{0}+\\zeta_{1}\\frac{\\dot{a}}{a}+\\zeta_{2}\\frac{\\ddot{a}}{\\dot{a}}\\] is equivalent to the form derived by using the above effective EOS. An integrable equation for the scale factor is obtained and three possible interpretations of this equation are proposed. The Hubble parameter dependent term in this EOS can drive the phantom barrier being crossed in an easier way [23; 33; 34]. Different choices of the parameters may lead to several fates to the cosmological evolution [34]. This paper is organized as follows: In the next section we describe our model and give the exact solution of the scale factor. In Sec. III we consider the sound speed and the EOS parameter in this model for unified dark energy, and give some numerical solutions of a more general equation for the scale factor. In Sec. IV we propose three interpretations for our model. In Sec. V we confront the effective viscosity model proposed in the previous Sec. with the SNe Ia Golden data. Finally, we present our conclusions in the last section. The appendix presents some detail comments on the cosmological constant involved. ## II Model and calculations We consider the Friedmann-Roberson-Walker metric in the flat space geometry (\\(k=0\\)) as favored by WMAP cosmic microwave background data on power spectrum \\[ds^{2}=-dt^{2}+a(t)^{2}(dr^{2}+r^{2}d\\Omega^{2}), \\tag{1}\\] and assume that the cosmic fluid possesses a bulk viscosity \\(\\zeta\\). The energy-momentum tensor can be written as \\[T_{\\mu\ u}=\\rho U_{\\mu}U_{\ u}+(p-\\zeta\\theta)H_{\\mu\ u}, \\tag{2}\\] where in comoving coordinates \\(U^{\\mu}=(1,0)\\), \\(\\theta=U^{\\mu}_{;\\mu}=3\\dot{a}/a\\), and \\(H_{\\mu\ u}=g_{\\mu\ u}+U_{\\mu}U_{\ u}\\)[35]. By defining the effective pressure as \\(\\tilde{p}=p-\\zeta\\theta\\) and from the Einstein equation \\(R_{\\mu\ u}-\\frac{1}{2}g_{\\mu\ u}R=\\kappa^{2}T_{\\mu\ u}\\) with \\(\\kappa^{2}=8\\pi G\\), we obtain the Friedmann equations \\[\\frac{\\dot{a}^{2}}{a^{2}} = \\frac{\\kappa^{2}}{3}\\rho, \\tag{3a}\\] \\[\\frac{\\ddot{a}}{a} = -\\frac{\\kappa^{2}}{6}(\\rho+3\\tilde{p}). \\tag{3b}\\] The conservation equation for energy, \\(T^{0\ u}_{;\ u}\\), yields \\[\\dot{\\rho}+(\\rho+\\tilde{p})\\theta=0. \\tag{4}\\] To describe completely the global behaviors for our Universe evolution an additional relation, a reasonable EOS is required. A generally parameterized EOS can be written as \\[p=(\\gamma-1)\\rho+f(\\rho;\\alpha_{i})+g(H,\\dot{H};\\alpha_{i}) \\tag{5}\\] where \\(\\alpha_{i}\\) are parameters that are expected that when \\(\\alpha_{i}\\to 0\\), the equation of state approaches to that of the perfect fluid, i.e. \\(p=w\\rho\\), where the factorized parameter \\(w=\\gamma-1\\) with \\(\\gamma\\) being another parameter. We consider the following EOS, an explicit form as \\[p=(\\gamma-1)\\rho+p_{0}+w_{H}H+w_{H2}H^{2}+w_{dH}\\dot{H}, \\tag{6}\\] where \\(p_{0}\\), \\(w_{H}\\), \\(w_{H2}\\), \\(w_{dH}\\) are free parameters. In this and the next section, we assume the universe media is a single fluid described by this EOS. Compared with the bulk viscosity form as described in Ref. [34], the following one is more general. 
We show that this time-dependent bulk viscosity \\[\\zeta=\\zeta_{0}+\\zeta_{1}\\frac{\\dot{a}}{a}+\\zeta_{2}\\frac{\\ddot{a}}{\\dot{a}} \\tag{7}\\] is effectively equivalent to the form derived by using Eq. (6). The reason is \\[\\tilde{p} = p-\\zeta\\theta \\tag{8}\\] \\[= p-3\\zeta_{0}\\frac{\\dot{a}}{a}-3\\zeta_{1}\\frac{\\dot{a}^{2}}{a^{2} }-3\\zeta_{2}\\frac{\\ddot{a}}{a}\\] \\[= p-3\\zeta_{0}\\frac{\\dot{a}}{a}-3(\\zeta_{1}+\\zeta_{2})\\frac{\\dot{a }^{2}}{a^{2}}-3\\zeta_{2}\\left(\\frac{\\ddot{a}}{a}-\\frac{\\dot{a}^{2}}{a^{2}}\\right)\\] \\[= p-3\\zeta_{0}H-3(\\zeta_{1}+\\zeta_{2})H^{2}-3\\zeta_{2}\\dot{H},\\] we can see that the corresponding coefficients are \\[w_{H} = -3\\zeta_{0}, \\tag{9a}\\] \\[w_{H2} = -3(\\zeta_{1}+\\zeta_{2}),\\] (9b) \\[w_{dH} = -3\\zeta_{2}. \\tag{9c}\\] The motivation of considering this bulk viscosity is that by fluid mechanics we know the transport/viscosity phenomenon is related to the \"velocity\" \\(\\dot{a}\\), which is related to the Hubble parameter, and the acceleration. Since we do not know the exact form of viscosity, here we consider a parameterized bulk viscosity, which is a linear combination of three terms: the first term is a constant \\(\\zeta_{0}\\), the second corresponds to the Hubble parameter, and the third can be proportional to \\(\\ddot{a}/aH\\). From the above corresponding coefficients, we can see that the inhomogeneous EOS may be interpreted simply as time-dependent viscosity case. Additionally, the EOS of Eq. (6) can also be interpreted as the case of a variable cosmological constant model, in which the \\(\\Lambda\\)-term is written as \\[\\Lambda=\\Lambda_{0}+\\Lambda_{H}H+\\Lambda_{H2}H^{2}+\\Lambda_{dH}\\dot{H}. \\tag{10}\\] Using this EOS to eliminate \\(\\rho\\) and \\(p\\), we obtain the equation which determines the scale factor \\(a(t)\\) evolution \\[\\frac{\\ddot{a}}{a}=\\frac{-(3\\gamma-2)/2-(\\kappa^{2}/2)w_{H2}+(\\kappa^{2}/2)w_{dH}}{ 1+(\\kappa^{2}/2)w_{dH}}\\frac{\\dot{a}^{2}}{a^{2}}+\\frac{-(\\kappa^{2}/2)w_{H}}{1+( \\kappa^{2}/2)w_{dH}}\\frac{\\dot{a}}{a}+\\frac{-(\\kappa^{2}/2)p_{0}}{1+(\\kappa^{2 }/2)w_{dH}}. \\tag{11}\\] To make this equation more comparable to that of the perfect fluid, we define \\(\\tilde{\\gamma}\\) given by \\[\\frac{-(3\\gamma-2)/2-(\\kappa^{2}/2)w_{H2}+(\\kappa^{2}/2)w_{dH}}{1+(\\kappa^{2}/ 2)w_{dH}}=-\\frac{3\\tilde{\\gamma}-2}{2}. \\tag{12}\\] This equation gives \\[\\tilde{\\gamma}=\\frac{\\gamma+(\\kappa^{2}/3)w_{H2}}{1+(\\kappa^{2}/2)w_{dH}}. \\tag{13}\\] By defining \\[\\frac{1}{T_{1}} = \\frac{-(\\kappa^{2}/2)w_{H}}{1+(\\kappa^{2}/2)w_{dH}} \\tag{14}\\] \\[\\frac{1}{T_{2}^{2}} = \\frac{-(\\kappa^{2}/2)p_{0}}{1+(\\kappa^{2}/2)w_{dH}},\\] (15) \\[\\frac{1}{T^{2}} = \\frac{1}{T_{1}^{2}}+\\frac{6\\tilde{\\gamma}}{T_{2}^{2}}. \\tag{16}\\] and noting that dim[\\(T_{1}\\)]=dim[\\(T_{2}\\)]=[time], we can see that when \\(T_{2}\\rightarrow\\infty\\), \\(T=T_{1}\\); when \\(T_{1}\\rightarrow\\infty\\), \\(T=T_{2}\\sqrt{6\\gamma}\\). Now Eq. (11) becomes \\[\\frac{\\ddot{a}}{a}=-\\frac{3\\tilde{\\gamma}-2}{2}\\frac{\\dot{a}^{2}}{a^{2}}+\\frac {1}{T_{1}}\\frac{\\dot{a}}{a}+\\frac{1}{T_{2}^{2}}. \\tag{17}\\] The five parameters \\(\\gamma\\), \\(p_{0}\\), \\(w_{H}\\), \\(w_{H2}\\), and \\(w_{dH}\\) are condensed to three parameters \\(\\tilde{\\gamma}\\), \\(T_{1}\\), and \\(T_{2}\\) in the above equation. 
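As a quick cross-check of the algebra leading to Eq. (11), the following SymPy snippet (ours, purely illustrative) eliminates \(\rho\) and \(p\) using Eq. (3a), Eq. (3b) and the EOS (6), with the viscosity already absorbed into the effective pressure, and confirms that the resulting expression for \(\ddot{a}/a\) coincides with Eq. (11).

```python
import sympy as sp

kappa, gamma, p0, wH, wH2, wdH, H, Hdot = sp.symbols(
    'kappa gamma p_0 w_H w_H2 w_dH H Hdot', real=True)

rho = 3 * H**2 / kappa**2                                        # Eq. (3a)
p = (gamma - 1) * rho + p0 + wH * H + wH2 * H**2 + wdH * Hdot    # EOS (6)

# Eq. (3b) with addot/a = Hdot + H^2 and p~ = p (viscosity absorbed in the EOS)
eq = sp.Eq(Hdot + H**2, -kappa**2 / 6 * (rho + 3 * p))
addot_over_a = sp.solve(eq, Hdot)[0] + H**2

# Right-hand side of Eq. (11)
denom = 1 + kappa**2 * wdH / 2
claimed = ((-(3 * gamma - 2) / 2 - kappa**2 * wH2 / 2 + kappa**2 * wdH / 2) / denom * H**2
           + (-kappa**2 * wH / 2) / denom * H
           + (-kappa**2 * p0 / 2) / denom)

print(sp.simplify(addot_over_a - claimed))   # -> 0, confirming Eq. (11)
```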
With the initial conditions of \\(a(t_{0})=a_{0}\\) and \\(\\theta(t_{0})=\\theta_{0}\\), if \\(\\tilde{\\gamma}\ eq 0\\), the solution can be obtained as \\[a(t)=a_{0}\\left\\{\\frac{1}{2}\\left(1+\\tilde{\\gamma}\\theta_{0}T-\\frac{T}{T_{1}} \\right)\\exp\\left[\\frac{t-t_{0}}{2}\\left(\\frac{1}{T}+\\frac{1}{T_{1}}\\right) \\right]+\\frac{1}{2}\\left(1-\\tilde{\\gamma}\\theta_{0}T+\\frac{T}{T_{1}}\\right) \\exp\\left[-\\frac{t-t_{0}}{2}\\left(\\frac{1}{T}-\\frac{1}{T_{1}}\\right)\\right] \\right\\}^{2/3\\tilde{\\gamma}}, \\tag{18}\\] And we obtain directly \\[\\rho(t)=\\frac{3}{\\kappa^{2}}\\frac{\\dot{a}^{2}}{a^{2}}=\\frac{1}{3\\kappa^{2} \\tilde{\\gamma}^{2}}\\left[\\frac{(1+\\tilde{\\gamma}\\theta_{0}T-\\frac{T}{T_{1}})( \\frac{1}{T}+\\frac{1}{T_{1}})\\mbox{exp}(\\frac{t-t_{0}}{T})-(1-\\tilde{\\gamma} \\theta_{0}T+\\frac{T}{T_{1}})(\\frac{1}{T}-\\frac{1}{T_{1}})}{(1+\\tilde{\\gamma} \\theta_{0}T-\\frac{T}{T_{1}})\\mbox{exp}(\\frac{t-t_{0}}{T})+(1-\\tilde{\\gamma} \\theta_{0}T+\\frac{T}{T_{1}})}\\right]^{2}. \\tag{19}\\] The above solution is valid when \\(\\tilde{\\gamma}\ eq 0\\). For \\(\\tilde{\\gamma}=0\\), we need to take the limit case. When \\(\\tilde{\\gamma}\\to 0\\), the limit of the solution \\(a(t)\\) is got as \\[a(t)=a_{0}\\mbox{exp}\\left[\\left(\\frac{1}{3}\\theta_{0}T_{1}+\\frac{T_{1}^{2}}{T_ {2}^{2}}\\right)\\left(e^{(t-t_{0})/T_{1}}-1\\right)-\\frac{T_{1}(t-t_{0})}{T_{2}^ {2}}\\right]. \\tag{20}\\] And we obtain directly \\[\\rho(t)=\\frac{3}{\\kappa^{2}}\\left[\\frac{1}{3}\\theta_{0}e^{(t-t_{0})/T_{1}}+ \\frac{T_{1}}{T_{2}^{2}}\\left(e^{(t-t_{0})/T_{1}}-1\\right)\\right]. \\tag{21}\\] Note that the solution \\(a(t)\\) for \\(\\tilde{\\gamma}=0\\) has not possessed the future singularity, the so called Big Rip, in this case. ## III Sound speed and EOS parameter According to Ref. [2], the observational consequences are summarized as follows:* They provide the first conclusive evidence for cosmic deceleration that preceded the current epoch of cosmic acceleration. Using a simple model of the expansion history, the transition between the two epochs is constrained to be at \\(z=0.46\\pm 0.13\\). * For a flat universe with a cosmological constant, they measure \\(\\Omega_{M}=0.29\\pm^{0.05}_{0.03}\\) (equivalently, \\(\\Omega_{\\Lambda}=0.71\\)). * When combined with external flat-universe constraints including the cosmic microwave background and large-scale structure, they find \\(w=-1.02\\pm^{0.13}_{0.19}\\) (and \\(w<-0.76\\) at the 95% confidence level) for an assumed static equation of state of dark energy, \\(p=w\\rho\\). * The constraints are consistent with the static nature of and value of \\(w\\) expected for a cosmological constant (i.e., \\(w_{0}=-1.0\\), \\(dw/dz=0\\)), and are inconsistent with very rapid evolution of dark energy. On the basis of the above observational consequences, we suggest that \\(\\tilde{\\gamma}\\sim 0\\) and the parameter \\(T_{1}\\) be negative, if the dark fluid describes the unification of dark matter and dark energy. Because when \\(\\tilde{\\gamma}=0\\) and \\(T_{1}<0\\), we can obtain * The universe can accelerate after the epoch of deceleration. * The density approaches to a constant in the late times, which corresponds to the de Sitter universe, so there is no future singularity. * The EOS parameter \\(w\\) approaches to \\(-1\\) when the cosmic time \\(t\\) is sufficiently large. * The sound speed is a real number (see Ref. [31] for constraints of sound speed). 
Because of these features, we construct a model of dark fluid which can be seen as a unification of dark energy and dark matter. Using Eq. (3a), we obtain the relation between \\(p\\) and \\(\\rho\\) \\[p=(\\gamma-1)\\rho+p_{0}+\\frac{\\kappa}{\\sqrt{3}}w_{H}\\sqrt{\\rho}+\\frac{\\kappa^{2}}{3}w_{H2}\\rho-\\frac{\\kappa^{2}}{2}w_{dH}(p+\\rho), \\tag{22}\\] that is \\[(1+\\frac{\\kappa^{2}}{2}w_{dH})p=(\\gamma-1+\\frac{\\kappa^{2}}{3}w_{H2}-\\frac{\\kappa^{2}}{2}w_{dH})\\rho+\\frac{\\kappa}{\\sqrt{3}}w_{H}\\sqrt{\\rho}+p_{0}. \\tag{23}\\] So the EOS relating \\(p\\) and \\(\\rho\\) is \\[p=(\\tilde{\\gamma}-1)\\rho-\\frac{2}{\\sqrt{3}\\kappa T_{1}}\\sqrt{\\rho}-\\frac{2}{\\kappa^{2}T_{2}^{2}}, \\tag{24}\\] where \\(\\tilde{\\gamma}\\) is defined as before. Figs. 1 and 2 show that the parameters \\(\\gamma\\), \\(w_{H2}\\), and \\(w_{dH}\\) can drive \\(\\tilde{\\gamma}\\) to cross \\(-1\\). In the present paper, we set \\(\\kappa=1\\) and \\(\\theta_{0}=1\\) for simplicity in all the figures. The values of the other parameters are given in the legend and the caption of each figure. We especially consider the following choice of the parameters: \\(\\gamma=0\\), \\(T_{1}=-25\\), and \\(T_{2}=100\\). The square of the sound speed is \\[c_{s}^{2}=\\frac{\\partial p}{\\partial\\rho}=\\tilde{\\gamma}-1-\\frac{1}{\\sqrt{3}\\kappa T_{1}}\\frac{1}{\\sqrt{\\rho}}. \\tag{25}\\] When \\(\\tilde{\\gamma}=0\\), \\(T_{1}\\) should be negative if the sound speed is a real number. The \\(c_{s}^{2}\\)-\\(t\\) relation is shown in Fig. 3. We can see that the sound speed approaches a constant at late times. The EOS parameter is \\[w=\\frac{p}{\\rho}=\\tilde{\\gamma}-1-\\frac{2}{\\sqrt{3}\\kappa T_{1}}\\frac{1}{\\sqrt{\\rho}}-\\frac{2}{\\kappa^{2}T_{2}^{2}}\\frac{1}{\\rho} \\tag{26}\\] Fig. 4 shows the \\(w\\)-\\(t\\) relation; \\(w\\) also approaches a constant at late times. This is because the density \\(\\rho\\) approaches a constant after some time. Because we have already chosen \\(\\tilde{\\gamma}=0\\), there is no \\(w=-1\\) crossing. However, if \\(\\tilde{\\gamma}\\) is around zero, the crossing may easily occur. Since we chose the parameter \\(T_{1}\\) to be negative, from Eq. (19) we can see that the density approaches a constant \\[\\rho=-\\frac{3T_{1}}{\\kappa^{2}T_{2}^{2}} \\tag{27}\\] after a sufficiently large time, as in Fig. 5. In order to explain the observations, there should be at least one term on the right-hand side of Eq. (17) causing the cosmic expansion to accelerate and one forcing the expansion to decelerate. If we assume that the universe approaches the de Sitter space-time at late times, \\((+,-,+)\\), \\((-,+,+)\\), and \\((-,-,+)\\) are possible combinations of the signs of the three terms. It is interesting that Eq. (17) with the signs \\((+,-,+)\\) on the right-hand side may unify the early-time inflation, the middle-time deceleration and the late-time acceleration, which is discussed in the next section. Fig. 6 shows that the universe accelerates after an epoch of deceleration, and Fig. 7 shows the corresponding evolution of the scale factor. The case of a possible future singularity is considered in our previous paper [34]. In Refs. [32; 37; 38], it is demonstrated that quantum effects play the dominant role near/before a Big Rip, driving the universe out of a future singularity (or at least, moderating it).
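For concreteness, Eqs. (25) and (26) can be evaluated along a decreasing density history; the density values and parameter choices below are assumptions for illustration only.

```python
import numpy as np

# Sketch of Eqs. (25)-(26): c_s^2 and w as functions of rho for the choice
# gamma_tilde = 0, kappa = 1, with the illustrative values T1 = -25, T2 = 100.
kappa, gamma_t, T1, T2 = 1.0, 0.0, -25.0, 100.0

def cs2(rho):
    return gamma_t - 1.0 - 1.0 / (np.sqrt(3.0) * kappa * T1 * np.sqrt(rho))

def w(rho):
    return (gamma_t - 1.0
            - 2.0 / (np.sqrt(3.0) * kappa * T1 * np.sqrt(rho))
            - 2.0 / (kappa**2 * T2**2 * rho))

rho = np.array([0.3, 0.1, 0.03, 0.01])   # an assumed decreasing density history
print(np.round(cs2(rho), 4))             # both quantities tend to constants
print(np.round(w(rho), 4))               # once rho settles to its late-time value
```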
It is also interesting to study the entropy and dissipation [39; 40; 41], since this EOS may be interpreted as the time-dependent viscosity case. A more general EOS, such as the form \\[p_{X}=-\\rho_{X}-A\\rho_{X}^{\\alpha}-BH^{2\\beta}, \\tag{28}\\] of Ref. [26], gives more general dynamical equations, which can be written as \\[\\frac{\\ddot{a}}{a}=-\\frac{3\\tilde{\\gamma}-2}{2}\\frac{\\dot{a}^{2}}{a^{2}}+\\lambda\\left(\\frac{\\dot{a}}{a}\\right)^{m}+\\mu\\left(\\frac{\\dot{a}}{a}\\right)^{n}+\\nu. \\tag{29}\\] The coefficients corresponding to Eq. (28) are \\(\\tilde{\\gamma}=0\\), \\(\\lambda=A(\\kappa^{2}/2)(3/\\kappa^{2})^{\\alpha}\\), \\(m=2\\alpha\\), \\(\\mu=(\\kappa^{2}/2)B\\), \\(n=2\\beta\\), and \\(\\nu=0\\).
Figure 4: The relation between the EOS parameter \\(w=p/\\rho\\) and the cosmic time \\(t\\).
Figure 5: The relation between the density \\(\\rho\\) and the cosmic time. Note that the density approaches a constant, which is not zero.
Here we only consider a simpler case to illustrate the behavior of the scale factor evolution, \\[\\frac{\\ddot{a}}{a}=-\\frac{3\\tilde{\\gamma}-2}{2}\\frac{\\dot{a}^{2}}{a^{2}}+\\frac{1}{t_{c}}\\left(\\frac{\\dot{a}}{a}\\right)^{n}, \\tag{30}\\] where \\(t_{c}\\) is a parameter. Fig. 8 shows the evolution of the scale factor with different \\(n\\).
## IV Interpretations of the model
### Unified dark energy
Since we do not know the nature of either dark energy or dark matter, they may be regarded as two aspects of a single fluid. Based on the analysis in the above section, the EOS in our model can be regarded as that of a unified dark energy, since the expansion of the universe can accelerate after the epoch of deceleration. The model in the present paper can also be regarded as the \\(\\Lambda\\)CDM model with an additional term, or the \\(\\Lambda\\)CDM model with bulk viscosity. The parameter space is enriched in this model. The \\(\\Lambda\\)CDM model describes two mixed fluids, and their EOSs are \\(p=0\\) for dark matter and \\(p=-\\rho\\) for dark energy (cosmological constant). In our model, the case \\(\\tilde{\\gamma}=1\\) and \\(T_{1}\\rightarrow\\infty\\) corresponds to the \\(\\Lambda\\)CDM model. However, in the above section we study a special choice of the parameters, \\(\\tilde{\\gamma}=0\\), \\(T_{1}=-25\\), and \\(T_{2}=100\\), which is totally different from the \\(\\Lambda\\)CDM model. A qualitative analysis of Eq. (17) is easily obtained if we assume that \\(H\\) is always decreasing during the cosmic evolution. The three terms on the right-hand side of Eq. (17) are proportional to \\(H^{2}\\), \\(H^{1}\\), and \\(H^{0}\\), respectively. If we assume \\(a\\propto e^{2t}\\), then \\(H\\propto e^{t}\\), so the proportions of the three terms are \\(e^{2t}:e^{t}:1\\); if we assume \\(a\\propto t^{2/3}\\), then \\(H\\propto 1/t\\), so the proportions of the three terms are \\(t^{-2}:t^{-1}:1\\). At early times, the first term is dominant, which may lead to inflation if \\(\\tilde{\\gamma}\\sim 0\\). At intermediate times, the second term is dominant, which leads to deceleration if \\(T_{1}<0\\). At late times, such as the current epoch, the third term is dominant, which leads to acceleration like the de Sitter universe if \\(T_{2}\\) is a real number. We can also see the evolution of \\(\\dot{a}(t)\\) in Fig. 6 and \\(a(t)\\) in Fig. 7. In Eq. (17), the term \\(\\frac{1}{T_{1}}\\frac{\\dot{a}}{a}\\) describes the effective viscosity.
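A short numerical sketch of Eq. (30) follows; note that \\(\\tilde{\\gamma}=0\\), the value of \\(t_{c}\\) and the initial conditions are assumptions chosen so that the example stays well behaved, and they need not match the settings of Fig. 8.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of Eq. (30): scale-factor evolution with a power of H in the
# source term, for a few assumed values of n.
gamma_t, t_c = 0.0, -25.0

def make_rhs(n):
    def rhs(t, y):
        a, adot = y
        H = adot / a
        return [adot, a * (-(3 * gamma_t - 2) / 2.0 * H**2 + (H**n) / t_c)]
    return rhs

for n in (1, 2, 3):
    sol = solve_ivp(make_rhs(n), (0.0, 60.0), [1.0, 1.0 / 3.0], rtol=1e-8)
    print(n, round(float(sol.y[0][-1]), 3))   # a at the end of the integration
```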
Since we do not know much about the nature of dark energy and the bulk viscosity in the universe, the bulk viscosity can be regarded as effective, i.e., as a friction-like contribution.
Figure 6: The relation between the expansion velocity \\(v=\\dot{a}\\) and the cosmic time \\(t\\).
Figure 7: The relation between the scale factor \\(a\\) and the cosmic time \\(t\\).
Figure 8: The evolution of the scale factor when \\(\\gamma=-1\\) and \\(t_{c}=-25\\).
In order to study separately the effect of the three terms on the right-hand side of Eq. (17), if the first or the second term is dominant, respectively, we have the evolution relations \\[\\frac{\\ddot{a}}{a}=-\\frac{3\\gamma-2}{2}\\frac{\\dot{a}^{2}}{a^{2}} \\Rightarrow a\\frac{dH_{\\gamma}}{da}=-\\frac{3\\gamma}{2}H_{\\gamma}, \\tag{31a}\\] \\[\\frac{\\ddot{a}}{a}=\\frac{1}{T_{1}}\\frac{\\dot{a}}{a} \\Rightarrow a\\frac{dH_{v}}{da}=-H_{v}+\\frac{1}{T_{1}}. \\tag{31b}\\] The corresponding solutions are different, \\[H_{\\gamma}^{2}(z) = H_{0}^{2}(1+z)^{3\\gamma}, \\tag{32a}\\] \\[H_{v}(z) = \\left(H_{0}-\\frac{1}{T_{1}}\\right)(z+1)+\\frac{1}{T_{1}}. \\tag{32b}\\]
### Mixture of dark energy and dark matter
Another interpretation is that the EOS describes the dark energy, which is mixed with the dark matter in the cosmic medium. We should then consider the mixture of dark energy and dark matter, which requires fine-tuning of the parameters. In the \\(\\Lambda\\)CDM model of cosmology, we have \\[\\left(\\frac{\\dot{a}}{a}\\right)^{2}=H_{0}^{2}[\\Omega_{m}(1+z)^{3}+\\Omega_{\\Lambda}], \\tag{33}\\] where \\(H_{0}\\) is the current value of the Hubble parameter, \\(z=a_{0}/a-1\\) is the redshift, and \\(\\Omega_{m}\\) and \\(\\Omega_{\\Lambda}\\) are the cosmological density parameters of matter and the \\(\\Lambda\\)-term, respectively. In our case, \\[\\left(\\frac{\\dot{a}}{a}\\right)^{2}=H_{0}^{2}\\Omega_{m}(1+z)^{3}+(1-\\Omega_{m})H_{d}(z)^{2}, \\tag{34}\\] where \\(H_{d}(z)\\) is the solution of the equation \\[aH\\frac{dH}{da}=-\\frac{3\\tilde{\\gamma}}{2}H^{2}+\\frac{1}{T_{1}}H+\\frac{1}{T_{2}^{2}}. \\tag{35}\\] The solution of the above equation with the initial condition \\(H(a_{0})=H_{0}\\) is \\[\\left|\\frac{(H-\\frac{1}{3\\tilde{\\gamma}T_{1}})^{2}-\\frac{1}{9\\tilde{\\gamma}^{2}T_{1}^{2}}-\\frac{2}{3\\tilde{\\gamma}^{2}T_{2}^{2}}}{(H_{0}-\\frac{1}{3\\tilde{\\gamma}T_{1}})^{2}-\\frac{1}{9\\tilde{\\gamma}^{2}T_{1}^{2}}-\\frac{2}{3\\tilde{\\gamma}^{2}T_{2}^{2}}}\\right|=(1+z)^{3\\gamma}. \\tag{36}\\] Here we consider a simpler case, \\(T_{2}\\rightarrow\\infty\\); then \\[a\\frac{dH}{da}=-\\frac{3\\tilde{\\gamma}}{2}H+\\frac{1}{T_{1}}. \\tag{37}\\] The solution is \\[H(a)=\\left(H_{0}-\\frac{2}{3\\tilde{\\gamma}T_{1}}\\right)\\left(\\frac{a}{a_{0}}\\right)^{-3\\tilde{\\gamma}/2}+\\frac{2}{3\\tilde{\\gamma}T_{1}}, \\tag{38}\\] so \\(H(z)\\) for the dark energy is \\[H_{d}(z)=\\left(H_{0}-\\frac{2}{3\\tilde{\\gamma}T_{1}}\\right)(z+1)^{3\\tilde{\\gamma}/2}+\\frac{2}{3\\tilde{\\gamma}T_{1}}. \\tag{39}\\] It is interesting that the above relation can be rewritten as \\[H_{d}(z)=H_{0}[\\tilde{\\Omega}(1+z)^{3\\tilde{\\gamma}/2}+(1-\\tilde{\\Omega})], \\tag{40}\\] where \\[\\tilde{\\Omega}=1-\\frac{2}{3\\tilde{\\gamma}T_{1}H_{0}}. \\tag{41}\\] Note that Eq. (40) is valid if \\(\\tilde{\\gamma}\\neq 0\\), and for the case \\(\\tilde{\\gamma}=0\\), directly solving Eq. (37) gives \\[H_{d}(z)=H_{0}\\left[1-\\frac{1}{T_{1}H_{0}}{\\rm ln}(z+1)\\right]. \\tag{42}\\] We assume that the universe's medium contains two fluids. One is described by Eq.
(17) with \\(T_{2}\\rightarrow\\infty\\), and the other is described by a pure cosmological constant \\(\\Lambda\\). The former may be regarded as the dark matter with effective viscosity. The \\(H\\)-\\(z\\) relation is thus \\[H^{2}=H_{0}^{2}\\{\\Omega_{m}[\\tilde{\\Omega}(1+z)^{3\\tilde{\\gamma}/2}+(1-\\tilde{\\Omega})]^{2}+(1-\\Omega_{m})\\}, \\tag{43}\\] which can be regarded as the generalized relation of mixed dark energy and dark matter. We emphasize that solving Eq. (35) with \\(T_{2}\\rightarrow\\infty\\) and writing \\(H^{2}=\\Omega_{m}H_{d}^{2}+(1-\\Omega_{m})H_{0}^{2}\\) is not equivalent to directly solving Eq. (35).
## V Data fitting of the effective viscosity model
The observations of the SNe Ia have provided the first direct evidence of the accelerating expansion of our current universe. As a basic requirement, any model attempting to explain the acceleration mechanism should be consistent with the results implied by the SNe Ia data. The method of the data fitting is illustrated in Ref. [36]. The observations of supernovae essentially measure the apparent magnitude \\(m\\), which is related to the luminosity distance \\(d_{L}\\) by \\[m(z)={\\cal M}+5{\\rm log}_{10}D_{L}(z), \\tag{47}\\] where \\(D_{L}(z)\\equiv(H_{0}/c)d_{L}(z)\\) is the dimensionless luminosity distance and \\[d_{L}(z)=(1+z)d_{M}(z), \\tag{48}\\] where \\(d_{M}(z)\\) is the comoving distance given by \\[d_{M}(z)=c\\int_{0}^{z}\\frac{1}{H(z^{\\prime})}dz^{\\prime}. \\tag{49}\\] Also, \\[{\\cal M}=M+5{\\rm log}_{10}\\left(\\frac{c/H_{0}}{1{\\rm Mpc}}\\right)+25, \\tag{50}\\] where \\(M\\) is the absolute magnitude, which is believed to be constant for all supernovae of type Ia. We use the gold sample of 157 supernovae compiled by Riess _et al._ [2] to fit our model. The data points in this sample are given in terms of the distance modulus \\[\\mu_{obs}(z)\\equiv m(z)-M_{obs}(z). \\tag{51}\\] The \\(\\chi^{2}\\) is calculated from \\[\\chi^{2}=\\sum_{i=1}^{n}\\left[\\frac{\\mu_{obs}(z_{i})-{\\cal M}^{\\prime}-5{\\rm log}_{10}D_{Lth}(z_{i};c_{\\alpha})}{\\sigma_{obs}(z_{i})}\\right]^{2}, \\tag{52}\\] where \\({\\cal M}^{\\prime}={\\cal M}-M_{obs}\\) is a free parameter and \\(D_{Lth}(z_{i};c_{\\alpha})\\) is the theoretical prediction for the dimensionless luminosity distance of a supernova at redshift \\(z_{i}\\), for a given model with parameters \\(c_{\\alpha}\\). We consider the generalized \\(\\Lambda\\)CDM model referred to in the previous section and perform a best-fit analysis by minimizing the \\(\\chi^{2}\\) with respect to \\({\\cal M}^{\\prime}\\) and \\(\\Omega_{m}\\). Fig. 9 shows that the theoretical curve fits the observational data at an acceptable level, with only one adjustable parameter, \\(\\Omega_{m}\\), apart from today's Hubble parameter \\(H_{0}\\).
## VI Discussion and conclusion
We investigate a parameterized effective EOS of the dark fluid in the cosmological evolution. With this general EOS, the dynamical equation for the scale factor is completely integrable, and an exact solution of Einstein's gravitational equations with the FRW metric is obtained. The parameters \\(\\gamma\\), \\(p_{0}\\), \\(w_{H}\\), \\(w_{H2}\\), and \\(w_{dH}\\) can be reduced to the three condensed parameters \\(\\tilde{\\gamma}\\), \\(T_{1}\\), and \\(T_{2}\\). Three interpretations of this model are proposed in this paper:
* This EOS can be regarded as a unification of dark energy and dark matter, so that a single fluid plays both roles in the universe.
In this case, we prefer the choice of parameters \\(\\tilde{\\gamma}\\sim 0\\), \\(T_{1}<0\\), and \\(T_{2}^{2}>0\\).
* This EOS describes the dark energy, which is mixed with the dark matter in the cosmic medium; or this EOS describes the dark matter with viscosity, which is mixed with the dark energy from the \\(\\Lambda\\)-term.
* The universe's medium contains a single fluid, which corresponds to the matter described by the EOS \\(p=0\\), with an effectively constant viscosity. It is the effective viscosity that causes the accelerated cosmic expansion, without introducing a cosmological constant. In this case, we prefer the choice of parameters \\(\\tilde{\\gamma}\\sim 1\\), \\(T_{1}>0\\), and \\(T_{2}=0\\).
Different choices of the parameters may lead to different fates of the cosmological evolution. We especially study the parameter choice \\(\\tilde{\\gamma}=0\\) and \\(T_{1}<0\\) and the unified dark energy of the first interpretation. We present a generalized \\(H\\)-\\(z\\) relation compared with that of the \\(\\Lambda\\)CDM model. We show that matter described by the EOS \\(p=0\\) plus an effective viscosity, without introducing a cosmological constant, can fit the observational data well, so the effective viscosity model may be an alternative candidate to explain the late-time accelerating expansion of the universe.
Figure 9: The dependence of luminosity on redshift computed from the effective viscosity model. The solid and dashed lines correspond to \\(\\Omega_{m}=0.3\\) and \\(\\Omega_{m}=0.5\\), respectively. The dots are the observed data.
## Appendix A Remarks on the \\(\\Lambda\\)-term involved
Directly solving Eq. (35) with the EOS \\(p=(\\gamma-1)\\rho\\) gives \\[H^{2}(z)=\\left(H_{0}^{2}-\\frac{2}{3\\tilde{\\gamma}^{2}T_{2}^{2}}\\right)(1+z)^{3\\gamma}+\\frac{2}{3\\tilde{\\gamma}^{2}T_{2}^{2}}, \\tag{A1}\\] which can be rewritten as \\[H^{2}=H_{0}^{2}[\\Omega_{m}(1+z)^{3\\gamma}+(1-\\Omega_{m})], \\tag{A2}\\] where \\(\\Omega_{m}=1-\\frac{2}{3\\tilde{\\gamma}^{2}T_{2}^{2}H_{0}^{2}}\\). Solving the Friedmann equations with the EOS \\(p=(\\gamma-1)\\rho\\) without the \\(\\Lambda\\)-term gives \\(H_{x}^{2}=H_{0}^{2}(1+z)^{3\\gamma}\\). On the other hand, considering the mixture of dark energy and dark matter, we can write \\(H^{2}=\\Omega_{m}H_{x}^{2}+(1-\\Omega_{m})H_{0}^{2}\\), which is exactly the same as Eq. (A2). However, the following two methods are not equivalent except for some very special cases: (i) Using the EOS \\(p=f(\\rho)\\) to solve the Friedmann equations with the \\(\\Lambda\\)-term, we obtain the \\(H\\)-\\(z\\) relation. (ii) Using the EOS \\(p=f(\\rho)\\) to solve the Friedmann equations without the \\(\\Lambda\\)-term, we obtain \\(H_{x}(z)\\) and write \\[H^{2}=\\Omega_{m}H_{x}(z)^{2}+(1-\\Omega_{m})H_{0}^{2}. \\tag{A3}\\] In general, the above \\(H\\)-\\(z\\) relation is not equivalent to what is obtained in (i).
## Acknowledgements
X.H.M. is very grateful to Profs. S.D. Odintsov and I. Brevik for many helpful comments and for reading the main contents. This work is supported partly by NSF and the Doctoral Foundation of China.
## References
* (1) T. Totani, Y. Yoshii, and K. Sato, Astrophys. J. **483**, L75 (1997); S. Perlmutter _et al._, Nature **391**, 51 (1998); A.G. Riess _et al._, Astron. J. **116**, 1009 (1998); N. Bahcall, J.P. Ostriker, S. Perlmutter, and P.J. Steinhardt, Science **284**, 1481 (1999).
* (2) A.G. Riess _et al._, Astrophys. J.
**607**, 665 (2004); * (3) H.Jassal, J.Bagla and T.Padmanabhan, Mon.Not.Roy.Astron.Soc.Letters 356, L11 (2005); astro-ph/0506748 * (4) C. L. Bennett _et al._, Astrophys. J. Suppl. **148**, 1 (2003). * (5) L. Wang, R.R. Caldwell, J.P. Ostriker, and P.J. Steinhardt, Astrophys. J. **530**, 17 (2000). * (6) R.R. Caldwell, Phys. Lett. B **545**, 23 (2002), astro-ph/9908168. * (7) A. Kamenshchik, U. Moschella, and V. Pasquier, Phys. Lett. B **511**, 265 (2001). * (8) M.C. Bento, O. Bertolami, and A.A. Sen, Phys. Rev. D **66**, 043507 (2002). * (9) E. Babichev, V. Dokuchaev, and Y. Eroshenko, Class. Quantum Grav. **22**, 143 (2005). * (10) S. Capozziello, S.D. Martino, and M. Falanga, Phys. Lett. B **299**, 494 (2002). * (11) K. Freese and M. Lewis, Phys. Lett. B **540**, 1 (2002). * (12) G.R. Dvali, G. Gabadadze, and M. Porrati, Phys. Lett. B **484**, 112 (2000); G.R. Dvali and M.S. Turner, astro-ph/0301510. * (13) A. Lue, R. Scoccimarro, and G. Starkman, Phys. Rev. D **69**, 044005 (2004). * (14) S. Nojiri, S.D. Odintsov, Phys. Lett. B **599**, 137 (2004). * (15) S.M. Carroll, V. Duvvuri, M. Trodden, and M. S. Turner, Phys. Rev. D **70**, 043528 (2004). * (16) X.H. Meng and P. Wang, Class. Quant. Grav. **20**, 4949 (2003); ibid, **21**, 951 (2004); ibid, **22**, 23 (2005); Phys. Lett. B **584**, 1 (2004). * (17) T. Chiba, Phys. Lett. B **575**, 1, (2003), astro-ph/0307338. * (18) E.E. Flanagan, Phys. Rev. Lett. 92, 071101, (2004), astro-ph/0308111. * (19) S. Nojiri and S.D. Odintsov, Phys. Rev. D **68**, 123512 (2003), hep-th/0307288. * (20) D.N. Vollick, Phys. Rev. D **68**, 063510, (2003), astro-ph/0306630. * (21) T.R. Jaffe, A.J. Banday, H.K. Eriksen, K.M. Gorski, and F.K. Hansen, astro-ph/0503213. * (22) T.Padmanabhan and S.Chitre, Phys.Lett.A120, 433 (1987) * (23) I. Brevik and O. Gorbunova, gr-qc/0504001. * (24) I. Brevik, O. Gorbunova, and Y. A. Shaido, gr-qc/0508038. * (25) M. Cataldo, N. Cruz, and S. Lepe, Phys. Lett. B **619**, 5 (2005). * (26) S. Capozziello, V.F. Cardone, E. Elizalde, S. Nojiri, and S.D. Odinsov, astro-ph/0508350. * (27) A. Arbey, astro-ph/0506732; astro-ph/0509592; X.H. Meng, M.G. Hu, and J. Ren, astro-ph/0510357. * (28) V.B. Johri and P.K. Rath, astro-ph/0510017. * (29) R.R. Caldwell, M. Kamionkowski, and N.N. Weinberg, Phys. Rev. Lett. **91**, 071301 (2003). * (30) S. Nojiri, S.D. Odintsov, and S. Tsujikawa, Phys. Rev. D **71**, 063004 (2005). * (31) S. Hannestad, Phys. Rev. D **71**, 103519 (2005). * (32) S. Nojiri and S.D. Odintsov, Phys. Lett. B **595**, 1 (2004), hep-th/0405078. * (33) S. Nojiri and S.D. Odintsov, Phys. Rev. D **72**, 023003 (2005). * (34) X.H. Meng, J. Ren, and M.G. Hu, astro-ph/0509250. * (35) I. Brevik, Phys. Rev. D **65**, 127302 (2002). * (36) M.C. Bento, O. Bertolami, N.M.C. Santos, and A.A. Sen, Phys. Rev. D **71**, 063501 (2005). * (37) S. Nojiri and S.D. Odintsov, Phys. Rev. D **70**, 103522 (2004). * (38) E. Elizalde, S. Nojiri, and S.D. Odintsov, Phys. Rev. D **70**, 043539 (2004), hep-th/0405034. * (39) I. Brevik, S. Nojiri, S.D. Odintsov, and L. Vanzo, Phys. Rev. D **70**, 043520 (2004). * (40) A.D. Prisco, L. Herrera, and J. Ibanez, Phys. Rev. D **63**, 023501 (2000). * (41) L. Herrera, A.D. Prisco, and J. Ibanez, Class. Quantum Grav. **18**, 1475 (2001).
A generally parameterized equation of state (EOS) is investigated in the cosmological evolution, with the bulk viscosity medium modelled as a dark fluid, which can be regarded as a unification of dark energy and dark matter. Compared with the case of the perfect fluid, this EOS possesses four additional parameters, which can be interpreted as describing a non-perfect fluid with time-dependent viscosity or a model with a variable cosmological constant. From this general EOS, a completely integrable dynamical equation for the scale factor is obtained, and its solution is given explicitly. (i) In this parameterized model of cosmology, for a special choice of the parameters, we can explain the late-time accelerating expansion of the universe from a new viewpoint. The early inflation, the intermediate (relatively late time) deceleration, and the recent cosmic acceleration may be unified in a single equation. (ii) A generalized relation for the scaling of the Hubble parameter with redshift is obtained, which is of some cosmological interest. (iii) By using the SNe Ia data to fit the effective viscosity model, we show that matter described by \\(p=0\\) plus effective viscosity contributions can fit the observational gold data at an acceptable level. pacs: 98.80.Cq, 98.80.-k
# Beyond the perfect fluid hypothesis for dark energy equation of state V.F. Cardone\\({}^{1}\\), C. Tortora\\({}^{2}\\), A. Troisi\\({}^{1}\\), S. Capozziello\\({}^{2}\\) Corresponding author : V.F. Cardone, [email protected]\\({}^{1}\\)Dipartimento di Fisica \"E.R. Caianiello\", Universita di Salerno and INFN, Sez. di Napoli, Gruppo Coll. di Salerno, via S. Allende, 84081 - Baronissi (Salerno), Italy \\({}^{2}\\)Dipartimento di Scienze Fisiche, Universita di Napoli \"Federico II\" and INFN, Sez. di Napoli, Compl. Univ. Monte S. Angelo, Edificio N, Via Cinthia, 80126, Napoli, Italy ## I Introduction The end of the 21st century has left as unexpected legacy a new picture of the universe depicted as a spatially flat manifold with a subcritical matter content presently undergoing a phase of accelerated expansion. An impressive amount of astrophysical evidences on different scales, from the anisotropy spectrum of cosmic microwave background radiation (hereafter CMBR) [1; 2; 3] to the Type Ia Supernovae (hereafter SNeIa) Hubble diagram [4; 5], the large scale structure [6] and the matter power spectrum determined by the Ly\\(\\alpha\\) forest data [7], represent observational cornerstones that put on firm grounds the picture of the universe described above. Although the classical cosmological constant \\(\\Lambda\\)[8] represents the best fit to the full set of observational data [9; 10], the well known _coincidence_ and _fine tuning_ problems have lead cosmologists to look for alternative candidates that are collectively referred to as _dark energy_. In the most investigated scenario, dark energy originates from a scalar field \\(\\phi\\), dubbed _quintessence_, running down its self interaction potential \\(V(\\phi)\\) so that an effective fluid with negative pressure contributes to the energy budget of the universe (for comprehensive reviews see, for instance, [11]). It is, however, also possible that dark energy and dark matter are actually two different manifestation of the same substance. In such models, collectively referred to as _unified dark energy_ (UDE), a single fluid with an exotic equation of state behaves as dark energy at the lowest energy scales and as and dark matter at higher energies [12; 13; 14]. It is worth remembering that a variant of UDE models has recently been investigated by introducing models which are able to give rise to both inflation and cosmic acceleration [15; 16] also solving the problem of phantom quintessence [17]. Notwithstanding the strong efforts made to solve this puzzle, none of the proposed explanations is fully satisfactory and free of problems. This disturbing situation has motivated much interest toward a radically different approach to the problem of cosmic acceleration. It has therefore been suggested that cosmic speed up is an evidence for the need of _new physics_ rather than a new fluid. Much interest has then been devoted to models according to which standard matter is the only physical ingredient, while the Friedmann equations are modified, possibly as a consequence of braneworld scenarios [18]. In this same framework, higher order theories of gravity represent a valid alternative to the dark energy approach. Also referred to as _curvature quintessence_, in these models, the gravity Lagrangian is generalized by replacing the Ricci scalar curvature \\(R\\) with a generic function \\(f(R)\\) so that an effective dark energy - like fluid appears in the Friedmann equations and drives the accelerated expansion. 
Different models of this kind have been explored and tested against observations considering the two possible formulations that are obtained adopting the metric [19; 20; 21; 22; 23] or the Palatini [24; 25; 26] formulation. From this overview of the different theoretical models proposed so far, it is clear that rather little is definitively known on the nature and the fundamental properties of the dark energy even if some model independent constraints on its present day value and on its first derivative may be inferred from nonparametric analyses (see, e.g., [27]). It is worth noting, however, that, in all the models considered so far (with the remarkable exception of UDE models), it has been aprioristically assumed that dark energy behaves as a perfect fluid so that its EoS is linear in the energy density. Actually, from elementary thermodynamics, we know that a real fluid is never perfect [28] and, on the contrary, such an assumption is more and more inadequate as the fluid approaches its thermodynamical _critical points_ or during phase transitions. Given our fundamental ignorance about the properties of the dark sector, we cannot exclude the possibility that the universe is in a sort of critical point so that its constituents cannot be treated as perfect fluids. While the matter term, whatever its nature, may be safely modelled as a dustlike component (i.e., its EoS simplifies to \\(p=0\\)), forcing the dark energy to be a perfect fluid is a rough simplification that may lead to neglect the impact on the dynamics of its true properties. Moreover, such an unmotivated approach could lead to systematically wrong results and hence to misleading inferences on the dark energy nature. Motivated by these considerations, it is therefore worth exploring what are the consequences of abandoning the perfect fluid EoS. A first step in this direction has been performed by Capozziello et al. [29] who have considered a model in which a single fluid with a Van der Waals EoS accounts for both dark matter and dark energy (see also [30]). From classical thermodynamics, we know that the Van der Waals EoS is best suited to describe the behaviour of real gas with a particular attention to the phase transitions phenomena. Actually, the Van der Waals EoS is only one of the possible choices in these regimes. Elaborating further on the idea put forward by Capozziello et al., we explore here other thermodynamical EoS all sharing the properties of having been proposed to work well also for fluids in critical conditions. Moreover, these EoS contain the perfect fluid EoS as a limiting case thus representing useful and more realistic generalizations. It is worth stressing that such an approach better reflects our ignorance of the dark energy nature and should prevent us from deducing theoretically biased conclusions on its nature. The plan of the paper is as follows. In Sect. II, we present the EoS considered giving their expressions and characterizing parameters. The dynamics of cosmological models comprising dust matter and a dark energy fluid with such an EoS is discussed in Sect. III where we determine the redshift evolution of the main physical quantities of interest. Matching with the data allows to investigate the viability of the different EoS and constrain their parameters. The method we use and the results we get are presented in Sect. IV, while the position of the peaks in the CMBR anisotropy spectrum is evaluated in Sect. V and compared with the WMAP determination. In Sect. 
VI, we reinterpret the models proposed in the framework of scalar field quintessence, determining the self-interaction potential that gives rise to a dark energy model with the given EoS. Conclusions are presented in Sect. VII, while in Appendix A we give some further details on the EoS from the thermodynamic point of view. ## II Equations of state The dynamical system describing a Friedmann-Robertson-Walker (FRW) cosmology is given by the Friedmann equations [31] : \\[\\frac{\\ddot{a}}{a}=-\\frac{4\\pi G}{3}\\;(\\rho_{M}+\\rho_{X}+3p_{X})\\, \\tag{1}\\] \\[H^{2}=\\frac{8\\pi G}{3}(\\rho_{M}+\\rho_{X})\\, \\tag{2}\\] and the continuity equations for each of the two fluids : \\[\\dot{\\rho_{i}}+3H\\;(\\rho_{i}+p_{i})=0\\, \\tag{3}\\] where \\(a\\) is the scale factor, \\(H=\\dot{a}/a\\) the Hubble parameter, the dot denotes the derivative with respect to cosmic time and we have assumed a spatially flat universe in agreement with what is inferred from the CMBR anisotropy spectrum [1; 2; 3]. Eqs.(1), (2) and (3) are derived from the Einstein field equations and the contracted Bianchi identities assuming that the source of the gravitational field is a mixture of matter with energy density \\(\\rho_{M}\\) and pressure \\(p_{M}=0\\) and an additional negative pressure fluid (which is usually referred to as _dark energy_) with energy density \\(\\rho_{X}\\) and pressure \\(p_{X}\\). To close the system and determine the evolution of the scale factor \\(a\\) and of the other quantities of interest, the equation of state (hereafter EoS) of the dark energy fluid (i.e. a relation between \\(\\rho_{X}\\) and \\(p_{X}\\)) is needed. Unfortunately, this is a daunting task given our complete ignorance of the dark energy nature and of its fundamental properties. Motivated by the discussion in the introduction, we explore here some EoS all sharing the properties of working well even when the fluid is near critical points or phase transitions. A textbook example is the Van der Waals EoS : \\[p_{X}=\\frac{\\gamma\\rho_{X}}{1-\\beta\\rho_{X}}-\\alpha\\rho_{X}^{2}\\, \\tag{4}\\] where \\(\\alpha\\) and \\(\\beta\\), in the thermodynamics analogy, may be related to limiting values of the pressure and the volume, while \\(\\gamma\\) is the usual barotropic factor. The Van der Waals EoS reduces to the perfect fluid case in the limit \\(\\alpha,\\beta\\to 0\\). The dynamics of the corresponding cosmological model has already been investigated both in the framework of UDE models [29] and as a dark energy fluid [30] so that we do not consider it again here. On the other hand, there are other EoS that are worth investigating. However, we limit our attention to EoS described by two parameters only in order to both narrow the parameter space to explore and avoid introducing too large a degeneracy among the quantities we have to determine. Let us first define \\(\\eta(z)\\) and \\(\\tilde{p}(z)\\) as \\(\\rho_{X}(z)/\\rho_{crit}\\) and \\(p_{X}(z)/\\rho_{crit}\\) respectively, being \\(\\rho_{crit}=3H_{0}^{2}/8\\pi G\\) the present day critical density of the universe. The EoS may then be evaluated as \\(w=p_{X}/\\rho_{X}=\\tilde{p}/\\eta\\). For the different models we consider, \\(w\\) is a non linear function of \\(\\eta\\) and is given as follows. 1. _Redlich - Kwong_ : \\[w_{RK}=\\beta\\times\\frac{1-\\sqrt{3-2\\sqrt{2}}\\alpha\\eta}{1-(1-\\sqrt{2})\\alpha\\eta}\\.\\] (5) 2. _Modified Berthelot_ : \\[w_{MB}=\\frac{\\beta}{1+\\alpha\\eta}\\.\\] (6) 3.
_Dieterici_ : \\[w_{Dt}=\\frac{\\beta\\exp\\left[2(1-\\alpha\\eta)\\right]}{2-\\alpha\\eta}\\.\\] (7) 4. _Peng - Robinson_ : \\[w_{PR}=\\frac{\\beta}{1-\\alpha\\eta}\\times\\left[1-\\frac{(c_{a}/c_{b})\\alpha\\eta} {(1+\\alpha\\eta)/(1-\\alpha\\eta)+\\alpha\\eta}\\right]\\] (8) with \\(c_{a}\\simeq 1.487\\) and \\(c_{b}\\simeq 0.253\\). In Eqs.(5) - (8), the two parameters \\(\\alpha\\) and \\(\\beta\\) are related to the critical values of density and pressure of the fluid. In particular, for all cases, \\(\\alpha\\propto\\rho_{crit}/\\rho_{c}\\), while \\(\\beta\\propto p_{c}/\\rho_{c}\\) with \\(\\rho_{c}\\) and \\(p_{c}\\) the values of the energy density and pressure respectively at the critical point of the fluid1. Note that, for \\(\\alpha=0\\), all the EoS above reduces to \\(w=cst\\), i.e. to the perfect fluid one. The condition \\(\\alpha=0\\) is achieved for an infinite critical density which means that the fluid has no critical points. This is indeed the case of the perfect fluid and is the reason why such a description is highly unrealistic. Footnote 1: See the Appendix for the definition of critical points and the exact expression of \\(\\alpha\\) and \\(\\beta\\). It is convenient, however, in the application to express these two parameters in terms of more handable and meaningful quantities. To this aim, we first define \\[y\\equiv\\alpha\\eta(z=0)=\\alpha(1-\\Omega_{M})\\Rightarrow\\alpha=y/(1-\\Omega_{M}) \\tag{9}\\] where we have used the flatness condition \\(\\Omega_{M}+\\Omega_{X}=1\\). Combining Eqs.(1) and (2), using the definition of deceleration parameter \\(q\\equiv-a\\ddot{a}/a^{2}\\) and evaluating the result at \\(z=0\\), we get the well known relation : \\[q_{0}=\\frac{1}{2}+\\frac{3}{2}\\Omega_{X}w_{0}. \\tag{10}\\] Introducing one of Eqs.(5) - (8) into Eq.(10) with the condition \\(\\eta(z=0)=\\Omega_{X}\\), we can express \\(\\beta\\) in terms of \\(q_{0}\\), \\(y\\) and \\(\\Omega_{M}\\) thus obtaining what follows. 1. _Redlich - Kwong_ : \\[\\beta=\\frac{(2q_{0}-1)[1-(1-\\sqrt{2})y]}{3(1-\\Omega_{M})(1-\\sqrt{3-2\\sqrt{2}} y)}\\.\\] (11) ii. _Modified Berthelot_ : \\[\\beta=\\frac{(2q_{0}-1)(1+y)}{3(1-\\Omega_{M})}\\] (12) iii. _Dieterici_ : \\[\\beta=\\frac{(2q_{0}-1)(2-y)\\exp\\left[-2(1-y)\\right]}{3(1-\\Omega_{M})}\\.\\] (13) iv. _Peng - Robinson_ : \\[\\beta=-\\frac{c_{b}(2q_{0}-1)(y-1)(y^{2}-2y-1)}{3(1-\\Omega_{M})[c_{a}y(1-y)+c_{b }(y^{2}-2y-1)]}\\.\\] (14) Note that it is \\(y\\) rather than \\(\\alpha\\) to determine \\(\\beta\\) so that it is this parameter that will be constrained by the fitting procedure. Moreover, \\(q_{0}\\) and \\(\\Omega_{M}\\) are more familiar quantities than \\(\\beta\\) so that it is easier to choose intervals over which they can take values. ## III Redshift evolution For a given EoS, it is possible to determine how the main physical quantities (such as the energy density, the EoS and the Hubble parameter) evolves with the redshift \\(z\\). To this aim, we first change variable from \\(t\\) to \\(z\\) in the continuity equation (3) which thus is rewritten as : \\[\\frac{d\\eta}{dz}=\\frac{3(1+w)\\eta(z)}{1+z} \\tag{15}\\] with the initial condition \\(\\eta(z=0)=\\Omega_{X}\\). Note that, for the EoS we are considering, Eq.(15) is a first order nonlinear differential equation that cannot be solved analytically. However, a numerical integration is straightforward provided that the parameters \\((q_{0},y,\\Omega_{M})\\) are given. Fig. 
1 shows the results for the different EoS described in the previous section for some illustrative set of parameters. Note that, hereon, to shorten the notation, we will use the acronyms \\(RK\\), \\(MB\\), \\(Dt\\) and \\(PR\\) referring to the Redlich - Kwong, Modified Berthelot, Dieterici and Peng - Robinson cases respectively. As it is clearly shown, not surprisingly, different EoS may lead to radically different evolutions for the energy density. This is particularly true comparing the upper panels with the lower ones. For the \\(RK\\) and \\(MB\\) cases, \\(\\log\\eta\\) is an almost linear increasing function of \\(z\\), i.e. \\(\\eta(z)\\) has an approximately power - law like decrease with the cosmic time \\(t\\) over a large range. As a consequence, in the far past, the dark energy component does not fade away, but still contributes to the energy budget during the usually matter dominated epoch. This behaviour may be problematic for the impact on structure formation and nucleosynthesis. For the \\(RK\\) case, this problem may be particularly worrisome since, as can be inferred from Fig. 1, for high \\(z\\), \\(\\eta\\sim(1+z)^{\\gamma}\\) with \\(\\gamma\\) larger than 3 for some combination of the parameters \\((q_{0},y,\\Omega_{M})\\). The situation is less dramatic for the \\(MB\\) EoS since, although we still get \\(\\eta\\sim(1+z)^{\\gamma}\\) for high \\(z\\), now \\(\\gamma\\) is smaller than 3 so that the dark energy term becomes quite small during the matter dominated era. Note also that the evolution of \\(\\eta(z)\\) only weakly depends on \\(y\\) for the \\(MB\\) model thus suggesting a serious degeneracy in this quantity. The situation is radically different for the \\(Dt\\) and \\(PR\\) models. As Fig. 1 shows, in these cases, \\(\\eta(z)\\) quickly approaches a constant value so that, in the past, the dark energy component does not disappear, but plays the same role as a cosmological constant term. Note, in particular, that this regime is achieved very soon for the \\(PR\\) EoS in which case \\(\\eta(z)\\) is almost constant for quite small values of \\(z\\). These results are reassuring since the energy density of the dark energy component becomes vanishingly small with respect to that of the matter during both the structure formation and nucleosynthesis epochs so that we are quite confident that these processes are not altered by the use of unusual EoS. Having determined \\(\\eta(z)\\), it is now straightforward to compute how the EoS depend on the redshift. The result is shown in Fig. 2 where we plot \\(w(z)\\) for the different models adopting for the model parameters the same values as in Fig. 1. Although the behaviour of \\(\\eta(z)\\) is qualitatively similar in some cases, the shape of \\(w(z)\\) is radically different for the models we are considering so that we discuss them separately. First, let us consider the \\(RK\\) EoS. For the models in the upper left panel of Fig. 2, the EoS turns out to be an increasing function of the redshift \\(z\\) with the largest value of \\(y\\) leading to higher \\(w\\) at high \\(z\\). In particular, \\(w\\) may become positive for sufficiently large values of \\(y\\). However, we have checked that this result strongly depends on \\(\\Omega_{M}\\). Actually, for values of \\(\\Omega_{M}\\geq 0.35\\), the EoS becomes more and more negative as \\(z\\) increases so that the fluid behaves as kind of _superphantom_. 
As a general rule, however, we stress that, for \\(y\\leq 1\\), \\(w(z\\simeq 0)\\simeq-1\\) so that in the present epoch the EoS mimics that of the cosmological constant.
Figure 1: The evolution against the redshift of the dimensionless energy density \\(\\eta(z)\\) for the \\(RK\\) (upper left), \\(MB\\) (upper right), \\(Dt\\) (lower left) and \\(PR\\) (lower right) EoS respectively. For all models, we set \\((q_{0},\\Omega_{M})=(-0.5,0.3)\\). Short dashed, solid and long dashed lines refer to different values of \\(y\\), namely \\(y=0.5,0.75,1.0\\) for the \\(RK\\) model, \\(y=1.5,2.5,3.5\\) for the \\(MB\\) one, \\(y=0.5,0.7,0.9\\) for the \\(Dt\\) and \\(PR\\) models.
Figure 2: The evolution against the redshift of the EoS parameter \\(w(z)\\) for the \\(RK\\) (upper left), \\(MB\\) (upper right), \\(Dt\\) (lower left) and \\(PR\\) (lower right) EoS respectively. Model parameters are set as in Fig. 1.
The \\(MB\\) EoS is shown in the upper right panel of Fig. 2, but the main trend should be inferred directly from Eq.(6). Since \\(\\eta(z)\\) is an increasing function of \\(z\\), it is easy to understand that, whatever are the values of \\((q_{0},y,\\Omega_{M})\\), \\(w(z)\\) will vanish in the past so that a dust - like EoS is asymptotically achieved. Note, however, that the convergence may be quite slow depending on \\(y\\): the larger is \\(y\\), the higher is \\(w\\) at a given \\(z\\) so that the quicker is the convergence toward the asymptotic null value. Given this behaviour, it is tempting to use the \\(MB\\) EoS as a proposal for a UDE model. From the point of view of the parameters, this model may be obtained imposing \\(\\Omega_{X}=1-\\Omega_{b}\\) with \\(\\Omega_{b}\\) the baryon density parameter. However, we prefer not to fix \\(\\Omega_{X}\\) and determine it later from matching with the data. Let us consider now \\(w(z)\\) for the \\(Dt\\) parametrization (lower left panel in Fig. 2). In contrast with the other cases considered, \\(w(z)\\) is not a monotonic function of the redshift, but it has rather an asymmetric bell - shaped behaviour. In particular, the height of the peak is larger for smaller values of \\(y\\) and its position shifts toward the right (i.e., larger values of \\(z\\)) as \\(y\\) increases. The most remarkable feature is, however, the asymptotic approach toward the cosmological constant value \\(w=-1\\) that is achieved later for smaller \\(y\\). A similar behaviour is consistent with the result shown in Fig. 1 according to which \\(\\eta(z)\\) becomes constant for sufficiently high values of \\(z\\). Comparing the two plots, we see that \\(\\eta(z)\\) starts being approximately constant as soon as \\(w(z)\\) is indistinguishable from \\(-1\\) so that everything works as for the cosmological constant. Even if not shown in the plot, we note that \\(w(z)\\) approaches \\(-1\\) also in the limit \\(z\\to-1\\), i.e. in the asymptotic future, so that a de Sitter like expansion is achieved. Finally, let us discuss the case of the \\(PR\\) EoS which is depicted in the lower right panel of Fig. 2. As a striking result, we get that \\(w(z)\\) starts from a value very close to -1 and very soon reaches \\(w=-1\\) after which it does not change anymore. This behaviour nicely explains why \\(\\eta(z)\\) is approximately constant over almost the full evolutionary history. It is also worth noting that, although \\(w(z)\\) depends significantly on \\(y\\), the numerical change in its value is too small to be detected.
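This behaviour can be reproduced with a minimal sketch that integrates the continuity equation (15) for the \\(PR\\) EoS of Eq.(8), with \\(\\beta\\) fixed by Eq.(14); the values of \\((q_{0},\\Omega_{M})\\) and of \\(y\\) below are the illustrative figure settings, not fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: eta(z) from Eq. (15) and w(z) from Eq. (8) for the PR EoS, for two
# assumed values of y, illustrating the very weak dependence on y.
q0, OmegaM = -0.5, 0.3
ca, cb = 1.487, 0.253

def w_PR(eta, alpha, beta):
    x = alpha * eta
    return beta / (1.0 - x) * (1.0 - (ca / cb) * x / ((1.0 + x) / (1.0 - x) + x))

for y in (0.7, 0.9):
    alpha = y / (1.0 - OmegaM)                                        # Eq. (9)
    beta = (-cb * (2 * q0 - 1) * (y - 1) * (y**2 - 2 * y - 1)
            / (3 * (1 - OmegaM) * (ca * y * (1 - y) + cb * (y**2 - 2 * y - 1))))  # Eq. (14)
    rhs = lambda z, e: [3.0 * (1.0 + w_PR(e[0], alpha, beta)) * e[0] / (1.0 + z)]
    sol = solve_ivp(rhs, (0.0, 2.0), [1.0 - OmegaM], dense_output=True, rtol=1e-8)
    z = np.array([0.0, 0.5, 1.0, 2.0])
    print(y, np.round(w_PR(sol.sol(z)[0], alpha, beta), 3))  # nearly the same for both y
```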
As a consequence, it is likely that matching with the data will be unable to efficiently constrain this parameter. Another interesting dynamical quantity is the deceleration parameter \\(q\\). Combining the Friedmann equations, it is straightforward to get : \\[q(z)=\\frac{1}{2}+\\frac{3}{2}\\frac{w(z)\\eta(z)}{\\Omega_{M}(1+z)^{3}+\\eta(z)} \\tag{16}\\] so that, having yet evaluated both \\(\\eta(z)\\) and \\(w(z)\\), it is immediate to compute \\(q(z)\\). It turns out that, for all the EoS considered, the evolution of \\(q(z)\\) is quite similar over the redshift range probed by the most of the available data. Moreover, it is remarkable that there is almost no dependence at all on \\(y\\) for the \\(MB\\), \\(Dt\\) and \\(PR\\) models, while a weak dependence is present in the case of the \\(RK\\) EoS. In this latter case, it is important to stress that the adopted value of \\(\\Omega_{M}\\) plays a fundamental role with values of \\(\\Omega_{M}\\geq 0.35\\) leading to \\(q(z)<0\\) for all \\(z>0\\), i.e. these models are never decelerating. For all other cases, instead, \\(q(z)\\) changes sign so that the transition redshift, defined as \\(q(z_{T})=0\\), turns out to be positive in agreement with some recent estimates. As a final issue, we have also numerically evaluated how the scale factor depends on the cosmic time \\(t\\). Introducing the normalized time variable \\(\\tau=t/t_{0}\\) (with \\(t_{0}\\) the present age of the universe), it turns out that \\(a(\\tau)\\) is almost linear over the most of the universe evolution and, what is more interesting, is independent of \\(y\\). Actually, this is only a result of having used the dimensionless time \\(\\tau\\) rather than the physical time \\(t\\). Since, as we have checked, \\(t_{0}\\) depends significantly on the combination of the model parameters \\((q_{0},y,\\Omega_{M})\\), transforming from \\(a(\\tau)\\) to \\(a(t)\\) introduce the expected dependence on \\(y\\). As a general result, \\(a(t)\\) does not diverge in any finite time so that any _Big Rip_ is avoided even if \\(w\\) may lie today in the phantom (\\(w<-1\\)) regime. ## IV Constraining the EoS The discussion above has shown that the dynamics of a cosmological model filled with dust matter and a dark energy fluid whose EoS is one of those proposed in Sect. II is reasonable and not affected by any pathological behaviour (provided the parameters are chosen with some care in the case of the \\(RK\\) model). It is therefore interesting to compare the models with the available observations in order to both investigate the viability of the model itself and constrain its parameters. ### The method In order to constrain the EoS characterizing parameters, we maximize the following likelihood function : \\[{\\cal L}\\propto\\exp\\left[-\\frac{\\chi^{2}({\\bf p})}{2}\\right] \\tag{17}\\] where \\({\\bf p}\\) denotes the set of model parameters and the pseudo - \\(\\chi^{2}\\) merit function is defined as : \\[\\chi^{2}({\\bf p}) = \\sum_{i=1}^{N}\\left[\\frac{r^{th}(z_{i},{\\bf p})-r_{i}^{obs}}{ \\sigma_{i}}\\right]^{2} \\tag{18}\\] \\[+ \\left[\\frac{{\\cal R}({\\bf p})-1.716}{0.062}\\right]^{2}+\\left[ \\frac{{\\cal A}({\\bf p})-0.469}{0.017}\\right]^{2}\\,.\\] Let us discuss briefly the different terms entering Eq.(18). 
In the first one, we consider the dimensionless coordinate distance \\(y\\) to an object at redshift \\(z\\) defined as : \\[r(z)=\\int_{0}^{z}\\frac{dz^{\\prime}}{E(z^{\\prime})} \\tag{19}\\] and related to the usual luminosity distance \\(D_{L}\\) as \\(D_{L}=(1+z)r(z)\\). Daly & Djorgovki [32] have compiled a sample comprising data on \\(y(z)\\) for the 157 SNeIa in the Riess et al. [5] Gold dataset and 20 radiogalaxies from [33], summarized in Tables 1 and 2 of [32]. As a preliminary step, they have fitted the linear Hubble law to a large set of low redshift (\\(z<0.1\\)) SNeIa thus obtaining :\\[h=0.664\\pm 0.008\\.\\] We thus set \\(h=0.664\\) in order to be consistent with their work, but we have checked that varying \\(h\\) in the 68% CL quoted above does not alter the main results. Furthermore, the value we are using is consistent also with \\(H_{0}=72\\pm 8\\) km s\\({}^{-1}\\) Mpc\\({}^{-1}\\) given by the HST Key project [34] based on the local distance ladder and with the estimates coming from the time delay in multiply imaged quasars [35] and the Sunyaev - Zel'dovich effect in X - ray emitting clusters [36]. The second term in Eq.(18) makes it possible to extend the redshift range over which \\(y(z)\\) is probed resorting to the distance to the last scattering surface. Actually, what can be determined from the CMBR anisotropy spectrum is the so called _shift parameter_ defined as [37; 38] : \\[R\\equiv\\sqrt{\\Omega_{M}}y(z_{ls}) \\tag{20}\\] where \\(z_{ls}\\) is the redshift of the last scattering surface which can be approximated as [39],: \\[z_{ls}=1048\\left(1+0.00124\\omega_{b}^{-0.738}\\right)(1+g_{1}\\omega_{M}^{g_{2}}) \\tag{21}\\] with \\(\\omega_{i}=\\Omega_{i}h^{2}\\) (with \\(i=b,M\\) for baryons and total matter respectively) and \\((g_{1},g_{2})\\) given in Ref. [39]. The parameter \\(\\omega_{b}\\) is well constrained by the baryogenesis calculations contrasted to the observed abundances of primordial elements. Using this method, Kirkman et al. [40] have determined : \\[\\omega_{b}=0.0214\\pm 0.0020\\.\\] Neglecting the small error, we thus set \\(\\omega_{b}=0.0214\\) and use this value to determine \\(z_{ls}\\). It is worth noting, however, that the exact value of \\(z_{ls}\\) has a negligible impact on the results and setting \\(z_{ls}=1100\\) does not change none of the constraints on the other model parameters. Finally, the third term in the definition of \\(\\chi^{2}\\) takes into account the recent measurements of the _acoustic peak_ in the large scale correlation function at 100 \\(h^{-1}\\) Mpc separation detected by Eisenstein et al. [41] using a sample of 46748 luminous red galaxies (LRG) selected from the SDSS Main Sample [42]. Actually, rather than the position of acoustic peak itself, a closely related quantity is better constrained from these data defined as [41] : \\[\\mathcal{A}=\\frac{\\sqrt{\\Omega_{M}}}{z_{LRG}}\\left[\\frac{z_{LRG}}{E(z_{LRG})} y^{2}(z_{LRG})\\right]^{1/3} \\tag{22}\\] with \\(z_{LRG}=0.35\\) the effective redshift of the LRG sample. As it is clear, the \\(\\mathcal{A}\\) parameter depends not only on the dimensionless coordinate distance (and thus on the integrated expansion rate), but also on \\(\\Omega_{M}\\) and \\(E(z)\\) explicitly which removes some of the degeneracies intrinsic in distance fitting methods. 
Therefore, it is particularly interesting to include \\(\\mathcal{A}\\) as a further constraint on the model parameters using its measured value [41] : \\[\\mathcal{A}=0.469\\pm 0.017\\.\\] Note that, although similar to the usual reduced \\(\\chi^{2}\\) introduced in statistics, the reduced \\(\\chi^{2}\\) (i.e., the ratio between the \\(\\chi^{2}\\) and the number of degrees of freedom) is not forced to be 1 for the best fit model because of the presence of the priors on \\(\\mathcal{R}\\) and \\(\\mathcal{A}\\) and since the uncertainties \\(\\sigma_{i}\\) are not Gaussian distributed, but take care of both statistical errors and systematic uncertainties. With the definition (17) of the likelihood function, the best fit model parameters are those that maximize \\(\\mathcal{L}(\\mathbf{p})\\). However, to constrain a given parameter \\(p_{i}\\), one resorts to the marginalized likelihood function defined as : \\[\\mathcal{L}_{p_{i}}(p_{i})\\propto\\int dp_{1}\\ldots\\int dp_{i-1}\\int dp_{i+1} \\int dp_{n}\\mathcal{L}(\\mathbf{p}) \\tag{23}\\] that is normalized at unity at maximum. Denoting with \\(\\chi_{0}^{2}\\) is the value of the \\(\\chi^{2}\\) for the best fit model, the 1 and 2\\(\\sigma\\) confidence regions are determined by imposing \\(\\Delta\\chi^{2}=\\chi^{2}-\\chi_{0}^{2}=1\\) and \\(\\Delta\\chi^{2}=4\\) respectively. ### Results We have applied the likelihood analysis described above to the four EoS presented in Sect. II obtaining constraints on the model parameters \\((q_{0},y,\\Omega_{M})\\). The results obtained are summarized in Table 1 where we give the best fit values and 68% and 95% confidence ranges for each parameter. Note that the range tested for \\(y\\) is set on a case by case basis as discussed in the following. Given \\((q_{0},y,\\Omega_{M})\\) for an EoS, we may also evaluate some other interesting physical quantities. Since the uncertainties on the model parameters are not Gaussian distributed, a naive propagation of the errors is not possible. Moreover, we have not an analytical expression for some quantities such as, e.g., the transition redshift. We thus estimate the 68% and 95% confidence ranges on the derived quantities by randomly generating 20000 points \\((q_{0},y,\\Omega_{M})\\) using the marginalized likelihood functions of each parameter and then deriving the likelihood function of the corresponding quantity. Although not statistically well motivated, this procedure gives a conservative estimate of the uncertainties which is enough for our aims. As a general result, we note that the constraints on both \\(q_{0}\\) and \\(\\Omega_{M}\\) turn out to be essentially model independent. Moreover, they are consistent with recent estimates obtained using different datasets and dark energy models with a perfect fluid EoS (constant or redshift dependent). In particular, both values are quite similar to those predicted for the concordance \\(\\Lambda\\)CDM model yielding \\((q_{0},\\Omega_{M})\\simeq(-0.5,0.3)\\). Actually, this is not much surprising. As discussed in Sect. III, the four EoS considered mimic well the cosmological constant for small values of \\(z\\). Therefore, we do expect similar values for \\(q_{0}\\) and \\(\\Omega_{M}\\) since these quantities are both evaluated today when the difference among the concordance model and our ones may be hardly detected. The only parameter directly characterizing the different models is therefore \\(y\\) so that constraints on \\(y\\) are indeed strongly model dependent. 
Besides, the physical range for \\(y\\) must be set on a case-by-case basis, which obviously impacts the final estimate, so that we discuss the results for each model separately.
#### iv.2.1 Redlich - Kwong
As a first issue, it is important to assess what is the range explored for the parameter \\(y\\). Looking at Eq.(5), it is clear that, in order to avoid unphysical divergences of the EoS, the condition \\(1-\\sqrt{3-2\\sqrt{2}}\\alpha\\eta(z)\\neq 0\\) must hold. Moreover, from Eq.(11), we get the further constraint \\(y\\neq 1/\\sqrt{3-2\\sqrt{2}}\\) in order for \\(\\beta\\) not to diverge. We have checked (analytically and numerically) that choosing \\(0\\leq y\\leq 2\\) ensures that all the conditions quoted above are fulfilled whatever the redshift \\(z\\). Performing the likelihood analysis discussed above, we have obtained the constraints reported in the first row of Table 1. Quite surprisingly, the range for \\(y\\) is very narrow. Exploring the likelihood contours in the 3-D parameter space, we have found that there are indeed two _local_ minima of the pseudo - \\(\\chi^{2}\\) defined above. Our procedure selects the _absolute_ minimum, thus selecting a very small region. There is, however, also a more subtle motivation for the very stringent constraints obtained on \\(y\\). Although not apparent from the upper left panel of Fig. 1, the energy density strongly depends on \\(y\\) in the region \\(y\\geq 1\\) so that even small deviations from the best fit value lead to significant departures from the best fitting curve, and hence to very strong constraints on the model parameters. Although the best fit curve reproduces the data very well, the \\(RK\\) model may be excluded on the basis of physical considerations. Actually, for \\((q_{0},y,\\Omega_{M})\\) in the parameter space identified by the constraints reported, the energy density increases with \\(z\\) faster than the matter one. As a result, the universe turns out to be ever accelerating (so that the estimated transition redshift is negative) and never undergoes a matter dominated epoch in the past. Even if we have not performed a detailed calculation, it is nevertheless clear that such a situation leads to severe problems with both structure formation and nucleosynthesis. Note that the same qualitative behaviour holds for the parameters taking values in the other local minimum. Given these problems, we conclude that the \\(RK\\) EoS may be discarded and will not be considered anymore in the following.
#### iv.2.2 Modified Berthelot
Looking at Eqs.(6) and (12), it is clear that there are no physical motivations to impose an upper limit on \\(y\\), the only constraint thus being \\(y\\geq 0\\). In the limit \\(y>>1\\), combining Eq.(6) with Eqs.(9) and (12), we get \\(w\\simeq(\\beta/\\alpha)/\\eta=(2q_{0}-1)/[3(1-\\Omega_{M})\\eta]\\) so that the EoS does not depend on \\(y\\) in this regime. As a consequence, the constraints on \\(y\\) turn out to be quite weak and the likelihood analysis may put only upper limits. For the best fit values in the second row of Table 1, Eq.(6) reduces to the perfect fluid one with \\(w=(2q_{0}-1)/[3(1-\\Omega_{M})]\\simeq-1.12\\). This result could suggest that the likelihood analysis argues in favor of no need to give up the perfect fluid hypothesis. Actually, it is worth stressing that the marginalized likelihood function for \\(y\\) is quite flat so that values of \\(y\\neq 0\\) are perfectly viable.
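A toy sketch of the marginalisation of Eq.(23) illustrates how a likelihood that is nearly flat in \\(y\\) yields only an upper bound; the Gaussian-like pseudo - \\(\\chi^{2}\\) below is an assumption that merely mimics this qualitative behaviour and is not the actual likelihood of the data.

```python
import numpy as np

# Toy marginalisation over a parameter grid (cf. Eq. (23)); the chi2 surface
# is assumed, built to constrain q0 and Omega_M tightly while leaving y loose.
q0_g = np.linspace(-0.8, -0.2, 61)
y_g = np.linspace(0.0, 6.0, 121)
Om_g = np.linspace(0.1, 0.5, 41)
Q, Y, O = np.meshgrid(q0_g, y_g, Om_g, indexing="ij")
chi2 = ((Q + 0.55) / 0.04) ** 2 + ((O - 0.28) / 0.02) ** 2 + 0.05 * Y**2

L = np.exp(-0.5 * (chi2 - chi2.min()))
L_y = L.sum(axis=(0, 2))                 # marginalise over q0 and Omega_M
L_y /= L_y.max()                         # normalise to unity at the maximum
upper_1sigma = y_g[L_y > np.exp(-0.5)].max()   # Delta(chi2) = 1 criterion
print(y_g[np.argmax(L_y)], round(upper_1sigma, 2))   # best fit y and 1 sigma upper limit
```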
Indeed, the \\(1\\sigma\\) confidence range extends up to \\(y=4.2\\) thus showing that the \\begin{table} \\begin{tabular}{|c|c|c|c|c|c|c|} \\hline Id & \\(q_{0}\\) & \\(y\\) & \\(\\Omega_{M}\\) & \\(\\mathcal{A}\\) & \\(\\mathcal{R}\\) & \\(z_{T}\\) & \\(t_{0}\\) (Gyr) \\\\ \\hline \\hline \\(RK\\) & \\(-0.490^{+0.005+0.009}_{-0.005-0.009}\\) & \\(1.90^{+0.02+0.04}_{-0.02-0.04}\\) & \\(0.355^{+0.001+0.003}_{-0.001-0.002}\\) & — & — & — & — \\\\ \\(MB\\) & \\(-0.55^{+0.04+0.09}_{-0.04-0.09}\\) & \\(0\\) (\\(\\leq 4.2\\)\\(\\leq 6.0\\)) & \\(0.28^{+0.02+0.04}_{-0.02-0.04}\\) & \\(0.474^{+0.014+0.030}_{-0.018-0.033}\\) & \\(1.738^{+0.022+0.042}_{-0.095}\\) & \\(0.692^{+0.050+0.094}_{-0.042-0.082}\\) & \\(14.29^{+0.22+0.43}_{-0.21-0.41}\\) \\\\ \\(Dt\\) & \\(-0.55^{+0.04+0.09}_{-0.04-0.09}\\) & \\(0.9\\) (\\(\\leq 0.9\\)) & \\(0.28^{+0.02+0.04}_{-0.02-0.04}\\) & \\(0.471^{+0.015+0.033}_{-0.016-0.033}\\) & \\(1.733^{+0.021+0.045}_{-0.065}\\) & \\(0.706^{+0.053+0.107}_{-0.062-0.132}\\) & \\(14.32^{+0.23+0.49}_{-0.24-0.49}\\) \\\\ \\(PR\\) & \\(-0.55^{+0.07+0.32}_{-0.07-0.46}\\) & \\(0.9\\) (\\(\\geq 0.57\\)\\(\\geq 0.24\\)) & \\(0.28^{+0.02+0.04}_{-0.01-0.03}\\) & \\(0.476^{+0.015+0.030}_{-0.012-0.025}\\) & \\(1.742^{+0.012+0.026}_{-0.012-0.027}\\) & \\(0.710^{+0.052+0.122}_{-0.060-0.113}\\) & \\(14.32^{+0.33+0.54}_{-0.19-0.45}\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: Summary of the results of the likelihood analysis of the models discussed in the text. The meaning of the entries is as follows. By writing \\(x=bf_{-\\delta_{-}-\\delta_{-}-\\delta_{-}}^{+\\delta_{+}}\\), we mean that \\(x\\) is the maximum likelihood value of the considered quantity, while the \\(68\\%\\) and \\(95\\%\\) confidence ranges are \\((x-\\delta_{-},x+\\delta_{+})\\) and \\((x-\\delta_{-},x+\\delta_{++})\\) respectively. We use this notation to give our constraints on the model parameters \\((q_{0},y,\\Omega_{M})\\) and the derived quantities \\((\\mathcal{A},\\mathcal{R},z_{T},t_{0})\\). For the \\(RK\\) case, we do not estimate \\((\\mathcal{A},\\mathcal{R},z_{T},t_{0})\\) since the model may be discarded. For the \\(MB\\) and \\(Dt\\) case, we are able to give only upper limits on \\(y\\), while only lower limits may be given on this same quantity for the \\(PR\\) model. See the text for discussion. \\(MB\\) EoS provides a good match with the data even when it significantly differs from the simplest model \\(p=w\\rho\\) with \\(w\\) a constant. Let us now briefly comment on the possibility to use the \\(MB\\) EoS in the framework of UDE models. Should this approach be correct, the likelihood analysis should have returned \\(\\Omega_{M}\\simeq\\Omega_{b}\\), while such low values are excluded at more than \\(3\\sigma\\) level. As such, one could conclude that the UDE approach may be rejected. Actually, one should still explore the possibility that Eq.(6) is an effective EoS and formally decompose the energy density \\(\\rho_{X}\\) as sum of \\(\\rho_{dm}\\) and \\(\\rho_{de}\\) with the first and second term referring to dark matter and dark energy respectively. The EoS of this dark energy term is then evaluated imposing \\(w_{MB}=w_{de}\\rho_{de}/(\\rho_{dm}+\\rho_{de})\\). Imposing \\(\\rho_{dm}=\\Omega_{dm}\\rho_{crit}(1+z)\\)3, one should set \\(\\Omega_{dm}=\\Omega_{M}-\\Omega_{b}\\) using the value of \\(\\Omega_{M}\\) determined above and a model independent estimate of \\(\\Omega_{b}\\). 
Investigating this scenario is outside our aims, so we do not speculate further on this interesting possibility.

It is interesting to discuss in some detail the constraints derived on some physical quantities coming from the likelihoods for the model parameters. As a first consistency check, we have estimated both the acoustic peak and the shift parameters \\(\\mathcal{A}\\) and \\(\\mathcal{R}\\). Even if we have explicitly introduced priors on them in the definition of the pseudo - \\(\\chi^{2}\\) in Eq.(18), it is nevertheless possible that the likelihood procedure selects a region of the parameter space giving values of \\(\\mathcal{A}\\) and \\(\\mathcal{R}\\) in disagreement with the imposed priors4. Actually, it turns out that \\(\\mathcal{A}\\) and \\(\\mathcal{R}\\) agree very well (within \\(1\\sigma\\)) with the measured ones, although the maximum likelihood values are slightly larger than the estimated ones.

Footnote 4: Qualitatively, this could be understood noting that the main contribution to \\(\\chi^{2}\\) comes from the 157 SNeIa, while \\(\\mathcal{A}\\) and \\(\\mathcal{R}\\) give only a modest contribution unless the model is unreasonably different from the best fit one.

While \\(q_{0}<0\\), the universe has entered the epoch of accelerated expansion only for \\(z<z_{T}\\), the latter being the transition redshift previously defined and constrained for the \\(MB\\) model as reported in Table 1. The maximum likelihood value \\(z_{T}=0.692\\) is more than \\(1.7\\sigma\\) larger than the tentative model independent estimate of Riess et al., giving \\(z_{T}=0.46\\pm 0.13\\)[5]. It is worth noting, however, that our value of \\(z_{T}\\) is in good agreement with that predicted for the concordance \\(\\Lambda\\)CDM model, which in this case is \\(z_{T}=(2\\Omega_{\\Lambda}/\\Omega_{M})^{1/3}-1\\simeq 0.671\\). This is not very surprising given that, in the region of the parameter space selected, the \\(MB\\) EoS mimics well that of the \\(\\Lambda\\) term over the redshift range probed by the data. Finally, we consider the age of the universe, obtaining \\(t_{0}=14.29\\) Gyr as the maximum likelihood value, with the \\(2\\sigma\\) confidence range extending from 13.88 up to 14.72 Gyr. This result is in satisfactory agreement with previous model dependent estimates such as \\(t_{0}=13.24^{+0.89}_{-0.41}\\) Gyr from Tegmark et al. [9] and \\(t_{0}=13.6\\pm 0.19\\) Gyr given by Seljak et al. [10]. The aging of globular clusters [43] and nucleochronology [44] give model independent (but affected by larger errors) estimates of \\(t_{0}\\) still in good agreement with ours.

#### iii.2.3 Dieterici

Setting the range of \\(y\\) for the \\(Dt\\) EoS is a subtle task. Imposing that \\(w\\) never diverges and \\(\\beta\\) does not vanish leads to the conditions \\(y\\neq 2-[y/(1-\\Omega_{M})]\\eta(z)\\) and \\(y\\neq 2\\). There is, however, a further constraint motivated by the numerical integration of the continuity equation, which turns out to become unstable for \\(y\\geq 1\\). In order to avoid this problem, we have searched for the constraints on \\(y\\) in the region \\(0\\leq y\\leq 1\\) only (a minimal numerical sketch of such a stability check is given below). 
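Since it is the numerical instability of the continuity equation that motivates the restriction to \\(0\\leq y\\leq 1\\), a minimal sketch of this kind of stability check may be useful. The snippet below integrates the continuity equation for a single barotropic component with the Dieterici - like reduced form \\(\\tilde{p}=\\beta\\eta\\,e^{2(1-\\alpha\\eta)}/(2-\\alpha\\eta)\\) derived in Appendix A; the values of \\((\\alpha,\\beta,\\Omega_{M})\\) are purely illustrative (they are not the best fit parameters), and the mapping between \\((\\alpha,\\beta)\\) and \\((q_{0},y,\\Omega_{M})\\) of Eqs.(9) - (14) is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def p_tilde_dt(eta, alpha, beta):
    """Dieterici-like reduced EoS, p/rho_crit as a function of eta = rho/rho_crit
    (same functional form as in Appendix A; alpha and beta are illustrative)."""
    return beta * eta / (2.0 - alpha * eta) * np.exp(2.0 * (1.0 - alpha * eta))

def continuity_rhs(z, eta, alpha, beta):
    # d eta / dz = 3 (eta + p_tilde) / (1 + z) for a single barotropic fluid
    return 3.0 * (eta + p_tilde_dt(eta, alpha, beta)) / (1.0 + z)

def dark_energy_density(alpha, beta, omega_m=0.28, z_max=10.0):
    eta0 = 1.0 - omega_m   # dark energy density today in critical units (flat universe)
    return solve_ivp(continuity_rhs, (0.0, z_max), [eta0],
                     args=(alpha, beta), dense_output=True, rtol=1e-8, atol=1e-10)

# Scan a few illustrative parameter choices; solver failures or runaway values of
# eta(z) flag the kind of instability mentioned in the text.
for alpha in (0.3, 0.9, 1.8):
    sol = dark_energy_density(alpha=alpha, beta=-0.7)
    tail = sol.y[0, -1] if sol.success else float("nan")
    print(f"alpha = {alpha}:  success = {sol.success},  eta(z_end) = {tail:.3f}")
```

Scanning a grid of parameter values in this way is one practical way to map out the region where the integration remains well behaved.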
It comes out that the marginalized likelihood is quite flat so that the full range is well within \\(1\\sigma\\) from the best fit value. Although this is quite disturbing from the point of view of constraining the model, this is encouraging since it shows that abandoning the perfect fluid EoS in favor of the \\(Dt\\) one still gives a good match with the data. As a final remark, let us note that the acoustic peak and the shift parameters \\(\\mathcal{A}\\) and \\(\\mathcal{R}\\), the transition redshift \\(z_{T}\\) and the age of the universe \\(t_{0}\\) estimated for the \\(Dt\\) model are in very good agreement with the same quantities obtained for the \\(MB\\) case so that we refer the reader to what already said above. #### iii.2.4 Peng - Robinson Eq.(14) shows that there are two values of \\(y\\) such that the \\(PR\\) EoS reduces to the perfect fluid one, namely \\(y=0\\) (so that \\(\\alpha=0\\) and Eq.(6) reduces to \\(w_{PR}=\\beta\\)) and \\(y=1\\) (giving \\(w_{PR}=0\\)). We have checked that values of \\(y>1\\) give rise to models having some pathological behaviours in the past (for instance, unphysical divergence of \\(\\eta(z)\\) for high \\(z\\)) so that we have restricted our attention to the range \\(0\\leq y\\leq 1\\). Once again, the likelihood function is quite flat so that we are able only to give lower limits on \\(y\\). It is noteworthy, however, that the best fit value is now \\(y=0.9\\), that is the \\(PR\\) EoS does not reduce to that of the perfect gas. Finally, it is worth noting that the constraints on \\(\\mathcal{A}\\), \\(\\mathcal{R}\\), \\(z_{T}\\) and \\(t_{0}\\) obtained for the \\(PR\\) model agree very well with those estimated for the \\(MB\\) case so that we refer the reader to what already said. ### Degeneracy with the \\(\\Lambda\\)CDM and QCDM models The results discussed above demonstrates that the \\(MB\\), \\(Dt\\) and \\(PR\\) EoS give rise to cosmological models that are in good agreement with the considered dataset. On the other hand, also models with a dark energy having a perfect fluid EoS provides a very good match to the same dataset. This consideration suggests that a sort of degeneracy among the different EoS should exist. Investigating in detail this issue needs a detailed set of simulations in order to understand under which conditions such a degeneracy may be broken. Although this is outside our aim, we nevertheless provide a preliminary analysis that is sufficient to get an interesting feeling of the problem. To this aim, we implement a quite simple procedure. First, we select an EoS and set its characterizing parameters. Then, we generate a sample of SNeIa according to the theoretical luminosity distance for the model with the EoS chosen before. Note that the sample comprises the same number of SNeIa of the Riess et al. Gold sample Riess et al. (2006) and have the same redshift and distance modulus error distribution. Finally, we fit to this dataset the \\(\\Lambda\\)CDM (\\(p_{X}=-\\rho_{X}\\)) and the QCDM (\\(p_{X}=w_{X}\\rho_{X}\\)) model. In order to render our analysis as similar as possible to that in Riess et al., in the second case we impose the prior \\(\\Omega_{M}=0.27\\pm 0.04\\) as done in Riess et al. (2006). The parameters used in the simulations and the constraints obtained on the \\(\\Lambda\\) and QCDM model parameters are summarized in Table 2. Some interesting lessons may be learned from this simple exercise. First, we note that the \\(\\Lambda\\)CDM model fits well in all the cases considered. 
Moreover, the estimated \\(\\Omega_{M}\\) is quite close to the input value so that no systematic errors is induced on this parameter. Note that this result does not depend on the particular choice of the EoS parameters provided they lie in the confidence ranges summarized in Table 1. Actually, such a result could be expected considering that, over the redshift range probed by the SNeIa sample and for the values of parameters chosen, the three EoS chosen mimic quite well the \\(\\Lambda\\)CDM model so that it is not surprising that the simulated data may be fitted by the concordance scenario. It is still more interesting to consider the results from fitting the \\(QCDM\\) model to the simulated dataset. While the estimated \\(\\Omega_{M}\\) is again quite similar to the input value (although biased and formally not in agreement within the errors), the constraints on \\(w_{X}\\) may extend in the phantom region (\\(w_{X}<-1\\)) depending on the EoS adopted and the parameters chosen. This preliminary test suggests a possible way to escape the need of the problematic phantom dark energy (i.e. a negative pressure fluid with \\(w_{X}<-1\\)). Indeed, Table 2 shows that \\(w_{X}<-1\\) may be the consequence of forcing the perfect fluid EoS to fit a cosmological model where the _true_ dark energy EoS is not the perfect fluid one. This intriguing scenario have, however, to be further investigated with a more careful and extensive set of simulated dataset also taking into account other possible probes such as the priors on \\(\\mathcal{A}\\) and \\(\\mathcal{R}\\) or the gas mass fraction in galaxy clusters. ## V The CMBR peaks position The analysis presented above has convincingly shown that dark energy models with EoS given by the \\(MB\\), \\(Dt\\) and \\(PR\\) parametrizations are indeed viable alternatives with respect to the usual perfect fluid assumption. Indeed, the \\(r(z)\\) diagram, the acoustic peak \\(\\mathcal{A}\\) and the shift parameter \\(\\mathcal{R}\\) are correctly predicted and also the estimated age of the universe is in good agreement with other estimates in literature. As a further test, we compute the positions of the first three peaks in the CMBR anisotropy spectrum using the procedure detailed in Riess et al. (2006); Riess et al. (2006). According to this prescription, in a flat universe made out of a matter term and a scalar field - like fluid, the position of the \\(m\\) - th peak is given by : \\[l_{m}=l_{A}(m-\\bar{\\varphi}-\\delta\\varphi_{m}) \\tag{24}\\] with \\(l_{A}\\) the acoustic scale, \\(\\bar{\\varphi}\\) the overall peak shift and \\(\\delta\\varphi_{m}\\) the relative shift of the \\(m\\) - th peak with respect to the first. While \\(\\bar{\\varphi}\\) and \\(\\delta\\varphi_{m}\\) are given by the approximated formulae in Ref. Riess et al. (2006), the acoustic scale for flat universes may be evaluated as Riess et al. (2006) : \\[l_{A} = \\frac{\\pi}{\\bar{c}_{s}}\\left\\{\\frac{F}{\\sqrt{1-\\bar{\\Omega}_{ls} ^{\\phi}}}\\right. \\tag{25}\\] \\[\\times \\left.\\left[\\sqrt{a_{ls}+\\frac{\\Omega_{0}^{r}}{1-\\Omega_{0}^{ \\phi}}}-\\sqrt{\\frac{\\Omega_{r}^{r}}{1-\\Omega_{0}^{\\phi}}}\\right]^{-1}-1\\right\\}\\] with : \\[F=\\frac{1}{2}\\int_{0}^{1}da\\left[a+\\frac{\\Omega_{0}^{\\phi}a^{1-3\\bar{w}_{0}}+ \\Omega_{0}^{r}(1-a)}{1-\\Omega_{0}^{\\phi}}\\right]^{-1/2} \\tag{26}\\] where we use Eq.(21) to determine \\(a_{ls}=(1+z_{ls})^{-1}\\). The other quantities entering Eqs.(25) and (26) are defined as follows Riess et al. (2006); Riess et al. 
(2006) : \\begin{table} \\begin{tabular}{|c|c|c|c|c|} \\hline \\multicolumn{2}{|c|}{Input Model} & \\multicolumn{2}{|c|}{\\(\\Lambda\\)CDM} & \\multicolumn{2}{|c|}{QCDM} \\\\ \\hline \\hline Id & \\(y,\\Omega_{M}\\) & \\(\\Omega_{M}\\) & \\(\\Omega_{M}\\) & \\(w_{X}\\) \\\\ \\hline \\hline \\(MB\\) & \\(1.0,0.28\\) & \\(0.28^{+0.02+0.04}_{-0.02-0.04}\\) & \\(0.30^{+0.04+0.08}_{-0.04-0.08}\\) & \\(-1.06^{+0.12+0.22}_{-0.16-0.35}\\) \\\\ \\(Dt\\) & \\(0.25,0.28\\) & \\(0.26^{+0.02+0.04}_{-0.02-0.04}\\) & \\(0.34^{+0.03+0.06}_{-0.04-0.08}\\) & \\(-1.27^{+0.15+0.27}_{-0.17-0.38}\\) \\\\ \\(PR\\) & \\(0.35,0.28\\) & \\(0.27^{+0.03+0.06}_{-0.02-0.05}\\) & \\(0.27^{+0.04+0.08}_{-0.04-0.08}\\) & \\(-0.97^{+0.09+0.18}_{-0.12-0.26}\\) \\\\ \\hline \\end{tabular} \\end{table} Table 2: Summary of the results of the likelihood analysis on the simulated dataset described in the text. The first two columns identifies the input model giving the EoS id and model parameters. In particular, for all the EoS, we have set \\(q_{0}=-0.55\\). The third column refers to the \\(\\Lambda\\)CDM model, while fourth and fifth columns are for the QCDM model. Maximum likelihood values and confidence ranges are reported using the same scheme as in Table 1. \\[\\bar{c}_{s}=\\frac{1}{\\tau_{ls}}\\int_{0}^{\\tau_{ls}}\\left[3+\\frac{9}{4}\\frac{\\rho_{ b}(\\tau)}{\\rho_{r}(\\tau)}\\right]^{-2}d\\tau\\, \\tag{27}\\] \\[\\bar{w}_{0}=\\frac{\\int_{0}^{\\tau_{0}}\\Omega_{\\phi}(\\tau)w(\\tau)d\\tau}{\\int_{0}^{ \\tau_{0}}\\Omega_{\\phi}(\\tau)d\\tau}\\, \\tag{28}\\] \\[\\bar{\\Omega}_{ls}^{\\phi}=\\frac{1}{\\tau_{ls}}\\int_{0}^{\\tau_{ls}}\\Omega_{\\phi}( \\tau)d\\tau\\, \\tag{29}\\] where \\(\\tau=\\int a^{-1}dt\\) is the conformal time, \\(\\rho_{b}\\) and \\(\\rho_{r}\\) are the energy densities of the baryons and radiation respectively, \\(w(z)\\) and \\(\\Omega_{\\phi}=\\rho_{\\phi}/\\rho_{crit}(z)\\) are the barotropic factor and the density parameter of the scalar field. In order to use Eqs.(25) - (29), we note that the role of the scalar field fluid is played by the dark energy so that all the quantities with the subscript \\(\\phi\\) have now to be evaluated using the energy density corresponding to a given EoS. Finally, we set the present day value of the radiation density parameter as \\(\\Omega_{0}^{r}=9.89\\times 10^{-5}\\)[45; 46] and \\(n=1\\) as index of the spectrum of primordial fluctuations, entering the approximated formulae for \\(\\bar{\\varphi}\\) and \\(\\delta\\varphi_{m}\\). The position of the first two peaks in the CMBR anisotropy spectrum ha been determined with great accuracy by WMAP giving [3] : \\[l_{1}^{WMAP}=220.1\\pm 0.08\\ \\,\\ \\ l_{2}^{WMAP}=546\\pm 10\\ \\, \\tag{30}\\] while the position of the third peak is more uncertain and may be estimated as [1] : \\[l_{3}^{Boom}=851\\pm 31. \\tag{31}\\] Having only three data points, it is clear that only qualitative constraints can be imposed on the model parameters. For this reason, we fix \\(\\Omega_{M}\\) to the best fit value in Table 1 for each EoS and use a \\(\\chi^{2}\\) analysis to constrain \\(q_{0}\\) and \\(y\\). Formally, the best fit parameters turn out to be : \\[(q_{0},y)=\\left\\{\\begin{array}{ll}(-0.972,4.80)&\\mbox{for the $MB$ EoS}\\,\\\\ (-0.950,0.88)&\\mbox{for the $Dt$ EoS}\\,\\\\ (-0.988,0.34)&\\mbox{for the $PR$ EoS}\\,\\end{array}\\right. \\tag{32}\\] giving \\((l_{1},l_{2},l_{3})=(208.3,546.3,857.2)\\) as best fit values independent of the EoS considered. However, as Fig. 
3 shows, the region of the parameter space \\((q_{0},y)\\) that is consistent within \\(1\\sigma\\) with the bounds from the position of the peaks is quite large, so that, as anticipated, only weak constraints can be derived. Although a detailed fit to the full CMBR anisotropy spectrum is needed, this preliminary analysis gives encouraging results. Indeed, considering the most stringent cut (that on \\(l_{1}\\)), Fig. 3 shows that it is possible to find models that are in agreement with both the fit to the dimensionless coordinate distances and the position of the first three peaks.

Figure 3: Constraints on the deceleration parameter \\(q_{0}\\) and the scaled density parameter \\(y\\) for the \\(MB\\) (left panel), \\(Dt\\) (central panel) and \\(PR\\) (right panel) EoS. Models with parameters to the right of the short dashed, solid and long dashed lines give values of \\((l_{1},l_{2},l_{3})\\) respectively in agreement within \\(1\\sigma\\) with the measured ones. The black dot marks the best fit model discussed in the text. For all the EoS, \\(\\Omega_{M}\\) is set to the best fit value reported in Table 1.

## VI Scalar field potential

Although the agreement with the observations is a valid motivation for these models, it is nonetheless important to look for a theoretical approach to further substantiate our proposal. Such a scheme may be easily recovered in the framework of scalar field quintessence. In such a case, the energy density and the pressure of the dark energy fluid read : \\[\\rho_{\\phi}=\\frac{1}{2}\\dot{\\phi}^{2}+V(\\phi)\\, \\tag{33}\\] \\[p_{\\phi}=\\frac{1}{2}\\dot{\\phi}^{2}-V(\\phi)\\, \\tag{34}\\] where \\(\\phi\\) is the scalar field evolving under the action of the self - interaction potential \\(V(\\phi)\\). For a given \\(V(\\phi)\\), Eqs.(33) and (34) may be inserted in the Friedmann equations in order to determine \\(\\rho_{\\phi}(z)\\), \\(w_{\\phi}(z)=p_{\\phi}(z)/\\rho_{\\phi}(z)\\) and the other dynamical quantities of interest. It is worth noting, however, that this procedure may be inverted so that, for given \\(w(z)\\) and \\(E(z)\\), one may recover the self - interaction potential \\(V(\\phi)\\) giving rise to that kind of cosmological expansion. To this end, one may first determine \\(d\\phi(z)/dz\\) and \\(V(z)\\) as [47] (but see also [48] for reconstruction from the SNeIa data directly) : \\[\\tilde{V}(z)=\\frac{1}{2}(1-w_{\\phi})E(z)\\, \\tag{35}\\] \\[\\frac{d\\tilde{\\phi}(z)}{dz}=-\\frac{\\sqrt{3(1+w_{\\phi})}}{1+z}\\left[1+\\frac{ \\Omega_{M}(1+z)^{2}}{(1-\\Omega_{M})E(z)}\\right]^{-1/2} \\tag{36}\\] where we have defined \\(\\tilde{V}\\equiv V/\\rho_{\\phi}(z=0)\\) and \\(\\tilde{\\phi}=\\phi/M_{Pl}\\) with \\(\\rho_{\\phi}(0)=(1-\\Omega_{M})\\rho_{crit}\\) and \\(M_{Pl}=(8\\pi G)^{-1/2}\\). Note that, to get Eq.(36), we have chosen \\(\\dot{\\phi}>0\\) (which gives \\(d\\phi/dz<0\\)) without any loss of generality. Eq.(35) gives \\(V(z)\\), while \\(V(\\phi)\\) may be obtained by integrating Eq.(36) with the initial condition \\(\\phi(z=0)=0\\) to get \\(\\phi(z)\\) and then inverting this relation with respect to \\(z\\). We have applied this procedure imposing \\(w_{\\phi}(z)=w_{i}(z)\\) (with \\(i=MB,Dt,PR\\)) over the redshift range \\(z=(0,10)\\) in order to recover the scalar field potential giving rise to our exotic EoS. Not surprisingly, an analytical solution is not possible, so we have resorted to numerical techniques, setting the model parameters to their best fit values. The results are shown in Fig. 
4 for the \\(MB\\), \\(Dt\\) and \\(PR\\) EoS. It is worth noting that, although the three EoS are quite different, the potential \\(V(\\phi)\\) is remarkably similar and any difference could hardly be detected over a large range in \\(\\phi\\). As a matter of fact, the same analytical approximating function may be fitted to the three models. Indeed, we find that, within 2%, a very good fit is obtained for : \\[V(\\phi)=V_{0}\\left\\{1+V_{s}\\left(\\frac{\\phi}{\\phi_{s}}\\right)^{\\alpha}\\exp \\left[\\frac{1}{2}\\left(\\frac{\\phi}{\\phi_{s}}\\right)^{2}\\right]\\right\\} \\tag{37}\\] with \\((V_{s},\\phi_{s},\\alpha)\\) fitting parameters to be determined on a case by case basis. For the best fit models, we get : \\[(V_{s},\\phi_{s},\\alpha)=\\left\\{\\begin{array}{l}(0.0448,-0.1817,0.6098)\\\\ (0.0331,-0.1611,0.5197)\\\\ (0.0369,-0.1517,0.9434)\\end{array}\\right. \\tag{38}\\] for the \\(MB\\), \\(Dt\\) and \\(PR\\) EoS respectively. Considering the behaviour of \\(w(z)\\) at high \\(z\\) shown in Fig. 2, it is somewhat surprising that the same functional expression approximates the potential \\(V(\\phi)\\) well for all three EoS. However, we have checked that there are no systematic errors in the reconstruction procedure. Indeed, for the \\(Dt\\) and \\(PR\\) EoS, at high enough \\(z\\), \\(\\dot{\\phi}^{2}\\) is negligible with respect to \\(V(\\phi)\\), so that the scalar field enters a slow roll - like regime and \\(w\\simeq-1\\) as in the lower panels of Fig. 2. For the \\(MB\\) case, a slow roll regime is not achieved at high \\(z\\) where, on the contrary, \\(\\dot{\\phi}^{2}\\simeq V(\\phi)\\) and hence the EoS of the scalar field counterpart vanishes. It is worth noting that the approximating potential is quite different from those often used in the literature, such as the exponential potential [49; 50] and the power law one [51]. On the other hand, its shape is the same as that proposed in supergravity inspired models, according to which \\(V\\propto\\phi^{\\alpha}\\exp\\left(\\phi^{2}\\right)\\)[52]. However, \\(\\alpha\\leq 0\\) in such models, while we find \\(\\alpha\\) positive. While for large \\(\\phi/\\phi_{s}\\) both our approximating potential and SUGRA - like ones are exponential, for small \\(\\phi/\\phi_{s}\\), \\(V(\\phi)\\) takes a power law shape in the SUGRA case, while, for our models, \\(V(\\phi)\\) is approximately constant, so that a cosmological constant behaviour is achieved. This is consistent with the result \\(w(z=0)\\simeq-1\\) we find for the present value of the EoS using the best fit parameters in Table 1.

Figure 4: Reconstructed scalar field potential over the redshift range \\((0,10)\\) for the models with the \\(MB\\) (short dashed), \\(Dt\\) (solid) and \\(PR\\) (long dashed) EoS. The potential is normalized to be \\(1\\) at \\(z=0\\), while, on the abscissa, \\(\\phi\\) is in units of the Planck mass \\(M_{Pl}\\).

## VII Conclusions

Notwithstanding their (somewhat radically) different approaches to the dark energy puzzle, all the models proposed so far assume that the dark energy EoS is a linear function of the energy density. From a thermodynamical point of view, the ansatz \\(p=w\\rho\\) (no matter whether \\(w\\) is a constant or a function of \\(z\\)) means that the fluid is modeled as a perfect gas. Pursuing the analogy with classical thermodynamics, however, it is well known that the perfect fluid approximation is quite crude and is, in particular, unable to deal with critical phenomena such as phase transitions and the behaviour of the fluid near critical points. As a matter of fact, the perfect fluid approximation works only under very particular conditions. On the other hand, our deep ignorance of the fundamental properties of the dark energy does not motivate the choice of such an idealized condition for the present state of this component. It is thus interesting to consider EoS that are more general than the perfect fluid one, reducing to the latter in a certain regime. Motivated by these considerations, we have investigated here the consequences of abandoning the perfect fluid approximation on the dynamics of a dark energy dominated universe. To this end, we have considered four different EoS, namely the Redlich - Kwong Eq.(5), the Modified Berthelot Eq.(6), the Dieterici Eq.(7) and the Peng - Robinson Eq.(8) parametrizations. These have been chosen because, from classical thermodynamics, we know they are well behaved also under critical conditions. The viability of the models and the constraints on their characterizing parameters have been studied by using a likelihood analysis taking into account the observations of the dimensionless coordinate distance to SNeIa and radiogalaxies and priors on the acoustic peak and shift parameters \\(\\mathcal{A}\\) and \\(\\mathcal{R}\\). This test has shown that all four EoS are able to give rise to models that fit the available dataset quite well, but the \\(RK\\) EoS has to be rejected since it does not give rise to a deceleration phase in the past. On the other hand, the \\(MB\\), \\(Dt\\) and \\(PR\\) EoS predict reasonable values for the transition redshift \\(z_{T}\\) and the age of the universe \\(t_{0}\\), in good agreement with previous model independent estimates. As a further check, we have also evaluated the position of the first three peaks in the CMBR anisotropy spectrum, finding that, for each model, there exists a region of the parameter space such that both the CMBR peaks and the \\(r(z)\\) diagram are correctly reproduced. These successful results are quite encouraging since they show that the perfect fluid EoS may be given up without worsening the agreement with the data. This consideration strongly motivates further tests of these models in order both to better constrain their parameters and to try to select among them according to which model is best suited to describe what is observed. Since the \\(MB\\), \\(Dt\\) and \\(PR\\) EoS evolve with \\(z\\) in different ways, it is desirable to resort to observables depending on \\(w(z)\\) rather than on its value over only a limited redshift range. Interesting candidates, from this point of view, are the full CMBR spectrum (not only the position of its peaks) and the growth factor, which also depends on the theory of perturbations. As an interesting byproduct of the likelihood test, we have discovered a degeneracy with both the concordance \\(\\Lambda\\)CDM and the quiessence (dark energy with constant \\(w\\)) QCDM models. Actually, we have fitted both the \\(\\Lambda\\)CDM and QCDM models to Gold - like SNeIa datasets simulated using one of our EoS as the true background cosmological model. Using the same procedure as in Riess et al. [5], we have found that the \\(\\Lambda\\)CDM model provides a very good fit for values of \\(\\Omega_{M}\\) in good agreement with the input ones. 
On the contrary, the QCDM model still gives a good match with the simulated data, but \\(\\Omega_{M}\\) is slightly biased high and \\(w\\) may be artificially pushed into the phantom region \\(w<-1\\). This suggests the intriguing possibility that phantom models turn out to be the best fit to SNeIa data only because of a systematic error on the EoS. It is worth stressing, however, that this result is only preliminary, being based on a limited dataset. To further substantiate it, one should carry out a Fisher matrix analysis or detailed Monte Carlo simulations, also taking into account other probes such as the full CMBR anisotropy spectrum. Having been inspired by classical thermodynamics, the EoS considered are phenomenologically motivated, but lack an underlying theoretical model. To overcome this difficulty, we have worked out a scalar field interpretation, reconstructing the self - interaction potential in such a way that the quintessence EoS is the same as the \\(MB\\), \\(Dt\\) or \\(PR\\) EoS. The potential \\(V(\\phi)\\) thus obtained is very well approximated by an analytical expression that is neither exponential - like nor power law - like, but has formally the same form as in SUGRA inspired models. However, there is a significant difference since, for small \\(\\phi\\), both potentials scale as \\(\\phi^{\\alpha}\\), but \\(\\alpha>0\\) for our models rather than negative as in the SUGRA scenario. It could be interesting to work out the consequences of such a difference. On the other hand, it is also possible that the EoS considered have to be interpreted as _effective_ ones, as is the case in some braneworld inspired dark energy models [18] or for the curvature fluid [19; 20; 21] in \\(f(R)\\) theories. We would like to conclude with a general comment. As is well known, the perfect fluid EoS is only a crude approximation of a _real_ fluid that is usually used in cosmology since it represents the simplest way to fit the available data. However, as first shown in the Van der Waals quintessence scenario [29; 30] and further demonstrated here, abandoning the perfect fluid EoS still makes it possible to fit the available astrophysical data with the same accuracy, so that the use of realistic EoS turns out to be motivated also _a posteriori_. In our opinion, the era of _precision cosmology_ calls for _precision theory_, so that the time has come to abandon approximate descriptions such as the perfect fluid one. Which is the _most realistic_ description of the dark energy term is a topic that is worth addressing with the help of hints coming from thermodynamic analogies.

## Appendix A Some details on the EoS

Although inspired by classical thermodynamics, the EoS we have considered are somewhat exotic, so we believe it is useful to give some further details from the thermodynamic point of view. As a preliminary step, let us denote by \\((p,V,T)\\) the pressure, the volume and the temperature of the fluid and by a subscript \\(c\\) these quantities evaluated at the _critical point_. Let us also remember that the critical point is defined by the conditions : \\[\\left(\\frac{\\partial p}{\\partial V}\\right)_{T=cst}=\\left(\\frac{\\partial^{2}p}{ \\partial V^{2}}\\right)_{T=cst}=0. \\tag{10}\\] Let us also denote with the subscript \\(r\\) the reduced quantities, i.e. \\(x_{r}\\equiv x/x_{c}\\). In what follows, we show how Eqs.(5) - (8) are obtained starting from their thermodynamical analogs. 
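As a concrete illustration of how the two derivative conditions above fix the EoS parameters in terms of the critical quantities, the short symbolic computation below works them out for the textbook van der Waals EoS (the case underlying the Van der Waals quintessence scenario recalled in the conclusions). It is only a sketch: the same two conditions are what determine \\((a,b)\\) as functions of \\((T_{c},V_{c})\\) for the four EoS treated in this appendix.

```python
import sympy as sp

R, T, V, a, b = sp.symbols('R T V a b', positive=True)

# van der Waals EoS, used here as a simple, well-known test case
p = R * T / (V - b) - a / V**2

# Critical-point conditions: first and second derivatives with respect to V vanish
crit_eqs = [sp.Eq(sp.diff(p, V), 0), sp.Eq(sp.diff(p, V, 2), 0)]

# Solve for the critical volume and temperature in terms of (a, b)
sols = sp.solve(crit_eqs, [V, T], dict=True)
for s in sols:
    Vc, Tc = s[V], s[T]
    pc = sp.simplify(p.subs({V: Vc, T: Tc}))
    print("V_c =", Vc, "   T_c =", Tc, "   p_c =", pc)
# textbook result: V_c = 3*b, T_c = 8*a/(27*R*b), p_c = a/(27*b**2)
```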
As a general remark, note that the temperature of the dark energy fluid should be intended as an _effective_ rather than a _physical_ one in order to avoid problems with negative values. Note that this problem is also present in the case of the perfect fluid EoS where a negative \\(w<0\\) is formally equivalent to a negative \\(T\\). ### Redlich - Kwong Let us first consider the \\(RK\\) EoS that is defined as : \\[p=\\frac{RT}{V-b}-\\frac{a}{V(V+b)T^{1/2}} \\tag{11}\\] with \\(R\\) the gas constant and \\((a,b)\\) two model parameters. Inserting Eq.(11) into Eq.(10), we get : \\[a=\\sqrt{3-2\\sqrt{2}}\\ RT_{c}^{3/2}V_{c}\\ \\ \\ \\,\\ \\ \\ \\ b=(1-\\sqrt{2})V_{c}\\] so that Eq.(11) may be rewritten as : \\[p=\\tilde{p}_{c}\\times\\frac{(T_{r}V_{r}^{2})^{-1/2}}{(1-\\sqrt{2})^{-1}V_{r}-1} \\left(\\frac{T_{r}^{3/2}V_{r}}{\\sqrt{3-2\\sqrt{2}}}-1\\right) \\tag{12}\\] with : \\[\\tilde{p_{c}}=\\frac{\\sqrt{2(3-2\\sqrt{2})}}{(1-\\sqrt{2})(1-\\sqrt{3-2\\sqrt{2}})} \\times p_{c}\\,\\] \\[p_{c}=\\frac{1-\\sqrt{3-2\\sqrt{2}}}{\\sqrt{2}}\\times\\frac{RT_{c}}{V_{c}}\\simeq 0.414\\ RT_{c}/V_{c}\\.\\] Note that the critical pressure is almost half the perfect fluid one. Let us now assume that the temperature is constant and equal to its critical value3 so that \\(T_{r}=1\\). Using then \\(V=1/\\rho\\to V_{r}=\\rho_{c}/\\rho\\), after some algebra, Eq.(12) may be finally rewritten as : Footnote 3: The ansatz \\(T=T_{c}\\) is somewhat arbitrary, but does not lead to any loss of generality. Actually, choosing a different \\(T\\) only rescales the EoS. Since we do not know either \\(T\\) or \\(T_{c}\\), it is a useful working hypothesis to set \\(T_{r}=1\\). \\[\\tilde{p}=\\beta\\eta\\times\\frac{1-\\sqrt{3-2\\sqrt{2}}\\alpha\\eta}{1-(1-\\sqrt{2} )\\alpha\\eta}\\] which is the same as Eq.(5) having posed \\(\\tilde{p}=p/\\rho_{crit}\\), \\(\\eta=\\rho/\\rho_{crit}\\) and defined : \\[\\alpha=\\rho_{crit}/\\rho_{c}\\ \\ \\ \\,\\ \\ \\ \\ \\beta=\\frac{\\sqrt{2}}{1-\\sqrt{3-2 \\sqrt{2}}}\\times p_{c}/\\rho_{c}\\.\\] Note that, for the best fit parameters in Table 1, \\(\\beta<0\\) so that \\(p_{c}<0\\) as expected since dark energy is known to have negative pressure. ### Modified Berthelot Let us summarize here the main steps above for the case of the \\(MB\\) EoS. The pressure \\(p\\) as function of temperature \\(T\\) and volume \\(V\\) is implicitly defined as : \\[p=\\frac{RT}{V}\\left[1+\\frac{9}{128}\\left(\\frac{p}{p_{c}}\\right)\\left(\\frac{T}{ T_{c}}\\right)^{-1}\\left(1-\\frac{T_{c}^{2}}{T^{2}}\\right)\\right] \\tag{13}\\] where the critical pressure is given as : \\[p_{c}=\\frac{83}{128}\\frac{RT_{c}}{V_{c}}\\.\\] Using this relation, we may rewrite Eq.(13) as : \\[p=\\frac{128}{83}p_{c}\\times\\frac{T_{r}/V_{r}}{1-(9/128)V_{r}^{-1}(1-6T_{r}^{- 2})}. \\tag{14}\\] Setting \\(T_{r}=1\\) and \\(V_{r}=\\rho_{c}/\\rho\\), after some simple algebra, we finally get : \\[\\tilde{p}=\\frac{\\beta\\eta}{1+\\alpha\\eta}\\] with \\(\\tilde{p}=p/\\rho_{crit}\\), \\(\\eta=\\rho/\\rho_{crit}\\) and we have defined : \\[\\alpha=\\frac{45\\rho_{crit}}{128\\rho_{c}}\\ \\ \\ \\,\\ \\ \\ \\ \\beta=\\frac{128^{2}}{45 \\times 83}\\frac{p_{c}}{\\rho_{c}}\\simeq 4.4(p_{c}/\\rho_{c})\\.\\] As for the \\(RK\\) case, the best fit parameters in Table 1 gives \\(\\beta<0\\) as expected. 
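To make the reduced expression just derived more tangible, the following minimal sketch (with illustrative, non best fit values of \\(\\alpha\\) and \\(\\beta\\)) evaluates \\(\\tilde{p}=\\beta\\eta/(1+\\alpha\\eta)\\) through the corresponding barotropic factor \\(w=\\tilde{p}/\\eta=\\beta/(1+\\alpha\\eta)\\): for \\(\\alpha\\eta\\ll 1\\) the EoS behaves like a perfect fluid with constant \\(w\\simeq\\beta\\), while for \\(\\alpha\\eta\\gg 1\\) one has \\(w\\simeq(\\beta/\\alpha)/\\eta\\), in line with the limits discussed in the text above.

```python
import numpy as np

def w_mb(eta, alpha, beta):
    """Barotropic factor w = p/rho for the reduced Modified Berthelot EoS,
    p_tilde = beta * eta / (1 + alpha * eta)."""
    return beta / (1.0 + alpha * eta)

# Illustrative (not best fit) parameters; beta < 0 encodes the negative pressure.
alpha, beta = 2.0, -1.1

for eta in np.logspace(-2, 2, 5):          # rho / rho_crit over four decades
    print(f"eta = {eta:8.2f}   w = {w_mb(eta, alpha, beta):+.3f}")

print("small alpha*eta :  w -> beta =", beta)
print("large alpha*eta :  w -> (beta/alpha)/eta, i.e. |w| decreases with the density")
```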
### Dieterici In terms of thermodynamical quantities, the \\(Dt\\) EoS reads : \\[p=\\frac{RT}{V-b}\\exp\\left(-\\frac{a}{RTV}\\right) \\tag{15}\\] where the two parameters \\((a,b)\\) may be determined solving Eq.(10) thus obtaining : \\[a=2RT_{c}V_{c}\\ \\ \\ \\,\\ \\ \\ \\ b=V_{c}/2\\.\\]Introducing reduced variables \\((T_{r},V_{r})\\) and using the expression for the critical pressure : \\[p_{c}=\\frac{2RT_{c}}{e^{2}V_{c}}\\,\\] we rewrite Eq.(10) as : \\[p=\\frac{p_{c}T_{r}}{2V_{r}-1}\\exp\\left[2(1-T_{r}^{-1}V_{r}^{-1}\\right]\\,. \\tag{12}\\] With the usual positions \\(T_{r}=1\\) and \\(V_{r}=\\rho_{c}\\rho\\), the above relation finally becomes : \\[\\tilde{p}=\\frac{\\beta\\eta}{2-\\alpha\\eta}\\exp\\left[2(1-\\alpha\\eta)\\right]\\] which is the same as Eq.(7) provided one defines tilted quantities as usual and \\[\\alpha=\\rho_{crit}/\\rho\\ \\ \\ \\,\\ \\ \\ \\ \\beta=p_{c}/\\rho_{c}\\.\\] Not surprisingly, \\(\\beta\\) (and hence \\(p_{c}\\)) turns out to be negative for the best fit parameters. ### Peng - Robinson As a final case, let us consider the \\(PR\\) EoS starting from its expression in terms of thermodynamical quantities : \\[p=\\frac{RT}{V-b}-\\frac{a}{V(V+b)+b(V-b)}. \\tag{13}\\] Solving the equations for the critical points allows us to express \\((a,b)\\) in terms of \\((T_{c},V_{c})\\) giving : \\[a=c_{a}RT_{c}V_{c}\\ \\ \\ \\,\\ \\ \\ \\ b=c_{b}V_{c}\\] with \\(c_{a}\\simeq 1.487\\) and \\(c_{b}\\simeq 0.253\\). The critical pressure turns out to be : \\[p_{c}=\\frac{RT_{c}/V_{c}}{1-c_{b}}\\left[1-\\frac{c_{a}}{(1+c_{b})/(1-c_{b})+c_{ b}}\\right]\\simeq 0.307RT_{c}/V_{c}\\.\\] Introducing reduced variables, Eq.(13) rewrites as : \\[p=\\frac{p_{c}T_{r}c_{p}^{-1}}{V_{r}-c_{b}}\\left[1-\\frac{c_{a}T_{r}^{-1}}{V_{r} (V_{r}+c_{b})/(V_{r}-c_{b})+c_{b}}\\right]. \\tag{14}\\] Eq.(8) is finally obtained by setting in the above relation \\(T_{r}=1\\) and \\(V_{r}=\\rho_{c}/\\rho\\) thus getting : \\[\\tilde{p}=\\frac{\\beta\\eta}{1-\\alpha\\eta}\\left[1-\\frac{(c_{a}/c_{b})\\alpha\\eta }{(1+\\alpha\\eta)/(1-\\alpha\\eta)+\\alpha\\eta}\\right]\\] with \\(\\tilde{p}\\) and \\(\\eta\\) defined as usual, while we have set : \\[\\alpha=\\rho_{crit}/\\rho_{c}\\ \\ \\ \\,\\ \\ \\ \\ \\beta=\\frac{p_{c}/\\rho_{c}}{c_{b}c_{p }}\\.\\] Once again, the best fit parameters gives \\(\\beta<0\\) and hence \\(p_{c}<0\\) as expected. ## References * (1) P. de Bernardis et al. 2000, Nature, 404, 955 * (2) R. Stompor et al., ApJ, 561, L7, 2001; C.B. Netterfield et al., ApJ, 571, 604, 2002; R. Rebolo et al., MNRAS, 353, 747, 2004 * (3) D.N. Spergel et al., ApJS, 148, 175, 2003 * (4) A. G. Riess et al., AJ, 116, 1009, 1998; S. Perlmutter et al., ApJ, 517, 565, 1999; R.A. Knop et al., ApJ, 598, 102, 2003; J.L. Tonry et al., ApJ, 594, 1, 2003; B.J. Barris et al., ApJ, 602, 571, 2004; * (5) A.G. Riess et al., ApJ, 607, 665, 2004 * (6) S. Dodelson et al., ApJ, 572, 140, 2002; W.J. Percival et al., MNRAS, 337, 1068, 2002; A.S. Szalay et al., ApJ, 591, 1, 2003; E. Hawkins et al., MNRAS, 346, 78, 2003; A.C. Pope et al., ApJ, 607, 655, 2004 * ph/0405013, 2004 * (8) S.M. Carroll, W.H. Press, E.L. Turner, ARAA, 30, 499, 1992; V. Sahni, A. Starobinski, Int. J. Mod. Phys. D, 9, 373, 2000 * (9) M. Tegmark et al., Phys. Rev. D, 69, 103501, 2004 * (10) U. Seljak et al., Phys. Rev. D, 71, 103515, 2005 * (11) P.J.E. Peebles, B. Rathra, Rev. Mod. Phys., 75, 559, 2003; T. Padmanabhan, Phys. Rept., 380, 235, 2003 * (12) A. Kamenshchik, U. Moschella, V. Pasquier, Phys. Lett. B, 511, 265, 2001; N. Bilic, G.B. 
Tupper, R.D. Viollier, Phys. Lett. B, 535, 17, 2002; M.C. Bento, O. Bertolami, A.A. Sen, Phys. Rev. D, 67, 063003 * th/0405034, 2004 * (14) V.F. Cardone, A. Troisi, S. Capozziello, Phys. Rev. D, 69, 083517, 2004; S. Capozziello, A. Melchiorri, A. Schirone, Phys. Rev. D, 70, 101301, 2004 * ph/0506371, Phys. Rev. D accepted * th/0505215, 2005 * th/0506212, 2005 * (18) G.R. Dvali, G. Gabadadze, M. Porrati, Phys. Lett. B, 485, 208, 2000; G.R. Dvali, G. Gabadadze, M. Kolanovic, F. Nitti, Phys. Rev. D, 64, 084004, 2001; G.R. Dvali, G. Gabadadze, M. Kolanovic, F. Nitti, Phys. Rev. D,64, 024031, 2002; A. Lue, R. Scoccimarro, G. Starkman, Phys. Rev. D, 69, 124015, 2004 * [19] S. Capozziello, Int. J. Mod. Phys. D, 11, 483, 2002 * ph/0303041, 2003 * ph/0410135; S. Carloni, P.K.S. Dunsby, S. Capozziello, A. Troisi, gr -qc/0410046, 2004 * [22] S. Nojiri, S.D. Odintsov, Phys. Lett. B, 576, 5, 2003; S. Nojiri, S.D. Odintsov, Mod. Phys. Lett. A, 19, 627, 2003; S. Nojiri, S.D. Odintsov, Phys. Rev. D, 68, 12352, 2003; S.M. Carroll, V. Duvvuri, M. Trodden, M. Turner, Phys. Rev. D, 70, 043528, 2004 * [23] S. Capozziello, V.F. Cardone, A. Troisi, Phys. Rev. D, 71, 043503, 2005 * [24] D.N. Vollick, Phys. Rev. 68, 063510, 2003; X.H. Meng, P. Wang, Class. Quant. Grav., 20, 4949, 2003; E.E. Flanagan, Phys. Rev. Lett. 92, 071101, 2004; E.E. Flanagan, Class. Quant. Grav., 21, 417, 2004; X.H. Meng, P. Wang, Class. Quant. Grav., 21, 951, 2004; G.M. Kremer and D.S.M. Alves, Phys. Rev. D, 70, 023503, 2004 * [25] S. Nojiri, S.D. Odintsov, Gen. Rel. Grav., 36, 1765, 2004; X.H. Meng, P. Wang, Phys. Lett. B, 584, 1, 2004 * [26] G. Allemandi, A. Borowiec, M. Francaviglia, Phys. Rev. D, 70, 043524, 2004; G. Allemandi, A. Borowiec, M. Francaviglia, Phys. Rev. D, 70, 103503, 2004; G. Allemandi, A. Borowiec, M. Francaviglia, S.D. Odintsov, gr -qc/0504057, 2005 * [27] U. Alam, V. Sahni, D. Saini, A.A. Starobinsky, MNRAS, 354, 275, 2004; H.K. Jassal, J.S. Bagla, T. Padmanabhan, MNRAS, 356, 11, 2005 * [28] Rowlinson J.S., Widom B. 1982, _Molecular Theory of Capillarity_, Oxford University Press, Oxford (UK) * [29] S. Capozziello, S. De Martino, M. Falanga, Phys. Lett. A, 299, 494, 2002; S. Capozziello, V.F. Cardone, S. Carloni, S. De Martino, M. Falanga, A. Troisi, M. Bruni, JCAP, 0504, 005, 2005 * [30] Kremer G.M. 2003, Phys. Rev. D, 68, 123507; Kremer G.M. 2004, Gen. Rel. Grav., 36, 1423 * [31] Peebles P.J.E. 1993, _Principle of physical cosmology_, Princeton Univ. Press, Princeton (USA); Peacock J. 1999, _Cosmological physics_, Cambridge University Press, Cambridge (UK) * [32] R.A. Daly, S.G. Djorgovski, ApJ, 612, 652, 2004 * [33] Guerra E.J., Daly R.A., Wan L. 2000, ApJ, 544, 659; Daly R.A., Guerra E.J. 2002, AJ, 124, 1831; Podariu S., Daly R.A., Mory M.P., Ratra B. 2003, ApJ, 584, 577; Daly R.A., Djorgovski S.G. 2003, ApJ, 597, 9 * [34] W.L. Freedman et al., ApJ, 553, 47, 2001 * [35] L.L.R. Williams, P. Saha, AJ, 119, 439, 2000; V.F. Cardone, S. Capozziello, V. Re, E. Piedipalumbo, A&A, 379, 72, 2001; V.F. Cardone, S. Capozziello, V. Re, E. Piedipalumbo, A&A, 382, 792, 2002; C. Tortora, E. Piedipalumbo, V.F. Cardone, MNRAS, 354, 353, 2004; T. York, I.W.A. Browne, O. Wucknitz, J.E. Skelton, MNRAS, 357, 124, 2005 * [36] J.P. Hughes, M. Birkinshaw, ApJ, 501, 1, 1998; R. Saunders et al., MNRAS, 341, 937, 2003; R.W. Schmidt, S.W. Allen, A.C. Fabian, MNRAS, 352, 1413, 2004 * [37] Y. Wang, P. Mukherjee, ApJ, 606, 654, 2004 * [38] Y. Wang, M. Tegmark, Phys. Rev. Lett., 92, 241302, 2004 * [39] W. Hu, N. 
Sugiyama, ApJ, 471, 542, 1996 * [40] D. Kirkman, D. Tyler, N. Suzuki, J.M. O'Meara, D. Lubin, ApJS, 149, 1, 2003 * ph/0501171, 2005 * [42] M.A. Strauss et al., AJ, 124, 1810 * [43] L. Krauss and B. Chaboyer, Science, Jan 3 2003 issue * [44] R. Cayrel et al., Nature, 409, 691, 2001 * [45] M. Doran, M. Lilley, J. Schwindt, C. Wetterich, ApJ, 559, 501, 2001; * [46] M. Doran, M. Lilley, MNRAS, 330, 965, 2002 * ph/0505253, 2005 * [48] A.A. Starobinski, JETP Lett., 68, 757, 1988; D. Huterer, M.S. Turner, Phys. Rev. D, 60, 081301, 1999; T. Chiba, T. Nakamura, Phys. Rev. D, 62, 121301, 2000 * [49] P.G. Ferreira, M. Joyce, Phys. Rev. D, 58, 023503, 1998; T. Barreiro, E.J. Copeland, N.J. Nunes, Phys. Rev. D, 61, 127301, 2000 * [50] C. Rubano, P. Scudellaro, Gen. Rel. Grav., 34, 307, 2001; M. Pavlov, C. Rubano, M. Sazhin, P. Scudellaro, ApJ, 566, 619, 2002; C. Rubano, M. Sereno, MNRAS, 335, 30, 2002; C. Rubano, P. Scudellaro, E. Piedipalumbo, S. Capozziello, M. Capone, Phys. Rev. D, 69, 103510, 2004; M. Demianski, E. Piedipalumbo, C. Rubano, C. Tortora, A&A, 431, 27, 2005 * [51] B. Ratra, P.J.E. Peebles, Phys. Rev. D, 37, 3406, 1988; C. Wetterich, Nucl. Phys. B, 302, 668, 1988 * [52] P. Brax, J. Martin, Phys. Lett. B, 468, 40, 1999; P. Brax, J. Martin, A. Riazuelo, Phys. Rev. D, 62, 103505, 2000; P. Brax, J. Martin, Phys. Rev. D, 71, 063530, 2005
Abandoning the perfect fluid hypothesis, we investigate here the possibility that the dark energy equation of state (EoS) \\(w\\) is a nonlinear function of the energy density \\(\\rho\\). To this aim, we consider four different EoS describing classical fluids near thermodynamical critical points and discuss the main features of cosmological models made out of dust matter and a dark energy term with the given EoS. Each model is tested against the data on the dimensionless coordinate distance to Type Ia Supernovae and radio galaxies, the shift and the acoustic peak parameters and the positions of the first three peaks in the anisotropy spectrum of the cosmic microwave background radiation. We propose a possible interpretation of each model in the framework of scalar field quintessence, determining the shape of the self - interaction potential \\(V(\\phi)\\) that gives rise to each of the considered thermodynamical EoS. As a general result, we demonstrate that replacing the perfect fluid EoS with more general expressions gives the possibility of both successfully solving the problem of cosmic acceleration and escaping the resort to phantom models. pacs: 98.80.-k, 98.80.Es, 97.60.Bw, 98.70.Dk
# Running coupling at finite temperature and chiral symmetry restoration in QCD Jens Braun Institut fur Theoretische Physik, Philosophenweg 16 and 19, 69120 Heidelberg, Germany Holger Gies Institut fur Theoretische Physik, Philosophenweg 16 and 19, 69120 Heidelberg, Germany ## I Introduction Strongly interacting matter is believed to have fundamentally different properties at high temperature than at low or zero temperature [1]. Whereas the latter can be described in terms of ordinary hadronic states, a hadronic picture at increasing temperature is eventually bound to fail; instead, a description in terms of quarks and gluons is expected to arise naturally owing to asymptotic freedom. In the transition region between these asymptotic descriptions, effective degrees of freedom, such as order parameters for the chiral or deconfining phase transition, may characterize the physical properties in simple terms, i.e., with a simple effective action [2]. If a simple description at or above the phase transition does not exist and the system is strongly interacting in all conceivable sets of variables [3], a formulation in terms of microscopic degrees of freedom has the greatest potential to bridge wide ranges in parameter space from first principles. In this Letter, we report a nonperturbative study of finite-temperature QCD parameterized in terms of microscopic degrees of freedom: gluons and quarks. We use the functional renormalization group (RG) [4; 5; 6] and concentrate on two problems which are accessible in microscopic language: first, we compute the running of the gauge coupling driven by quantum as well as thermal fluctuations, generalizing previous zero-temperature studies [7]. Second, we investigate the induced quark dynamics including its back-reactions on gluodynamics, in order to monitor the status of chiral symmetry at finite temperature. With this strategy, the critical temperature of chiral symmetry restoration can be computed. This Letter is particularly devoted to a presentation of the central physical mechanisms which our study reveals for the dynamics near the phase transition; all technical details can be found in a separate publication [8]. The functional RG yields a flow equation for the effective average action \\(\\Gamma_{k}\\)[5], \\[\\partial_{t}\\Gamma_{k}=\\frac{1}{2}\\mathrm{STr}\\,\\partial_{t}R_{k}\\,(\\Gamma_{k }^{(2)}+R_{k})^{-1},\\quad t=\\ln\\frac{k}{\\Lambda}, \\tag{1}\\] where \\(\\Gamma_{k}\\) interpolates between the bare action \\(\\Gamma_{k=\\Lambda}=S\\) and the full quantum effective action \\(\\Gamma=\\Gamma_{k=0}\\); \\(\\Gamma_{k}^{(2)}\\) denotes the second functional derivative with respect to the fluctuating field. The regulator function \\(R_{k}\\) specifies the Wilsonian momentum-shell integration, such that the flow of \\(\\Gamma_{k}\\) is dominated by fluctuations with momenta \\(p^{2}\\simeq k^{2}\\). An approximate solution to the flow equation can reliably describe also nonperturbative physics if the relevant degrees of freedom in the form of RG relevant operators are kept in the ansatz for the effective action. As the crucial ingredient, the choice of this ansatz has to be guided by all available physical information. ## II Truncated RG flow for thermal gluodynamics In this work, we truncate the space of possible action functionals to a tractable set of operators which is motivated from various sources and principles. 
For the principle of gauge invariance, we use the background-field formalism as developed in [9], i.e., we work in the Landau-de Witt background-field gauge and follow the strategy of [7; 10] for an approximate resolution of the gauge constraints [11]. Decomposing the gauge field into a background-field part and a fluctuation field, this strategy focusses on the flow in the background-field sector of the action and neglects an independent running of the fluctuation-field sector. In general, the solution of the strongly-coupled gauge sector represents the most delicate part of this study, owing to a lack of sufficient _a priori_ control of nonperturbative truncation schemes. A first and highly nontrivial check of a solution is already given by the stability of its RG flow, since oversimplifying truncations which miss the right degrees of freedom generically exhibit IR instabilities of Landau-pole type. The IR stability of our solution arises from an important conceptual ingredient: we optimize our truncated flow with an adjustment of the regulator to the spectral flow of \\(\\Gamma^{(2)}\\)[7; 12], instead of a naive canonical momentum-shell regularization. For this, we integrate over shells of eigenvalues of \\(\\Gamma^{(2)}\\), by inserting \\(\\Gamma^{(2)}\\) into the regulator and accounting for the flow of these eigenvalues. More precisely, we use the exponential regulator [5] of the form \\(R_{k}(\\Gamma^{(2)})=\\Gamma^{(2)}/[\\exp(\\Gamma^{(2)}/\\mathcal{Z}_{k}k^{2})-1]\\), where \\(\\mathcal{Z}_{k}\\) denotes the wave function renormalization of the corresponding field (gluons or ghost). In a perturbative language, the optimizing spectral adjustment allows for a resummation of a larger class of diagrams. The main part of our truncation consists of an infinite set of operators given by powers of the Yang-Mills Lagrangian, \\[\\Gamma_{k,\\text{YM}}[A]=\\int_{x}\\mathcal{W}_{k}(\\theta),\\quad\\theta=\\frac{1}{ 4}F^{a}_{\\mu\ u}F^{a}_{\\mu\ u}. \\tag{2}\\] In the function \\(\\mathcal{W}_{k}(\\theta)=W_{1}\\theta+\\frac{1}{2}W_{2}\\theta^{2}+\\frac{1}{3!}W_ {3}\\theta^{3}\\dots\\), the coefficients \\(W_{i}\\) form an infinite set of generalized couplings. This truncation represents a gauge-covariant gradient expansion in the field strength, neglecting higher-derivative terms and more complicated color and Lorentz structures. Hence, the truncation includes arbitrarily high gluonic correlators projected onto their small-momentum limit and onto the particular color and Lorentz structure arising from powers of \\(F^{2}\\). Since perturbative gluons are certainly not the true degrees of freedom in the IR, an inclusion of infinitely many gluonic operators appears mandatory in order to have a chance to capture the relevant physics in this gluonic language. The covariant gradient expansion does not only facilitate a systematic classification of the gluonic operators, it is also a consistent expansion in the framework of the functional RG.1 Footnote 1: The gluonic gradient expansion as a local expansion can, of course, not be expected to give reliable answers to all questions; for instance, bound-state phenomena such as glue balls are encoded in the nonlocal pole structure of higher-order vertices. Furthermore, our truncation includes standard (bare) gauge-fixing and ghost terms, neglecting any nontrivial running in these sectors. 
We emphasize that we do not expect that this truncation reflects the true behavior in these sectors, but we assume that the non-trivial running in these sectors does not qualitatively modify the running of the background-field sector where we read off the physics. A well-known problem of gauge-covariant gradient expansions in gluodynamics is the appearance of an IR unstable Nielsen-Olesen mode in the spectrum [13]. At finite temperature \\(T\\), this problem is severe, since such a mode will be strongly populated by thermal fluctuations, typically spoiling perturbative computations [14]. Our flow equation allows us to resolve this problem with the aid of the IR regulator. We remove this mode's unphysical thermal population by a \\(T\\)-dependent regulator. In this way, we obtain a strictly positive spectrum for the thermal fluctuations. In the present truncation, the flow equation results in a differential equation for the function \\(\\mathcal{W}_{k}\\) of the form \\[\\partial_{t}\\mathcal{W}_{k}(\\theta)=\\mathcal{F}[\\partial_{y}\\mathcal{W}_{k}, \\partial_{\\theta}^{2}\\mathcal{W}_{k},\\partial_{t}\\partial_{\\theta}\\mathcal{W}_ {k},\\partial_{t}\\partial_{\\theta}^{2}\\mathcal{W}_{k}], \\tag{3}\\] where the extensive functional \\(\\mathcal{F}\\) depends on derivatives of \\(\\mathcal{W}_{k}\\), on the coupling \\(g\\) and the temperature \\(T\\); it is displayed in [8]. We use the nonrenormalization of the product of coupling and background field, \\(gA\\), for a nonperturbative definition of the running coupling in terms of the background wave function renormalization \\(Z_{k}\\equiv W_{1}\\)[15], \\[\\beta_{g^{2}}\\equiv\\partial_{t}g^{2}=\\eta\\,g^{2},\\quad\\eta=-\\frac{1}{Z_{k}} \\partial_{t}Z_{k}. \\tag{4}\\] The flow of \\(Z_{k}\\), and thus the running of the coupling, is successively driven by all generalized couplings \\(W_{i}\\). Keeping track of all contributions from the flows of the \\(W_{i}\\), Eq. (3) boils down to a recursive relation, \\[\\partial_{t}W_{i}=f_{ij}(g,T)\\partial_{t}W_{j}, \\tag{5}\\] with \\(f_{ij}(g,T)\\) representing the expansion coefficients of the RHS of Eq. (3), which obey \\(f_{ij}=0\\) for \\(j>i+1\\). Solving Eq. (5) for \\(\\partial_{t}Z_{k}\\equiv\\partial_{t}W_{1}\\), we obtain a nonperturbative \\(\\beta_{g^{2}}\\) function in terms of an infinite asymptotic but resumable power series, \\[\\beta_{g^{2}}=\\sum_{m=1}^{\\infty}a_{m}(\\tfrac{T}{k})\\frac{(g^{2})^{m+1}}{[2( 4\\pi)^{2}]^{m}}, \\tag{6}\\] with temperature-dependent coefficients \\(a_{m}\\).2 For explicit representations of the \\(a_{m}\\) and further details, we refer the reader to [8]. At zero \\(T\\), the \\(\\beta_{g^{2}}\\) function agrees well with perturbation theory for small coupling, reproducing one-loop exactly and two-loop within a few-percent error. For larger coupling, the resumed integral representation of Eq. (6) reveals a second zero of the \\(\\beta_{g^{2}}\\) function for finite \\(g^{2}\\), corresponding to an IR attractive non-Gaussian fixed point \\(g_{*}^{2}>0\\), which confirms the results of [7]. The resulting zero-temperature flow of the coupling is displayed by the (red) solid \"T=0\" line in Fig. 1. Deviations from perturbation theory become significant at scales below 1 GeV; the IR fixed-point behavior sets in on scales of order \\(\\mathcal{O}(100\\ \\text{MeV})\\). 
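To illustrate how a running coupling with these qualitative features is obtained in practice, the sketch below integrates a toy \\(\\beta\\) function that reduces to the one-loop form at weak coupling and vanishes at an IR fixed point. It is emphatically not the resummed series of Eq. (6): the pure-glue one-loop coefficient is the only exact ingredient, the fixed-point value is an illustrative stand-in, the temperature dependence is omitted, and the initial condition anticipates the \\(\\tau\\)-mass-scale value quoted in the next sentence.

```python
import numpy as np
from scipy.integrate import solve_ivp

b0 = 11.0                      # pure SU(3) one-loop coefficient; N_f quarks would lower it by 2*N_f/3
alpha_star = 2.3               # illustrative IR fixed-point value (stand-in, not the computed one)
gstar2 = 4.0 * np.pi * alpha_star

def beta_g2(t, g2):
    # one-loop-like running at small g2, deformed so that the beta function
    # vanishes (IR-attractively) at g2 = gstar2
    return -b0 / (16.0 * np.pi**2) * g2**2 * (1.0 - g2 / gstar2)

k_uv, alpha_uv = 1.777, 0.322  # GeV: coupling at the tau mass scale
g2_uv = 4.0 * np.pi * alpha_uv

t_end = np.log(0.01 / k_uv)    # flow from k = m_tau down to k = 10 MeV
sol = solve_ivp(beta_g2, (0.0, t_end), [g2_uv], dense_output=True, rtol=1e-8)

for k in (1.777, 1.0, 0.5, 0.2, 0.1, 0.01):
    t = np.log(k / k_uv)
    print(f"k = {k:6.3f} GeV   alpha(k) = {sol.sol(t)[0] / (4.0 * np.pi):5.3f}")
```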
As initial condition, we use the measured value of the coupling at the \\(\\tau\\) mass scale [16], \\(\\alpha_{\\text{s}}=0.322\\), which evolves to the world average of \\(\\alpha_{\\text{s}}\\) at the \\(Z\\) mass scale. We stress that no other parameter or scale is used as an input neither at \\(T=0\\) nor at finite temperature as described below. The appearance of an IR fixed point in Yang-Mills theories is a well-investigated phenomenon in the Landau gauge [17], where the running coupling can be defined with the aid of the ghost-gluon vertex. Whereas the universal perturbative running of the coupling is identical for the different gauges despite the different definitions of the coupling, we moreover observe a qualitative agreement between the Landau gauge and the Landau-de Witt background-field gauge in the nonperturbative IR in the form of an attractive fixed point. This points to a deeper connection between the two gauges which deserves further study and may be traced back to certain non-renormalization properties in the two gauges. Note that the IR fixed point in the Landau gauge is in accordance with the Kugo-Ojima and Gribov-Zwanziger confinement scenarios [18]. In general, an IR fixed point is also compatible with the existence of a mass gap [7], since such a gap in the physical spectrum typically induces threshold and decoupling behavior towards the IR. At finite temperature \\(T\\), the UV behavior remains unaffected for scales \\(k\\gg T\\) and agrees well with the one-loop perturbative running coupling at zero temperature, as expected.3 In the IR, the running is strongly modified: The coupling increases towards lower scales until it develops a maximum near \\(k\\sim T\\). Below, the coupling decreases according to a power law \\(g^{2}\\sim k/T\\), see Fig. 1. This behavior has a simple explanation: the wavelength of fluctuations with momenta \\(p^{2}<T^{2}\\) is larger than the extent of the compactified Euclidean time direction. Hence, these modes become effectively 3-dimensional and their limiting behavior is governed by the spatial \\(3d\\) Yang-Mills theory. This dimensional reduction has been discussed for the running coupling in a perturbative weak-coupling framework in [20]. Our results generalize this to arbitrary couplings. As a nontrivial new result, we observe the existence of a non-Gaussian IR fixed point \\(g^{2}_{3d,*}\\) also in the reduced 3-dimensional theory. By virtue of a straightforward matching between the \\(4d\\) and \\(3d\\) coupling, the observed power law for the \\(4d\\) coupling is a direct consequence of the strong-coupling IR behavior in the \\(3d\\) theory, \\(g^{2}(k\\ll T)\\sim g^{2}_{3d,*}\\,k/T\\). We find a \\(3d\\) fixed-point value of \\(\\alpha_{3d,*}\\equiv g^{2}_{3d,*}/(4\\pi)\\simeq 2.7\\) which demonstrates that the system is strongly coupled despite the naive decrease of the \\(4d\\) coupling; also the \\(3d\\) background anomalous dimension is large, approaching \\(\\eta_{3d}\\to 1\\) near the IR fixed point. This scenario is reminiscent to hot \\(4d\\)\\(\\phi^{4}\\) theory which approaches a strongly coupled IR limit at high temperatures analogous to the \\(3d\\) Wilson-Fisher fixed point [21]. The observation of an IR fixed point in the \\(3d\\) theory again agrees with recent results in the Landau gauge [22]. Note that our flow to the \\(3d\\) theory is driven by thermal as well as quantum fluctuations. 
This is different from a purely thermal flow [23; 24] as used in [25], where the IR limit is four-dimensional, being characterized by a decoupling of fluctuation modes owing to thermal masses.

Footnote 3: Our truncation does not reproduce higher orders in the high-temperature small-coupling expansion which proceeds with odd powers in \\(g\\) beyond one loop [19]. These odd powers are a result of a resummation which, in the language of the effective action, requires non-local operators, being neglected so far. In any case, we do not expect the underlying quasi-particle picture of these operators to hold near the chiral phase transition, such that their omission should not qualitatively modify our low-temperature results.

The \\(3d\\) IR fixed point and the perturbative UV behavior already qualitatively determine the momentum asymptotics of the running coupling. Phenomenologically, the behavior of the coupling in the transition region at mid-momenta is most important, which is quantitatively provided by the full \\(4d\\) finite-temperature flow equation.

Figure 1: Running SU(3) Yang-Mills coupling \\(\\alpha_{\\rm YM}(k,T)\\) as a function of \\(k\\) for \\(T=0,100,300\\,{\\rm MeV}\\), compared to the one-loop running for vanishing temperature.

## III Truncated RG flow for chiral quark dynamics

Extending our calculations to QCD, we first include the quark contributions to all gluonic operators of our truncation, as done in Ref. [26] for QED. This effectively corresponds to Heisenberg-Euler-type quark-loop contributions to the flow of the function \\({\\cal W}_{k}(\\theta)\\). In turn, we obtain quark-loop contributions to the coefficients \\(a_{m}\\) in Eq. (6) and thus to the running coupling, accounting for the screening nature of fermionic fluctuations; here, we confine ourselves to massless quarks, but current-quark masses can straightforwardly be included [8]. The determination of the critical temperature \\(T_{\\rm cr}\\) above which chiral symmetry is restored requires a second crucial ingredient for our truncation: we study the gluon-induced quark self-interactions of the type \\[\\Gamma_{\\psi,{\\rm int}}=\\int\\hat{\\lambda}_{\\alpha\\beta\\gamma\\delta}\\bar{\\psi}_{ \\alpha}\\psi_{\\beta}\\bar{\\psi}_{\\gamma}\\psi_{\\delta}, \\tag{7}\\] where \\(\\alpha,\\beta,\\dots\\) denote collective indices including color, flavor, and Dirac structures. The resulting flow equations for the \\(\\hat{\\lambda}\\)'s are a straightforward finite-temperature generalization of those derived and analyzed in [27; 28] and will not be displayed here for brevity. The boundary condition \\(\\hat{\\lambda}_{\\alpha\\beta\\gamma\\delta}\\to 0\\) for \\(k\\to\\Lambda\\to\\infty\\) guarantees that the \\(\\hat{\\lambda}\\)'s at \\(k<\\Lambda\\) are solely generated by quark-gluon dynamics from first principles (e.g., by 1PI \"box\" diagrams with 2-gluon exchange). We emphasize that this is an important difference from, e.g., the Nambu-Jona-Lasinio (NJL) model, where the \\(\\hat{\\lambda}\\)'s are independent input parameters. We consider all linearly-independent four-quark interactions permitted by gauge and chiral symmetry. A priori, these include color and flavor singlets and octets in the (S\\(-\\)P), (V\\(-\\)A) and (V\\(+\\)A) channels. U\\({}_{\\rm A}\\)(1)-violating interactions are neglected, since they may become relevant only inside the \\(\\chi\\)SB regime or for small \\(N_{\\rm f}\\). 
We drop any nontrivial momentum dependencies of the \\(\\hat{\\lambda}\\)'s and study these couplings in the point-like limit \\(\\hat{\\lambda}(|p_{i}|\\ll k)\\). This is a severe approximation, since it inhibits a study of QCD properties in the chirally broken regime; for instance, mesons manifest themselves as momentum singularities in the \\(\\hat{\\lambda}\\)'s. Nevertheless, the point-like truncation can be a reasonable approximation in the chirally symmetric regime, as has recently been quantitatively confirmed for the zero-temperature chiral phase transition in many-flavor QCD [27]. Our truncation is based on the assumption that quark dynamics both near the finite-\\(T\\) phase boundary as well as near the many-flavor phase boundary [29] is driven by similar mechanisms. Our restrictions on the four-quark interactions result in a total number of four linearly-independent \\(\\hat{\\lambda}\\) couplings; all others channels are related to this minimal basis by means of Fierz transformations. Introducing the dimensionless couplings \\(\\lambda=k^{2}\\hat{\\lambda}\\), the \\(\\beta\\) functions for the \\(\\lambda\\) couplings are of the form \\[\\partial_{t}\\lambda=2\\lambda-\\lambda A\\lambda-b\\lambda g^{2}-cg^{4}, \\tag{8}\\] where the coefficients \\(A\\), \\(b\\), \\(c\\) are temperature dependent, \\(A\\) being a matrix and \\(b\\) a vector in the space of \\(\\lambda\\) couplings (for explicit representations, see [8; 30]). Within this truncation, a simple picture for the chiral dynamics arises: at weak gauge coupling, the RG flow generates quark self-interactions of order \\(\\lambda\\sim g^{4}\\) via the last term in Eq. (8) with a negligible back-reaction on the gluonic RG flow. If the gauge coupling in the IR remains smaller than a critical value \\(g<g_{\\rm cr}\\), the \\(\\lambda\\) self-interactions remain bounded, approaching fixed points \\(\\lambda_{*}\\) in the IR. Technically, the \\(\\sim g^{4}\\) term is balanced by the first term \\(\\sim 2\\lambda\\) at these fixed points. The fixed points are the counter-parts of the Gaussian fixed point \\(\\lambda_{*}^{\\rm Gauss}=0\\) in NJL-like models (at \\(g^{2}=0\\)), here being modified by the gauge dynamics. At these fixed points, the fermionic subsystem remains in the chirally invariant phase which is indeed realized at high temperatures \\(T>T_{cr}\\). If the gauge coupling increases beyond the critical coupling \\(g>g_{\\rm cr}\\), the IR fixed points \\(\\lambda_{*}\\) are destabilized and the quark self-interactions become critical [27; 28]. Then the gauge-fluctuation-induced \\(\\lambda\\)'s have become strong enough to contribute as relevant operators to the RG flow, with the term \\(\\sim\\lambda A\\lambda\\) dominating Eq. (8). In this case, the \\(\\lambda\\)'s increase rapidly, approaching a divergence at a finite scale \\(k=k_{\\chi{\\rm SB}}\\). In fact, this seeming Landau-pole behavior indicates \\(\\chi\\)SB and the formation of chiral condensates: the \\(\\lambda\\)'s are proportional to the inverse mass parameter of a Ginzburg-Landau effective potential for the order parameter in a (partially) bosonized formulation, \\(\\lambda\\sim 1/m^{2}\\)[31; 32]. Thus, the scale at which the self-interactions formally diverge is a good measure for the scale \\(k_{\\chi{\\rm SB}}\\) where the effective potential for the chiral order parameter becomes flat and is about to develop a nonzero vacuum expectation value. 
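The competition just described — IR fixed points \\(\\lambda_*\\) for \\(g<g_{\\rm cr}\\), runaway growth with a finite-scale divergence for \\(g>g_{\\rm cr}\\) — can be made concrete with a deliberately crude one-channel caricature of Eq. (8). The coefficients \\(a=b=c=1\\) below are hypothetical placeholders (in the paper \\(A\\) is a matrix and \\(b\\) a vector with temperature-dependent entries), so only the qualitative picture should be read off this sketch.

```python
# Toy one-channel version of Eq. (8):  d lambda / d t = 2 lambda - a lambda^2 - b lambda g^2 - c g^4,
# integrated towards the IR at fixed gauge coupling g.  Fixed points exist only for
# g^2 below g2_cr; above it, lambda diverges at a finite RG "time", mimicking the
# onset of chiral symmetry breaking described in the text.
import math

a, b, c = 1.0, 1.0, 1.0                           # hypothetical coefficients
g2_cr = 2.0 / (b + 2.0 * math.sqrt(a * c))        # discriminant of the fixed-point equation vanishes

def flow_to_ir(g2, ds=1e-3, s_max=40.0, lam_max=1e3):
    lam, s = 0.0, 0.0                             # UV boundary condition: lambda -> 0
    while s < s_max and lam < lam_max:
        # flowing towards the IR (s increasing): d lambda / ds = -beta(lambda)
        lam += ds * (a * lam**2 - (2.0 - b * g2) * lam + c * g2**2)
        s += ds
    return lam, s

for g2 in (0.9 * g2_cr, 1.1 * g2_cr):
    lam, s = flow_to_ir(g2)
    status = "bounded (IR fixed point)" if lam < 1e3 else f"diverges near s ~ {s:.1f}"
    print(f"g^2 = {g2:.3f}:  lambda_IR ~ {min(lam, 1e3):.3g}  -> {status}")
```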
Whether or not chiral symmetry is preserved by the ground state therefore depends on the running QCD coupling \\(g\\) relative to the critical coupling \\(g_{\\rm cr}\\) which is required to trigger \\(\\chi\\)SB. For instance, at zero temperature, the SU(3) critical coupling for the quark system is \\(\\alpha_{\\rm cr}\\equiv g_{\\rm cr}^{2}/(4\\pi)\\simeq 0.8\\) in our RG scheme [32], being only weakly dependent on the number of flavors [27]. Since the IR fixed point for the gauge coupling is much larger \\(\\alpha_{*}>\\alpha_{\\rm cr}\\) for not too many massless flavors, the QCD vacuum is characterized by \\(\\chi\\)SB. At finite temperature, the running of the gauge coupling is considerably modified in the IR. Moreover, the critical coupling is \\(T\\) dependent, \\(g_{\\rm cr}=g_{\\rm cr}(T/k)\\). This can be understood from the fact that all quark modes acquire thermal masses and, thus, stronger interactions are required to excite critical quark dynamics. This thermal decoupling is visible in the coefficients \\(A\\), \\(b\\), and \\(c\\) in Eq. (8), all of which vanish in the limit \\(T/k\\to\\infty\\). In Fig. 2, we show the running coupling \\(\\alpha_{\\rm s}\\) and its critical value \\(\\alpha_{\\rm cr}\\) for \\(T=130\\,{\\rm MeV}\\) and \\(T=220\\,{\\rm MeV}\\) as a function of the regulator scale \\(k\\).
Figure 2: Running QCD coupling \\(\\alpha_{s}(k,T)\\) for \\(N_{\\rm f}=3\\) massless quark flavors and \\(N_c=3\\) colors and the critical value of the running coupling \\(\\alpha_{\\rm cr}(k,T)\\) as a function of \\(k\\) for \\(T=130\\,{\\rm MeV}\\) (upper panel) and \\(T=220\\,{\\rm MeV}\\) (lower panel). The existence of the \\((\\alpha_{\\rm s},\\alpha_{\\rm cr})\\) intersection point (marked by a circle) in the former indicates that the \\(\\chi\\)SB quark dynamics can become critical.
The intersection point \\(k_{\\rm cr}\\) between the two curves marks the scale where the quark dynamics become critical. Below the scale \\(k_{\\rm cr}\\), the system runs quickly into the \\(\\chi\\)SB regime. We estimate the critical temperature \\(T_{\\rm cr}\\) as the lowest temperature for which no intersection point between \\(\\alpha_{\\rm s}\\) and \\(\\alpha_{\\rm cr}\\) occurs.4 Compared to [8], we have further resolved the finite-\\(T\\) Lorentz structure of the four-fermion couplings [30], resulting in a slightly improved estimate for \\(T_{\\rm cr}\\): we find \\(T_{\\rm cr}\\approx 172\\,^{+40}_{-34}\\,{\\rm MeV}\\) for \\(N_{\\rm f}=2\\) and \\(T_{\\rm cr}\\approx 148\\,^{+32}_{-31}\\,{\\rm MeV}\\) for \\(N_{\\rm f}=3\\) massless quark flavors, in good agreement with lattice simulations [33]. The errors arise from the experimental uncertainties on \\(\\alpha_{\\rm s}\\)[16]. Dimensionless ratios of observables are less contaminated by this uncertainty of \\(\\alpha_{\\rm s}\\). For instance, the relative difference for \\(T_{\\rm cr}\\) for \\(N_{\\rm f}\\)=2 and 3 flavors is \\(\\frac{T_{\\rm cr}^{N_{\\rm f}=2}-T_{\\rm cr}^{N_{\\rm f}=3}}{(T_{\\rm cr}^{N_{\\rm f}=2}+T_{\\rm cr}^{N_{\\rm f}=3})/2}=0.150\\ldots 0.165\\) in reasonable agreement with the lattice value5 of \\(\\sim 0.121\\pm 0.069\\). Footnote 4: Strictly speaking, this is a sufficient but not a necessary criterion for chiral-symmetry restoration. In this sense, our estimate for \\(T_{\\rm cr}\\) is an upper bound for the true \\(T_{\\rm cr}\\). Small corrections to this estimate could arise, if the quark dynamics becomes uncritical again by a strong decrease of the gauge coupling towards the IR. 
Footnote 5: The large uncertainty on the lattice value arises from the fact that the statistical errors on the \\(N_{\\rm f}=2\\) and \\(N_{\\rm f}=3\\) results for \\(T_{\\rm cr}\\) are uncorrelated. Furthermore, we compute the critical temperature for the case of many massless quark flavors \\(N_{\\rm f}\\), see Fig. 3. We observe an almost linear decrease of the critical temperature for increasing \\(N_{\\rm f}\\) with a slope of \\(\\Delta T_{\\rm cr}=T(N_{\\rm f})-T(N_{\\rm f}+1)\\approx 24\\,{\\rm MeV}\\) for small \\(N_{\\rm f}\\). In addition, we find a critical number of quark flavors, \\(N_{\\rm f}^{\\rm cr}=12\\), above which no chiral phase transition occurs. This result for \\(N_{\\rm f}^{\\rm cr}\\) agrees with other studies based on the 2-loop \\(\\beta\\) function [29]; however, the precise value of \\(N_{\\rm f}^{\\rm cr}\\) is exceptionally sensitive to the 3-loop coefficient which can bring \\(N_{\\rm f}^{\\rm cr}\\) down to \\(N_{\\rm f}^{\\rm cr}\\simeq 10^{+1.6}_{-0.7}\\)[27]. Since we do not consider our truncation to be sufficiently accurate for a precise estimate of this coefficient, our study does not contribute to a reduction of the current error on \\(N_{\\rm f}^{\\rm cr}\\). Instead, we would like to emphasize that the flattening shape of the phase boundary near \\(N_{\\rm f}^{\\rm cr}\\) is a generic prediction of the IR fixed-point scenario: here, the symmetry status of the system is governed by the fixed-point regime where dimensionful scales such as \\(\\Lambda_{\\rm QCD}\\) lose their importance [8]. In any case, since \\(N_{\\rm f}^{\\rm cr}\\) is smaller than the asymptotic-freedom bound \\(N_{\\rm f}^{\\rm a.f.}=\\frac{11}{2}N_{\\rm c}=16.5\\), our study provides further evidence for the existence of a regime where QCD is chirally symmetric but still asymptotically free. ## IV Conclusion In summary, we have determined the \\(\\chi\\)SB phase boundary in QCD in the plane of temperature and flavor number. Our quantitative results are in accord with lattice simulations for \\(N_{\\rm f}=2\\), 3. For larger \\(N_{\\rm f}\\), we observe a linear decrease of \\(T_{\\rm cr}\\), leveling off near \\(N_{\\rm f}^{\\rm cr}\\) owing to the IR fixed-point structure of QCD. Our results are based on a consistent operator expansion of the QCD effective action that can systematically be generalized to higher orders. The qualitative validity and the quantitative convergence of this expansion are naturally difficult to analyze in this strongly-coupled gauge system, particularly for the gluonic sector. The fact that our truncation results in a stable RG flow at strong interactions is already a highly non-trivial check, since any ansatz which misses the true degrees of freedom generically fails to produce such a flow. A more quantitative evaluation of the validity of our expansion will require the inclusion of higher-order operators in the covariant gradient expansion as well as higher-order ghost terms. An inclusion of operators that distinguish between electric and magnetic sectors at finite \\(T\\), e.g., \\((u_{\\mu}F_{\\mu\\nu})^{2}\\) with the heat-bath four-velocity \\(u_{\\mu}\\), should make it possible to distinguish between differing coupling strengths in the two sectors, as done in [25] using an ansatz inspired by hard thermal loop computations. We observe an improved control over the truncation in the quark sector at least for the chirally symmetric phase, which suffices to trace out the phase boundary. 
Quantitatively, this has been confirmed by a stability analysis of universal quantities such as \\(N_{\\rm f}^{\\rm cr}\\) under a variation of the regulator in [27] which gives strong support to the point-like truncation of the quark self-interactions. Qualitatively, the reliability of the quark truncation can also be understood by the fact that the feed-back of higher-order operators, such as \\(\\sim(\\bar{\\psi}\\psi)^{4}\\), is generally suppressed by the one-loop structure of the flow equation.
Figure 3: Chiral-phase-transition temperature \\(T_{\\rm cr}\\) versus the number of massless quark flavors \\(N_{\\rm f}\\). In the dashed-line region, we expect \\(\\rm U_{A}(1)\\)-violating operators to become quantitatively important. The flattening at \\(N_{\\rm f}\\gtrsim 10\\) is a consequence of the IR fixed-point structure [8].
Future extensions should include mesonic operators which can be treated by RG rebosonization techniques [32]. This would not only provide access to the broken phase and mesonic properties, but also permit a study of the order of the phase transition. For further phenomenology, the present quantitative results that rely on only one physical input parameter can serve as a promising starting point. The authors are grateful to J. Jaeckel, J.M. Pawlowski, and H.-J. Pirner for useful discussions. H.G. acknowledges support by the DFG under contract Gi 328/1-3 (Emmy-Noether program). J.B. acknowledges support by the GSI Darmstadt. ## References * (1) F. Karsch and E. Laermann, arXiv:hep-lat/0305025; D. H. Rischke, Prog. Part. Nucl. Phys. **52**, 197 (2004). * (2) R. D. Pisarski and F. Wilczek, Phys. Rev. D **29**, 338 (1984). * (3) E. Shuryak, Prog. Part. Nucl. Phys. **53**, 273 (2004); M. Gyulassy and L. McLerran, Nucl. Phys. A **750**, 30 (2005). * (4) F. Wegner, A. Houghton, Phys. Rev. **A 8** (1973) 401; K. G. Wilson and J. B. Kogut, Phys. Rept. **12** (1974) 75; J. Polchinski, Nucl. Phys. **B231** (1984) 269. * (5) C. Wetterich, Phys. Lett. B **301** (1993) 90. * (6) M. Bonini, M. D'Attanasio and G. Marchesini, Nucl. Phys. B **409** (1993) 441; U. Ellwanger, Z. Phys. C **62** (1994) 503; T. R. Morris, Int. J. Mod. Phys. A **9** (1994) 2411. * (7) H. Gies, Phys. Rev. D **66**, 025006 (2002); **68**, 085015 (2003). * (8) J. Braun and H. Gies, JHEP **0606**, 024 (2006) [arXiv:hep-ph/0602226]. * (9) M. Reuter and C. Wetterich, Nucl. Phys. B **417**, 181 (1994); F. Freire, D. F. Litim and J. M. Pawlowski, Phys. Lett. B **495**, 256 (2000). * (10) M. Reuter and C. Wetterich, Phys. Rev. D **56**, 7893 (1997). * (11) U. Ellwanger, Phys. Lett. B **335** (1994) 364. * (12) D. F. Litim and J. M. Pawlowski, Phys. Rev. D **66**, 025030 (2002). * (13) N. K. Nielsen and P. Olesen, Nucl. Phys. B **144**, 376 (1978). * (14) W. Dittrich and V. Schanbacher, Phys. Lett. B **100**, 415 (1981); B. Muller and J. Rafelski, Phys. Lett. B **101**, 111 (1981); A. O. Starinets, A. S. Vshivtsev and V. C. Zhukovsky, Phys. Lett. B **322**, 403 (1994); P. N. Meisinger and M. C. Ogilvie, Phys. Lett. B **407**, 297 (1997); H. Gies, Ph.D. Thesis, Tubingen U. (1999). * (15) L. F. Abbott, Nucl. Phys. B **185**, 189 (1981). * (16) S. Bethke, Nucl. Phys. Proc. Suppl. **135** (2004) 345. * (17) L. von Smekal, R. Alkofer and A. Hauck, Phys. Rev. Lett. **79**, 3591 (1997); D. Atkinson and J. C. Bloch, Mod. Phys. Lett. A **13**, 1055 (1998); C. Lerche and L. von Smekal, Phys. Rev. D **65**, 125006 (2002); C. S. Fischer and R. Alkofer, Phys. Lett. **B536**, 177 (2002); J. M. Pawlowski, D. F. Litim, S. 
Nedelko and L. von Smekal, Phys. Rev. Lett. **93**, 152002 (2004); C. S. Fischer and H. Gies, JHEP **0410**, 048 (2004). * (18) T. Kugo and I. Ojima, Prog. Theor. Phys. Suppl. **66** (1979) 1; V. N. Gribov, Nucl. Phys. B **139**, 1 (1978); D. Zwanziger, Phys. Rev. D **69** (2004) 016002. * (19) J. I. Kapusta, Nucl. Phys. B **148**, 461 (1979). * (20) M. A. van Eijck, C. R. Stephens and C. G. van Weert, Mod. Phys. Lett. A **9**, 309 (1994) [arXiv:hep-ph/9308227]. * (21) D. O'Connor, C. R. Stephens and F. Freire, Mod. Phys. Lett. A **8**, 1779 (1993). * (22) A. Maas, J. Wambach and R. Alkofer, Eur. Phys. J. C **42**, 93 (2005); A. Maas, J. Wambach, B. Gruter and R. Alkofer, Eur. Phys. J. C **37**, 335 (2004). * (23) M. D'Attanasio and M. Pietroni, Nucl. Phys. B **472** 711 (1996); Nucl. Phys. B **498** (1997) 443. * (24) D. F. Litim and J. M. Pawlowski, in _The Exact Renormalization Group_, Eds. Krasnitz et al, World Sci (1999) 168, [hep-th/9901063]. * (25) D. Comelli and M. Pietroni, Phys. Lett. B **417**, 337 (1998) [arXiv:hep-ph/9708489]. * (26) H. Gies and J. Jaeckel, Phys. Rev. Lett. **93**, 110405 (2004). * (27) H. Gies and J. Jaeckel, arXiv:hep-ph/0507171. * (28) H. Gies, J. Jaeckel and C. Wetterich, Phys. Rev. D **69** (2004) 105008. * (29) T. Banks and A. Zaks, Nucl. Phys. B **196**, 189 (1982); V. A. Miransky and K. Yamawaki, Phys. Rev. D **55**, 5051 (1997); T. Appelquist, J. Terning and L. C. R. Wijewardhana, Phys. Rev. Lett. **77**, 1214 (1996). * (30) J. Braun, Dissertation, University of Heidelberg (2006). * (31) U. Ellwanger and C. Wetterich, Nucl. Phys. B **423**, 137 (1994). * (32) H. Gies and C. Wetterich, Phys. Rev. D **65** (2002) 065001; Phys. Rev. D **69**, 025001 (2004); J. Jaeckel, hep-ph/0309090. * (33) F. Karsch, E. Laermann and A. Peikert, Nucl. Phys. B **605** (2001) 579.
We analyze the running gauge coupling at finite temperature for QCD, using the functional renormalization group. The running of the coupling is calculated for all scales and temperatures. At finite temperature, the coupling is governed by a fixed point of the 3-dimensional theory for scales smaller than the corresponding temperature. The running coupling can drive the quark sector to criticality, resulting in chiral symmetry breaking. Our results provide for a quantitative determination of the phase boundary in the plane of temperature and number of massless flavors. Using the experimental value of the coupling at the \\(\\tau\\) mass scale as the only input parameter, we obtain, e.g., for \\(N_{\\rm f}=3\\) massless flavors a critical temperature of \\(T_{\\rm cr}\\approx 148\\,\\)MeV in good agreement with lattice simulations.
# Self-similar pressure oscillations in neutron star envelopes as probes of neutron star structure A. I. Chugunov Ioffe Physicotechnical Institute, St. Petersburg, Russia E-mail: [email protected] ## 1 Introduction Neutron stars can be considered as resonators where various oscillation modes can be excited. These oscillations are attracting much attention because, in principle, they can be used to study the internal structure of neutron stars. Some of them (for instance, r-modes) can be accompanied by gravitational radiation. Because neutron stars are relativistic objects, their oscillations must be studied in the framework of General Relativity. The relativistic theory of oscillations was developed in a series of papers by Thorne and coauthors (Thorne & Campolattaro, 1967; Price & Thorne, 1969; Thorne, 1969a,b; Campolattaro & Thorne, 1970; Ipser & Thorne, 1973). In particular, the rapid (\\(\\sim 1\\) s) damping of p-modes with multipolarity \\(l=2\\) by gravitational radiation was demonstrated by Thorne (1969a). An exact treatment of general-relativistic effects is complicated, but in many cases it is possible to use the relativistic Cowling approximation (McDermott et al., 1983). An analysis of various oscillation modes and mechanisms for their dissipation was carried out by McDermott (1988). Let us also note the review paper by Stergioulas (2003), which contains an extensive bibliography. As a rule, one considers neutron star oscillations with low values of \\(l\\). Although neutron stars are objects at the final stage of stellar evolution, they can be seismically active for many reasons. Possible mechanisms for the generation of oscillations have been widely discussed in the literature (see, e.g., McDermott (1988); Stergioulas (2003) and references therein). Recently, much attention has been paid to r-modes - vortex oscillations that can be generated in rapidly rotating neutron stars and accompanied by powerful gravitational radiation. In addition, oscillations can be excited in neutron stars, for example, during X-ray bursts (nuclear explosions in outermost layers of accreting neutron stars), bursting activity of magnetars (anomalous X-ray pulsars and soft gamma-ray repeaters; see, e.g., Kaspi (2004)), and glitches (sudden changes of spin periods) of ordinary pulsars. In this paper we focus of high-frequency (\\(\\sim 100\\) kHz) pressure oscillations (p-modes) with high multipolarity \\(l\\gtrsim 100\\) localized in neutron star envelopes (crusts). In our previous paper (Chugunov & Yakovlev, 2005) we have studied these oscillations for \\(l\\gtrsim 500\\). In that case p-modes are localized in the outer envelope (before the neutron drip point, at densities \\(\\rho\\lesssim 4\\times 10^{11}\\) g cm\\({}^{-3}\\)), where the equation of state (EOS) of stellar matter is relatively smooth. Accordingly, the oscillation spectrum is simple and well established. In the present paper we extend our analysis to p-modes with lower \\(l\\). These oscillations penetrate into the inner envelope of the star, where the EOS undergoes considerable softening due to neutronization and becomes more complicated (essentially different for ground-state and accreted matter). We show that the neutron drip affects strongly the oscillation spectrum. If detected, this spectrum would give valuable information on the EOS in neutron star envelopes and also on global parameters of neutron stars (their masses and radii). 
## 2 Formalism Following Chugunov & Yakovlev (2005) we study oscillations localized in a thin neutron star envelope. It is convenient to use the approximation of a plane-parallel layer, and write the space-time metric in the envelope as \\[{\\rm d}s^{2}=c^{2}\\,{\\rm d}t^{2}-\\,{\\rm d}z^{2}-R^{2}\\,({\\rm d}\\vartheta^{2}+ \\sin^{2}\\vartheta\\,{\\rm d}\\varphi^{2}), \\tag{1}\\] where the local time \\(t\\) and local depth \\(z\\) are related to the Schwarzschild time \\(\\tilde{t}\\) and circumferential radius \\(r\\) by \\[t=\\tilde{t}\\,\\sqrt{1-R_{\\rm G}/R},\\quad z=(R-r)/\\sqrt{1-R_{\\rm G}/R}, \\tag{2}\\] \\(r=R\\) is the circumferential radius of the stellar surface, \\(\\vartheta\\) and \\(\\varphi\\) are spherical angles, \\(R_{\\rm G}=2GM/c^{2}\\) is the gravitation radius, and \\(M\\) is the gravitational mass of the neutron star. The metric (1) is locally flat and allows us to use the Newtonian hydrodynamic equations for a thin envelope with the gravitational acceleration \\[g=\\frac{GM}{R^{2}\\sqrt{1-R_{\\rm G}/R}}. \\tag{3}\\] The pressure in the envelope is primarily determined by degenerate electrons and neutrons (in the inner envelope), being almost independent of temperature \\(T\\). Accordingly, we can use the same zero-temperature EOS for the equilibrium structure of the envelope and for perturbations. Employing this EOS, we neglect the buoyancy forces and study p-modes. The linearized hydrodynamic equations (for a non-rotating star) can be rewritten as (see, e.g., the monograph by Lamb 1975) \\[\\frac{\\partial^{2}\\phi}{\\partial t^{2}}=c_{\\rm s}^{2}\\Delta\\phi+\\mathbf{g}\\cdot\\nabla\\phi, \\tag{4}\\] where \\(\\phi\\) is the velocity potential and \\(c_{\\rm s}^{2}\\equiv\\partial P_{0}/\\partial\\rho_{0}\\) is the squared sound speed. The velocity potential can be presented in the form \\[\\phi=e^{i\\omega t}\\,Y_{lm}(\\vartheta,\\varphi)\\,F(z), \\tag{5}\\] where \\(\\omega\\) is an oscillation frequency, and \\(Y_{lm}(\\vartheta,\\varphi)\\) is a spherical function (see, e.g., Varshalovich et al. (1988)). An unknown function \\(F(z)\\) obeys the equation \\[\\frac{{\\rm d}^{2}F}{{\\rm d}z^{2}}+\\frac{g}{c_{\\rm s}^{2}}\\frac{{\\rm d}F}{{\\rm d }z}+\\left(\\frac{\\omega^{2}}{c_{\\rm s}^{2}}-\\frac{l(l+1)}{R^{2}}\\right)F=0. \\tag{6}\\] The boundary condition at the stellar surface is \\(F(0)=0\\). It comes from the requirement of vanishing Lagrangian variation of the pressure at the surface. The formal condition \\(\\lim_{z\\to\\infty}F(z)=0\\) in the stellar interior should be imposed to localize oscillations in the envelope. Of course, the actual variable \\(z\\) is finite and the real depth of oscillation localization will be controlled in calculations. The EOS of matter in neutron star envelopes contains a sequence of first-order phase transitions associated with changes of nuclides with growing density. These phase transitions are relatively weak (the density jumps do not exceed 20 per cent). We should add boundary conditions at all phase transitions within the envelope. These are the two well-known conditions at a plane boundary between two liquids (Lamb 1975). The first condition can be written as \\[F_{1}^{\\prime}(z)=F_{2}^{\\prime}(z). \\tag{7}\\] It ensures equal radial velocities at both sides of the boundary. The second condition is \\[F_{1}=\\frac{\\rho_{2}}{\\rho_{1}}\\,F_{2}+\\left(\\frac{\\rho_{2}}{\\rho_{1}}-1\\right) \\,\\frac{g}{\\omega^{2}}\\,F_{1}^{\\prime}. \\tag{8}\\] It comes from the requirement of pressure continuity at the boundary. 
Note, that the boundary conditions (7) and (8) provide a source of buoyancy which leads to the density discontinuity of g-modes (see, e.g., McDermott 1990). Oscillations of a plane-parallel layer for a polytropic EOS (\\(P\\propto\\rho^{1+1/n}\\), \\(n\\) being the polytropic index) were studied analytically by Gough (1991). In this case, the squared sound speed is \\(c_{s}^{2}=g\\,z/n\\). The solution for eigenfrequencies is \\[\\omega_{k}^{2}=\\frac{g}{R}\\,\\sqrt{l(l+1)}\\,\\left(\\frac{2k}{n}+1\\right), \\tag{9}\\] and eigenmodes are given by \\[F_{k}(z)=\\exp\\left(-\\sqrt{l(l+1)}\\,\\frac{z}{R}\\right)L_{k}^{(n-1)}\\left(2\\, \\sqrt{l(l+1)}\\,\\frac{z}{R}\\right), \\tag{10}\\] where \\(L_{k}^{(n-1)}(x)\\) is a generalized Laguerre polynomial (Abramovitz & Stegun 1971), and \\(k=0,1,\\dots\\) is the number of radial nodes. Note, that the mode with \\(k=0\\) does not have any radial nodes; its properties are independent of the polytropic index \\(n\\). This mode corresponds to the vanishing Lagrangian variation of the density (incompressible motion). Adding the condition \\(\\triangle\\phi=\\triangle{\\boldsymbol{U}}=0\\) to Eq. (4), one can easily show that the mode with the frequency \\[\\omega_{0}^{2}=\\frac{g}{R}\\,\\sqrt{l(l+1)} \\tag{11}\\] and the eigenfunction \\(F_{0}(z)\\), defined by Eq. (10), is the proper mode for a wide class of EOSs. Note, that the boundary conditions (7) and (8) are automatically satisfied for this mode, and it is continuous at phase transitions. The oscillation frequency redshifted for a distant observer is \\[\\tilde{\\omega}_{0}^{2} = \\left(1-\\frac{R_{\\rm g}}{R}\\right)\\,\\frac{g}{R}\\,\\sqrt{l(l+1)} \\tag{12}\\] \\[= \\frac{G\\,M}{R^{3}}\\,\\sqrt{1-R_{\\rm G}/R}\\,\\sqrt{l(l+1)}.\\] The frequency \\(\\omega_{0}\\) will be used to normalize eigenfrequencies of other p-modes. The number of radial nodes \\(k\\) will be used to enumerate the modes. ### Self-similarity and scaling Let us use the equation of hydrostatic equilibrium \\({\\rm d}P/{\\rm d}z=\\rho\\,g\\) and transform Eq. (6) taking the equilibrium pressure \\(P\\) as an independent variable, \\[\\frac{{\\rm d}^{2}F}{{\\rm d}P^{2}}+\\frac{2}{\\rho\\,c_{\\rm s}^{2}}\\frac{{\\rm d}F }{{\\rm d}P}+\\frac{1}{\\rho^{2}}\\left(\\frac{\\omega^{2}}{g^{2}\\,c_{\\rm s}^{2}}- \\frac{l(l+1)}{g^{2}\\,R^{2}}\\right)F=0. \\tag{13}\\]The boundary conditions (7) and (8) can be written as \\[\\rho_{1}\\,\\frac{\\mathrm{d}F_{1}}{\\mathrm{d}P} = \\rho_{2}\\,\\frac{\\mathrm{d}F_{2}}{\\mathrm{d}P}, \\tag{14}\\] \\[F_{1} = \\frac{\\rho_{2}}{\\rho_{1}}\\,F_{2}+\\left(\\frac{\\rho_{2}}{\\rho_{1}}-1 \\right)\\,\\frac{g^{2}}{\\omega^{2}}\\rho_{1}\\,\\frac{\\mathrm{d}F_{1}}{\\mathrm{d}P}. \\tag{15}\\] Therefore, Eq. (13) with the boundary conditions (14), (15) and with regularity requirement can be treated as the equation for an eigennumber \\(\\lambda=\\omega^{2}/g^{2}\\) containing the scaling parameter \\(\\zeta=\\sqrt{l\\left(l+1\\right)}/(gR)\\) (with \\(\\zeta\\approx l/(gR)\\) for \\(l\\gg 1\\)). Accordingly, the eigenfrequencies can be written as \\[\\omega_{k}^{2}=g^{2}\\,g_{k}(\\zeta)=\\omega_{0}^{2}\\,f_{k}(\\zeta). \\tag{16}\\] Here, \\(g_{k}(\\zeta)\\) and \\(f_{k}(\\zeta)\\) are functions which can be calculated numerically. They are universal for all neutron stars with a given EOS in the envelope. The velocity potentials \\(F_{k}\\) are also universal functions of \\(P\\). 
Therefore, p-mode oscillations in stellar envelopes are self-similar and can be easily rescaled to a neutron star with any radius and mass. In principle, this can be used to determine \\(R\\) and \\(M\\) (see Sec. 3.2). ## 3 Numerical results Numerical results are presented for a \"canonical\" neutron star model, with the mass \\(M_{\\mathrm{c}}=1.4M_{\\odot}\\) and the radius \\(R_{\\mathrm{c}}=10\\) km. For this model, we have \\(g_{\\mathrm{c}}\\approx 2.42\\times 10^{14}\\) cm s\\({}^{-2}\\), \\[\\omega_{0}\\approx 1.56\\times 10^{5}\\left[l(l+1)/10^{4}\\right]^{1/4}\\,\\mathrm{s }^{-1} \\tag{17}\\] and (for a distant observer) \\[\\widetilde{\\omega}_{0}=\\omega_{0}\\,\\sqrt{1-R_{\\mathrm{G}}/R}\\approx 1.19 \\times 10^{5}\\left[l(l+1)/10^{4}\\right]^{1/4}\\,\\mathrm{s}^{-1}. \\tag{18}\\] Oscillation frequencies have been determined via a series of iterative trials, checking the coincidence of the mode number and the number of radial nodes. ### Equations of state We employ two models of matter in neutron star envelopes, the accreted and ground-state matter. For the accreted matter, we use the EOS of Haensel & Zdunik (1990) (HZ). It was derived by following transformations of atomic nuclei (beta captures, emission and absorption of neutrons, pycnonuclear reactions) in an accreted matter element with increasing the pressure. The EOS was calculated for the densities from \\(\\rho=3.207\\times 10^{7}\\) g cm\\({}^{-3}\\) to \\(1.462\\times 10^{13}\\) g cm\\({}^{-3}\\). For lower densities, we have taken the matter composed of \\({}^{56}\\)Fe and the EOS of degenerate electrons with electrostatic corrections. For higher densities, we use the EOS of the ground-state matter presented by Baym, Pethick & Sutherland (1971) (BPS) because, as remarked by Haensel & Zdunik (1990), the HZ EOS becomes very similar to the BPS EOS at \\(\\rho>10^{13}\\) g cm\\({}^{-3}\\). We have also considered envelopes composed of the ground-state (cold catalyzed) matter. In the outer envelope we use the EOS of Haensel & Pichon (1994) (HP) and the recent EOS of Ruster, Hempel & Schaffner-Bielich (2005) (RHS). For the inner envelope, we employ the EOS of Negele & Vautherin (1973). Phase transitions in these EOSs have been treated carefully using the boundary conditions (7) and (8) at any phase transition. For comparison, we have also employed the model of the outer envelope composed of ground-state matter with a smoothed composition (the smooth composition model - SCM). In the latter case we have included only a large density jump at the neutron drip boundary between the inner and outer envelopes. The squared sound speed \\(c_{s}^{2}\\) as a function of depth \\(z\\) for all these EOSs is shown in Fig. 1. The solid line is for the accreted envelope; the dashed, dotted and dash-and-dot lines are for the HP, RHS and SCM EOSs of the ground-state matter. The different versions of the ground-state EOS show approximately the same sound speed profiles, but the profile in the accreted envelope is significantly different. The depth of the accreted envelope (up to the density \\(2.004\\times 10^{14}\\) g cm\\({}^{-3}\\), which is the largest density in the envelope, where the atomic nuclei are present, for the BPS EOS) is \\(z\\approx 1150\\) m. For all models of the ground-state matter, the largest density in the envelope has been taken \\(\\approx 1.7\\times 10^{14}\\) g cm\\({}^{-3}\\); the envelope depth is \\(z\\approx 985\\) m. 
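Before turning to the computed spectra, the canonical-model numbers quoted above are easy to reproduce by hand. The following minimal sketch (not from the paper; it only plugs standard constants into Eqs. (3), (11), (12), (17) and (18)) recovers the quoted surface gravity and the fundamental frequency for \\(l(l+1)=10^{4}\\).

```python
# Quick numerical check of the canonical-model values (M = 1.4 M_sun, R = 10 km).
import math

G, c = 6.674e-8, 2.998e10           # CGS: cm^3 g^-1 s^-2, cm s^-1
M_sun = 1.989e33                    # g
M, R = 1.4 * M_sun, 1.0e6           # g, cm
R_G = 2 * G * M / c**2              # gravitational radius, cm
redshift = math.sqrt(1.0 - R_G / R)

g = G * M / (R**2 * redshift)       # Eq. (3): local surface gravity
ll = 1.0e4                          # l(l+1) = 10^4, i.e. l ~ 100

omega0 = math.sqrt(g / R * math.sqrt(ll))        # Eq. (11) / (17)
omega0_inf = omega0 * redshift                   # Eq. (12) / (18): for a distant observer

print(f"g                 = {g:.3e} cm s^-2   (text: ~2.42e14)")
print(f"omega0            = {omega0:.3e} s^-1  (text: ~1.56e5)")
print(f"redshifted omega0 = {omega0_inf:.3e} s^-1  (text: ~1.19e5)")
```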
### Eigenfrequencies Figures 2 and 3 show squares of dimensionless eigenfrequencies \\(\\omega_{k}^{2}/\\omega_{0}^{2}\\) versus multipolarity \\(l\\) for accreted and ground-state envelopes of the canonical neutron star. Because of the scaling (16) the figures can be easily transformed to a star with any gravity \\(g\\) and radius \\(R\\) by changing scale of the \\(l\\) axis by a factor of \\(g\\,R/(g_{\\mathrm{c}}\\,R_{\\mathrm{c}})\\). For any envelope, the modes with \\(l\\gtrsim 300\\) can be subdivided into two groups, with a pronounced linear dependence and with a weak dependence of \\(\\omega_{k}^{2}/\\omega_{0}^{2}\\) on \\(l\\). As will be shown in Section 3.4, the modes of the first type (_the inner modes_, shown by thicker lines in Fig. 2) are localized in the vicinity of the neutron drip point, while the modes of the second type (_the outer modes_) are localized in the outer envelope. In Figs. 2 and 3 one can see a number of quasi-intersections. When passing through a quasi-intersection point (with growing \\(l\\)), an inner mode gains an additional radial node, but an outer mode loses one. Figure 1: The squared sound speed \\(c_{s}^{2}\\) as a function of depth \\(z\\). Solid line is for an accreted envelope. The dashed, dotted and dash-and-dotted lines are for the HP, RHS and SCM EOSs of the ground-state matter. Let us consider the outer modes. The eigenfrequencies are the same (within \\(\\sim 1\\%\\)) for all ground-state EOSs (see Fig. 3). For the accreted envelope, eigenfrequencies are larger because the EOS is stiffer. With decreasing \\(l\\), oscillations penetrate deeper into the outer envelope, where the EOS is softer because electrons become relativistic, and because they undergo beta-captures. It leads to a gradual decrease of \\(\\omega_{k}^{2}/\\omega_{0}^{2}\\). As in the model with the polytropic EOS, given by Eq. (9), separations between squares of neighboring dimensionless eigenfrequencies \\(\\omega_{k}^{2}/\\omega_{0}^{2}\\) are approximately constant for a fixed \\(l\\). The weak decrease of separations with the growth of \\(k\\) is due to the penetration of oscillations into deeper layers of the star, where the EOS is softer. The latter effect is more pronounced for the ground-state matter owing to a stronger softening of the EOS. Finally, at \\(l\\sim 1000\\) the outer p-modes are localized in the outer layers of the outer envelope, where the matter is composed of \\({}^{56}\\)Fe nuclei for both accreted and ground-state EOSs. Accordingly, eigenfrequencies become nearly equal. Naturally, the oscillation frequencies of outer modes with \\(l\\gtrsim 500\\) for the ground state envelope are the same as calculated by Chugunov & Yakovlev (2005). To demonstrate explicitly that inner modes are caused by the neutron drip in the inner envelope, in Fig. 4 we present eigenfrequencies for a model envelope without any neutron drip. Here we employ the RHS EOS in the outer envelope but assume that the inner envelope is composed of \\({}^{116}\\)Se ions (the last element at the outer envelope) and electron gas (no free neutrons). The oscillation spectrum does not contain any inner modes. A small decrease of \\(\\omega_{k}^{2}/\\omega_{0}^{2}\\) for \\(200\\lesssim l\\lesssim 500\\) is produced by the softening of the EOS at the bottom of the outer envelope. The growth of frequencies at \\(l\\sim 100\\) is caused by the penetration of oscillations into the inner envelope, where our model EOS is polytropic (with the index \\(n=3\\)). 
Accordingly, oscillation frequencies tend to the values provided by the polytropic model (9). ### Inferring \\(M\\), \\(R\\), and the crustal EOS from oscillation spectrum If detected, outer modes would give us \\(\\widetilde{\\omega}_{0}\\), and therefore \\(M\\,R^{-3}\\,\\sqrt{1-R_{G}/R}\\); see Eq. (12). A detection of only one fundamental mode (\\(k=0\\)) would be sufficient to determine \\(M\\,R^{-3}\\,\\sqrt{1-R_{G}/R}\\). A detection of several outer modes (with different \\(l\\) and/or \\(k\\)) would confirm and strengthen this determination. Our calculations show that for the inner modes the ratio \\(\\omega_{\\rm in}^{2}/\\omega_{0}^{2}\\) is a linear function of \\(l\\). Using the scaling relation (16) we can present this linear dependence in the form \\[\\omega_{\\rm in}^{2}/\\omega_{0}^{2}=A+B\\,l,\\quad B=\\beta/(g_{14}R_{6}). \\tag{19}\\] Here, \\(g_{14}\\) is the surface gravity in units \\(10^{14}\\) cm s\\({}^{-2}\\), \\(R_{6}=R/10^{6}\\) cm=\\(R/10\\) km, while \\(A\\) and \\(\\beta\\) are dimensionless constants determined by the EOS in a neutron star envelope. For the canonical neutron star with the ground-state envelope, we obtain \\(A=0.75\\) and \\(B=0.0032\\) in the case of inner modes with lowest frequencies. For the same star with the accreted crust we have \\(A=0.65\\) and \\(B=0.0073\\). The values of \\(B\\) allow us to determine \\(\\beta\\). In this way we obtain \\[A=0.75,\\quad\\beta=0.0013 \\qquad\\mbox{for ground state crust}; \\tag{20}\\] \\[A=0.65,\\quad\\beta=0.0030 \\qquad\\mbox{for accreted crust}. \\tag{21}\\] Hence, the difference between the ground-state and accretion envelopes is quite pronounced in oscillation spectra. Therefore, if several (minimum two) inner modes could be detected in addition to outer modes, their frequencies could be fitted by a function (19) and the values of \\(A\\) and \\(B\\) could be determined. An accurate determination of \\(A\\) would enable one to distinguish between the ground-state and accretion envelopes. The value of \\(B\\) would then give \\(gR\\). Combining this \\(gR\\) with the value \\(M\\,R^{-3}\\,\\sqrt{1-R_{G}/R}\\), determined from the detection of the outer modes, one would get a simple system of two equations for two unknowns, \\(M\\) and \\(R\\). Thus, a detection of one outer mode and several inner ones could in principle enable one to discriminate between the ground-state and accreted envelopes and determine neutron star mass and radius (see the illustrative sketch below).
Figure 2: Squared normalized eigenfrequencies \\(\\omega_{k}^{2}/\\omega_{0}^{2}\\) versus multipolarity \\(l\\) for the accreted envelope of the canonical neutron star. The numbers next to curves indicate the number of radial nodes. Thin parts of the curves correspond to the outer modes and (for low \\(l\\)) modes which are spread over the entire envelope, while thick segments refer to the inner modes.
Figure 3: Same as in Fig. 2, but for the envelope composed of the ground-state matter (the inner modes are not emphasized). Lines are for the HP EOS; crosses ‘x’ are for the RHS EOS; crosses ‘+’ are for the SCM.
Figure 4: Same as in Fig. 2, but for a model envelope in which the outer envelope is composed of the ground-state matter and the inner envelope is composed of \\({}^{116}\\)Se.
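The inversion just described can be carried out in closed form. The sketch below is not from the paper: it assumes idealized, error-free "measurements" of \\(Q_{1}=GM\\,R^{-3}\\sqrt{1-R_{G}/R}\\) (from the outer modes) and \\(Q_{2}=gR\\) (from the fitted slope \\(B\\)), with the two numerical values computed here from the canonical model rather than from data, and the helper name `mass_radius_from` is ours.

```python
# Recover M and R from Q1 = G M R^-3 sqrt(1 - R_G/R) and Q2 = g R = G M / (R sqrt(1 - R_G/R)).
import math

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33     # CGS

def mass_radius_from(Q1, Q2):
    S = math.sqrt(Q1 * Q2)                     # = G M / R^2
    # Q2 = v / sqrt(1 - 2 v / c^2) with v = G M / R  ->  quadratic equation for v
    v = -Q2**2 / c**2 + math.sqrt(Q2**4 / c**4 + Q2**2)
    R = v / S
    M = S * R**2 / G
    return M, R

# "Observed" values generated from the canonical star (M = 1.4 M_sun, R = 10 km):
Q1, Q2 = 1.423e8, 2.427e20                     # s^-2, cm^2 s^-2
M, R = mass_radius_from(Q1, Q2)
print(f"M = {M / M_sun:.2f} M_sun,  R = {R / 1e5:.1f} km")   # ~1.40 M_sun, ~10.0 km
```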
### Eigenmodes Figures 5-8 show profiles of the angle-averaged energy density of oscillations as a function of \\(z\\). The root-mean-square amplitude of radial displacements of the stellar surface has been set equal to 1 m. The subscript of \\(\\varepsilon\\) indicates the number of radial nodes. The vertical dotted line marks the boundary between the inner and outer envelopes (\\(z\\approx 432\\) m for the accreted envelope, and \\(z\\approx 364\\) m for all ground-state envelopes of the canonical neutron star). Figure 5 depicts eigenmodes with \\(l=100\\) for the accreted envelope. The modes are spread over the entire envelope; their subdivision into the outer and inner modes is not obvious. However, some traces of two mode types are visible. The mode with one radial node (the dashed line), whose frequency belongs to the branch of the outer modes, is primarily localized in the vicinity of the neutron drip point. However, other modes do not demonstrate this feature. The effects of phase transitions are relatively small (\\(\\sim 20\\%\\)) and only slightly noticeable in Figure 5. They are local and do not change global (on scales of \\(\\gtrsim 20\\) m) energy density profiles. Figure 6 shows eigenmodes with \\(l=400\\) for the accreted envelope. This value of \\(l\\) is very close to the quasi-intersection point for the modes with 4 and 5 radial nodes (see Fig. 2). The subdivision into the outer and inner modes is clear - the modes with \\(k=\\)0, 1, 2, 3, 5, 6 radial nodes are localized in the outer envelope, but the energy of the mode with \\(k=\\)4 is concentrated in the outer part of the inner envelope. This subdivision is the same as in Section 3.2 (see Figure 2). The energy-density profiles of the fourth and fifth modes are very similar at \\(z\\lesssim 250\\) m, but at \\(z\\sim 500\\) m the energy densities differ by more than two orders of magnitude! The outer modes 'feel' the lowering of the sound speed in the outer layers of the inner envelope (see Fig. 1) and increase their energy density in this region. However, the increase is not so large as for the inner modes. The signatures of phase transitions are very small (\\(\\sim 10\\%\\)) and hardly visible in Figure 6. They are local and do not change energy density profiles on scales \\(\\gtrsim 20\\) m. Figures 7 and 8 are plotted for the ground-state envelope with the RHS EOS. The results for the HP and SCM are qualitatively the same. Figure 7 depicts eigenmodes with \\(l=100\\). The modes are localized in the entire envelope and cannot be subdivided into the outer and inner ones. The traces of these two types of modes are weaker than for the accreted envelope (see Fig. 5). The signatures of phase transitions in the outer envelope are small \\(\\lesssim 10\\%\\) and have scales \\(\\sim 10\\) m. They are noticeable only for the modes with a small number of radial nodes. Many modes show large (\\(\\sim 50\\%\\)) jumps of the energy density at the neutron drip point. Figure 8 shows eigenmodes with \\(l=500\\). This value of \\(l\\) is close to the quasi-intersection point for modes with \\(k=\\)2 and 3 and with \\(k=\\)5 and 6 (see Fig. 3). The modes can obviously be subdivided into two types.
Figure 5: Angle-averaged energy density of oscillations for modes with \\(l=100\\) in the accreted envelope. The subscript of \\(\\varepsilon\\) indicates the number of radial nodes. The root-mean-square amplitude of radial displacements at the stellar surface is 1 m. The vertical dotted line shows the boundary between the inner and outer envelopes.
Figure 6: Same as in Fig. 5 for modes with \\(l=400\\) in the accreted envelope.
The modes with 
\\(k=\\)0, 1, 3, 4, 5 are localized in the outer envelope; the energy of the modes with \\(k=\\)2 and 6 is concentrated near the neutron drip point. The subdivision of modes is the same as suggested in Section 3.2 on the basis of Figure 3. The energy profiles of the second and third modes are very close for \\(z\\lesssim 200\\) m, but for \\(z\\sim 420\\) m the energy density differs by more than three orders of magnitude. Qualitatively the same feature is demonstrated by the fifth and sixth modes. The outer modes 'respond' to the lowering of the sound speed after the neutron drip point by increasing the energy density in this region. This increase is greater for the second and fifth modes whose frequencies are close to the frequencies of the inner modes. However, it is not so large as for the inner modes. The signatures of phase transitions in the outer envelope are extremely small (\\(\\sim 5\\%\\)) and are almost invisible in Figure 8. The scales of such features are \\(\\sim 1\\) m. The large phase transition at the neutron drip point produces the signature with the same properties.
Figure 7: Same as in Fig. 5 for modes with \\(l=100\\) in the ground-state envelope.
Figure 8: Same as in Fig. 5 for modes with \\(l=500\\) in the ground-state envelope.
## 4 Conclusions We have studied high-frequency pressure oscillations which are localized in the envelopes of neutron stars composed of the accreted or ground-state matter. Our main conclusions are as follows. (1) The oscillations are almost insensitive to various modifications of the EOS for the ground-state matter (section 3.2). All EOSs we have used (HP, RHS, SCM) give the same oscillation spectrum. (2) The neutron drip and associated softening of the EOS in the inner envelope do not strongly affect the spectrum of the well known (outer) oscillation modes which are localized predominantly in the outer envelope (section 3.2). (3) However, the neutron drip leads to the appearance of inner oscillation modes localized mostly near the neutron drip point (section 3). The spectrum of these modes is sensitive to the EOS in the envelope (accreted or ground-state). (4) The p-mode oscillation problem is self-similar (in the plane-parallel approximation). Once the problem is solved for one stellar model, it can easily be rescaled to neutron star models with any mass and radius (but the same EOS in the envelope; see section 2.1). (5) A detection and identification of one outer mode and several inner modes would enable one to discriminate between the ground-state and accreted envelope and determine neutron star mass and radius (section 3.3). For example, a detection of the fundamental mode with \\(l=900\\) at the frequency 74 kHz and of two inner modes with \\(l=300\\) and \\(l=900\\) at 56 kHz and 140 kHz, respectively, would indicate a canonical neutron star with the ground-state envelope. Therefore, high-frequency pressure modes are excellent tools to explore the physics of matter in neutron star envelopes and to determine masses and radii of neutron stars. The oscillation frequencies could be detected by radio-astronomical methods very precisely. A detailed analysis of pulse shapes of some radio pulsars reveals that oscillations with large multipolarity are possibly excited there (Clemens & Rosen, 2004) but their frequencies are \\(\\sim\\)30 Hz, so that they are not the high-frequency p-modes we discuss here. A search for high-frequency p-modes could be useful. High-multipolarity p-modes do not damp very quickly because they do not produce any powerful gravitational or electromagnetic emission (see, e.g., Chugunov & Yakovlev, 2005). They are robust because they are relatively independent of the thermal state of the star, and they should not be strongly affected by neutron star magnetic fields. The inner p-modes, localized in the inner envelope, could be easily triggered by pulsar glitches, which are thought to occur just in inner envelopes of pulsars. Chugunov & Yakovlev (2005) studied the dissipation of p-modes localized in the outer envelope; this dissipation is mainly produced by the shear viscosity. It may be enhanced by thin viscous layers near numerous nuclear phase transitions. The viscosity in these layers can be diffusive or turbulent. Note, that fundamental modes do not produce viscosity layers because they pass phase transitions without velocity discontinuities (see Sec. 2). Their dissipation is not enhanced by phase transitions. Finally, p-modes in neutron star envelopes are relatively insensitive to the EOS and composition of neutron star cores. However, these modes can be useful to discriminate between ordinary neutron stars and strange stars with 
They are robust because they are relatively independent of the thermal state of the star, and they should not be strongly affected by neutron star magnetic fields. The inner p-modes, localized in the inner envelope, could be easily triggered by pulsar glitches, which are thought to occur just in inner envelopes of pulsars. Chugunov & Yakovlev (2005) studied the dissipation of p-modes localized in the outer envelope; this dissipation is mainly produced by the shear viscosity. It may be enhanced by thin viscous layers near numerous nuclear phase transitions. The viscosity in these layers can be diffusive or turbulent. Note, that fundamental modes do not produce viscosity layers because they pass phase transitions without velocity discontinuities (see Sec. 2). Their dissipation is not enhanced by phase transitions. Finally, p-modes in neutron star envelopes are relatively insensitive to the EOS and composition of neutron star cores. However, these modes can be useful to discriminate between ordinary neutron stars and strange stars with Figure 8: Same as in Fig. 5 for modes with \\(l=500\\) in the ground-state envelope. Figure 7: Same as in Fig. 5 for modes with \\(l=100\\) in the ground-state envelope. crust. The latter stars are thought to contain extended cores composed of strange quark matter. Nevertheless, a core is assumed to be surrounded by an envelope of normal matter (see, e.g., Zdunik 2002), so that a strange star with the crust may look like an ordinary neutron star from outside. The density of the normal matter in a strange star does not exceed the neutron drip density. The pressure modes in the envelopes of such stars should easily 'feel' underlying dense quark matter, and the oscillation spectrum would reflect the presence of the quark core. We intend to consider this effect in a future publication. ## Acknowledgments I am grateful to D.G. Yakovlev for discussions. This work was supported by a grant of the \"Dynasty\" Foundation and the International Center for Fundamental Physics in Moscow, by the Russian Foundation for Basic Research (project no. 05-02-16245), and by the Program of Support for Leading Scientific Schools of Russia (NSh-1115.2003.2). ## References * [1] Abramowitz M., Stegun I.A., 1971, Handbook of Mathematical Functions, Dover, New York * [2] Baym G., Pethick C., Sutherland P., ApJ, 1971, 170, 299 * [3] Campolattaro A., Thorne K.S., 1970 ApJ, 159, 847 * [4] Chugunov A.I., Yakovlev D.G., 2005, Astr. Rep., 49, 724 * [5] Clemens J.C., Rosen R., 2004, ApJ, 609, 340 * [6] Gough D.O., 1991, in Zahn J-P., Zinn-Justin J., eds, Astrophysical fluid dynamics, Elsevier, Amsterdam, p. 399 * [7] Haensel P., Pichon B., A&A, 1994, 283, 313 * [8] Haensel H., Zdunik J.L., 1990, A&A, 229, 117 * [9] Ipser J.R., Thorne K.S., 1973, ApJ, 181, 181 * [10] Kaspi V.M., 2004, in Camilo F., Gaensler B.M., eds, Young Neutron Stars and Their Environments, Astron. Soc. Pac., San Francisco, p. 231 * [11] Lamb H., 1975, Hydrodynamics, Cambrige Univ. Press, London * [12] Negele J.W., Vautherin D., 1973, Nucl. Phys. A, 207, 298 * [13] McDermott P.N., 1990, MNRAS, 245, 508 * [14] McDermott P.N., Van Horn H.M., Scholl J.F., 1983, ApJ, 268, 837 * [15] McDermott P.N., Van Horn H.M., Hansen C.J., 1988, ApJ, 325, 725 * [16] Price R., Thorne K.S., 1969, ApJ, 155, 163 * [17] Ruster S.B., Hempel M., Schaffner-Bielich J., 2005, preprint (astro-ph/0509325) * [18] Stergioulas N., 2003, Living Rev. 
in Relativity, 6, 3 * [19] Thorne K.S., 1969a, ApJ, 158, 1 * [20] Thorne K.S., 1969b, ApJ, 158, 997 * [21] Thorne K.S., Campolattaro A., 1967, ApJ, 49, 591 * [22] Varshalovich D.A., Moskalev A.N., Khersonskii V.K., 1988, Quantum Theory of Angular Momentum, World Scientific, Singapore * [23] Zdunik J.L., 2002, A&A, 394, 641
We study eigenmodes of acoustic oscillations of high multipolarity \\(l\\sim 100\\) - \\(1000\\) and high frequency (\\(\\sim 100\\) kHz), localized in neutron star envelopes. We show that the oscillation problem is self-similar. Once the oscillation spectrum is calculated for a given equation of state (EOS) in the envelope and given stellar mass \\(M\\) and radius \\(R\\), it can be rescaled to a star with any \\(M\\) and \\(R\\) (but the same EOS in the envelope). For \\(l\\gtrsim 300\\) the modes can be subdivided into the outer and inner ones. The outer modes are mainly localized in the outer envelope. The inner modes are mostly localized near the neutron drip point, being associated with the softening of the EOS after the neutron drip. We calculate oscillation spectra for the EOSs of cold-catalyzed and accreted matter and show that the spectra of the inner modes are essentially different. A detection and identification of high-frequency pressure modes would allow one to infer \\(M\\) and \\(R\\) and determine also the EOS in the envelope (accreted or ground-state) providing thus a new and simple method to explore the main parameters and internal structure of neutron stars. keywords: stars: neutron - stars: oscillations.
# Cosmo-dynamics and dark energy with non-linear equation of state: a quadratic model Kishore N. Ananda Institute of Cosmology and Gravitation, University of Portsmouth, Mercantile House, Portsmouth PO1 2EG, Britain Marco Bruni Institute of Cosmology and Gravitation, University of Portsmouth, Mercantile House, Portsmouth PO1 2EG, Britain Dipartimento di Fisica, Universita di Roma \"Tor Vergata\", via della Ricerca Scientifica 1, 00133 Roma, Italy November 3, 2021 ## I Introduction Model building in cosmology requires two main ingredients: a theory of gravity and a description of the matter content of the universe. In general relativity (GR) the gravity sector of the theory is completely fixed, there are no free parameters. The matter sector is represented in the field equations by the energy-momentum tensor, and for a fluid the further specification of an equation of state (EoS) is required. Apart from scalar fields, typical cosmological fluids such as radiation or cold dark matter (CDM) are represented by a _linear_ EoS, \\(P=w\\rho\\). The combination of cosmic microwave background radiation (CMBR) [1; 2], large scale structure (LSS) [3] and supernova type Ia (SNIa) [4] observations provides support for a flat universe presently dominated by a component, dubbed in general \"dark energy\", causing an accelerated expansion. The simplest form of dark energy is an _ad hoc_ cosmological constant \\(\\Lambda\\) term in the field equations, what Einstein called his \"biggest blunder\". However, although the standard \\(\\Lambda\\)CDM \"concordance\" model provides a rather robust framework for the interpretation of present observations (see e.g. [2; 5]), it requires a \\(\\Lambda\\) term that is at odds by many order of magnitudes with theoretical predictions [6]. This has prompted theorists to explore possible dark energy sources for the acceleration that go beyond the standard but unsatisfactory \\(\\Lambda\\). With various motivations, many authors have attempted to describe dark energy as quintessence, _k_-essence or a ghost field, i.e. with scalar fields with various properties. There have also been attempts to describe dark energy by a fluid with a specific non-linear EoS like the Chaplygin gas [7], generalized Chaplygin gas [8], van der Waals fluid [9], wet dark fluid [11] and other specific gas EoS's [10]. Recently, various \"phantom models\" (\\(w=P/\\rho<-1\\)) have also been considered [13; 14]. More simply, but also with a higher degree of generality, many authors have focused on phenomenological models where dark energy is parameterized by assuming a \\(w=P/\\rho=w(a)\\), where \\(a=a(t)\\) is the expansion scale factor (see e.g. [15; 16]). Another possibility is to advocate a modified theory of gravity. At high energies, modification of gravity beyond general relativity could come from extra dimensions, as required in string theory. In the brane world [17; 18; 19; 20] scenario the extra dimensions produce a term quadratic in the energy density in the effective 4-dimensional energy-momentum tensor. Under the reasonable assumption of neglecting 5-dimensional Weyl tensor contributions on the brane, this quadratic term has the very interesting effect of suppressing anisotropy at early enough times. In the case of a Bianchi I brane-world cosmology containing a scalar field with a large kinetic term the initial expansion is quasi-isotropic [21]. 
Under the same assumptions, Bianchi I and Bianchi V brane-world cosmological models containing standard cosmological fluids with linear EoS also behave in a similar fashion1[23], and the same remains true for more general homogeneous models [24; 25] and even some inhomogeneous exact solutions [26]. Finally, within the limitations of a perturbative treatment, the quadratic-term-dominated isotropic brane-world models have beenshown to be local past attractors in the larger phase space of inhomogeneous and anisotropic models [27; 28]. More precisely, again assuming that the 5-d Weyl tensor contribution to the brane can be neglected, perturbations of the isotropic models decay in the past. Thus in the brane scenario the observed high isotropy of the universe is the natural outcome of _generic initial conditions_, unlike in GR where in general cosmological models with a standard energy momentum tensor are highly anisotropic in the past (see e.g. [29]). Recently it has been shown that loop quantum gravity corrections result in a modified Friedmann equation [30], with the modification appearing as a negative term which is quadratic in the energy density. Further motivation for considering a quadratic equation of state comes from recent studies of \\(k\\)-essence fields as unified dark matter (UDM) models2[31; 32]. The general \\(k\\)-essence field can be described by a fluid with a closed-form barotropic equation of state. The UDM fluid discussed in [31] has a non-linear EoS of the form \\(P\\propto\\rho^{2}\\) at late times. More recently, it has been shown that any purely kinetic \\(k\\)-essence field can be interpreted as an isentropic perfect fluid with an EoS of the form \\(P=P(\\rho)\\)[33]. Also, low energy dynamics of the Higgs phase for gravity have been shown to be equivalent to the irrotational flow of a perfect fluid with equation of state \\(P=\\rho^{2}\\)[34]. Footnote 2: These attempt to provide a unified model for both the dark matter and the dark energy components necessary to make sense of observations. Given the isotropizing effect that the quadratic energy density term has at early times in the brane scenario this then prompts the question: can a term quadratic in the energy density have the same effect in general relativity. This question is non-trivial as the form of the equations in the two cases is quite different. On the brane, for a given EoS the effective 4-dimensional Friedmann and Raychaudhuri equations are modified, while the continuity equation is identical to that of GR. With the introduction of a quadratic EoS in GR, the Friedman equation remains the same, while the continuity and Raychaudhuri equations are modified3. Footnote 3: With Respect to the case of the same EoS with vanishing quadratic term. Taking into account this question (to be explored in detail in Paper II [48]), the diverse motivations for a quadratic energy density term mentioned above and with the dark energy problem in mind, in this paper we explore the GR dynamics of homogeneous isotropic Robertson-Walker models with a quadratic EoS, \\(P=P_{0}+\\alpha\\rho+\\beta\\rho^{2}\\). This is the simplest model we can consider without making any more specific assumptions on the EoS [35]. It represents the first terms of the Taylor expansion of _any_ EoS function \\(P=P(\\rho)\\) about \\(\\rho=0\\). It can also be taken to represent (after re-grouping of terms) the Taylor expansion about the present energy density \\(\\rho_{0}\\), see [35]. 
In this sense therefore the out-coming dynamics is very general. Indeed it turns out that this simple model can produce a large variety of qualitatively different dynamical behaviors that we classify using dynamical systems theory [36; 37]. An outcome of our analysis is that accelerated expansion phases are mostly natural for nonlinear EoS's. These are _in general_ asymptotically de Sitter thanks to the appearance of an _effective cosmological constant_. This suggests that an EoS with the right combination of \\(P_{0}\\), \\(\\alpha\\) and \\(\\beta\\) may provide a good and simple phenomenological model for UDM, or at least for a dark energy component. Other interesting possibilities that arise from the quadratic EoS are closed models that can oscillate with no singularity, models that bounce between infinite contraction/expansion and models which evolve from a phantom phase, asymptotically approaching a de Sitter phase instead of evolving to a \"big rip\" or other pathological future states [13; 38; 39]. As mentioned before, the question of the dynamical effects the quadratic energy density term has on the anisotropy in GR is explored in Paper II [48]. There we analyze Bianchi I and V models with the EoS \\(P=\\alpha\\rho+\\beta\\rho^{2}\\), as well as perturbations of the isotropic past attractor of those models that are singular in the past. We anticipate that Bianchi I and V non-phantom models with \\(\\beta>0\\) have an isotropic singularity, i.e. they are asymptotic in the past to a certain isotropic model, and that perturbations of this model decay in the past. Phantom anisotropic models with \\(\\beta>0\\) are necessarily asymptotically de Sitter in the future, but the shear anisotropy dominates in the past. For \\(\\beta<0\\) all models are anisotropic in the past, while their specific future evolution depends on the value of \\(\\alpha\\). The paper is organized as follows. In section II we outline the setup and the three main cases we will investigate. In section III, we study the dynamics of isotropic cosmological models in the high energy limit (neglecting the \\(P_{0}\\) term). We find the critical points, their stability nature and the occurrence of bifurcations of the dynamical system. In section IV, we consider the low energy limit (neglecting the \\(\\rho^{2}\\) term). The full system is then analyzed in section V, showing the qualitatively different behavior with respect to the previous cases. We then finish with some concluding remarks and an outline of work in progress in section VI. Units are such that \\(8\\pi G/c^{4}=1\\). ## II Cosmology with a quadratic EoS ### Dynamics with non-linear EoS The evolution of Robertson-Walker isotropic models with no cosmological constant \\(\\Lambda\\) term is given in GR by the following non-linear planar autonomous dynamical system: \\[\\dot{\\rho} = -3H\\left(\\rho+P\\right), \\tag{1}\\] \\[\\dot{H} = -H^{2}-\\frac{1}{6}\\left(\\rho+3P\\right), \\tag{2}\\] where \\(H\\) is the Hubble expansion function, related to the scale factor \\(a\\) by \\(H=\\dot{a}/a\\). In order to close this system of equations, an EoS must be specified, relating the isotropic pressure \\(P\\) and the energy density \\(\\rho\\). When an EoS \\(P=P(\\rho)\\) is given, the above system admits a first integral, the Friedman equation \\[H^{2}=\\frac{1}{3}\\rho-\\frac{K}{a^{2}}, \\tag{3}\\] where \\(K\\) is the curvature, \\(K=0,\\pm 1\\) as usual for flat, closed and open models. 
Here we are interested in exploring the general dynamical features of a non-linear EoS \\(P=P(\\rho)\\). Before considering the specific case of a quadratic EoS, we note some important general points. First, it is immediately clear from Eq. (1) that an effective cosmological constant is achieved whenever there is an energy density value \\(\\rho_{\\Lambda}\\) such that \\(P(\\rho_{\\Lambda})=-\\rho_{\\Lambda}\\). More specifically:

**Remark 1.** If for a given EoS function \\(P=P(\\rho)\\) there exists a \\(\\rho_{\\Lambda}\\) such that \\(P(\\rho_{\\Lambda})=-\\rho_{\\Lambda}\\), then \\(\\rho_{\\Lambda}\\) has the dynamical role of an effective cosmological constant.

**Remark 2.** A given EoS \\(P(\\rho)\\) may admit more than one point \\(\\rho_{\\Lambda}\\). If these points exist, they are fixed points of Eq. (1).

**Remark 3.** From Eq. (2), since \\(\\dot{H}+H^{2}=\\ddot{a}/a\\), an accelerated phase is achieved whenever \\(P(\\rho)<-\\rho/3\\).

**Remark 4.** Remark 3 is only valid in GR, and a different condition will be valid in other theories of gravity. Remarks 1 and 2, however, are only based on conservation of energy, Eq. (1). The latter is also valid (locally) in inhomogeneous models, provided that the time derivative is taken to represent the derivative along the fluid flow lines (e.g. see [40]), and is a direct consequence of \\(T^{ab}{}_{;b}=0\\). Thus Remarks 1 and 2 are valid in any gravity theory where \\(T^{ab}{}_{;b}=0\\), as well as (locally) in inhomogeneous models.

Second, assuming expansion, \\(H>0\\), we may rewrite Eq. (1) as:

\\[\\frac{d\\rho}{d\\tau}=-3\\left[\\rho+P(\\rho)\\right], \\tag{4}\\]

where \\(\\tau=\\ln a\\). Eq. (4) is a 1-dimensional dynamical system with fixed point(s) \\(\\rho_{\\Lambda}\\)(s), if they exist. If \\(\\rho+P(\\rho)<0\\) the fluid violates the null energy condition [41; 42] and Eq. (1) implies what has been dubbed phantom behavior [13] (cf. [43]), i.e. the fluid behaves counterintuitively in that the energy density increases (decreases) in the future for an expanding (contracting) universe. Then:

**Remark 5.** Any point \\(\\rho_{\\Lambda}\\) is an attractor (repeller) of the evolution during expansion (the autonomous system (4)) if \\(\\rho+P(\\rho)<0\\) (\\(>0\\)) for \\(\\rho<\\rho_{\\Lambda}\\) and \\(\\rho+P(\\rho)>0\\) (\\(<0\\)) for \\(\\rho>\\rho_{\\Lambda}\\).

**Remark 6.** Any point \\(\\rho_{\\Lambda}\\) is a shunt4 of the autonomous system Eq. (4) if either \\(\\rho+P(\\rho)<0\\) on both sides of \\(\\rho_{\\Lambda}\\), or \\(\\rho+P(\\rho)>0\\) on both sides of \\(\\rho_{\\Lambda}\\). In this case the fluid is respectively phantom or standard on both sides.

Footnote 4: This is a fixed point which is an attractor for one side of the phase line and a repeller for the other [37].

Let us now consider the specific case of a general quadratic EoS of the form:

\\[P=P_{o}+\\alpha\\rho+\\beta\\rho^{2}. \\tag{5}\\]

The parameter \\(\\beta\\) sets the characteristic energy scale \\(\\rho_{c}\\) of the quadratic term as well as its sign \\(\\epsilon\\):

\\[\\beta=\\frac{\\epsilon}{\\rho_{c}}. \\tag{6}\\]

**Remark 7.** Eq. (5) represents the Taylor expansion, up to \\(\\mathcal{O}(3)\\), of _any_ barotropic EoS function \\(P=P(\\rho)\\) about \\(\\rho=0\\). It also represents, after re-grouping of terms, the Taylor expansion about the present energy density value \\(\\rho_{0}\\) [35]. In this sense, the dynamical system (1,2) with (5) is _general_, i.e.
it represents the late evolution, in GR, of _any_ cosmological model with non-linear barotropic EoS approximated by Eq. (5). The usual scenario for a cosmological fluid is a standard linear EoS (\\(P_{0}=\\beta=0\\)), in which case \\(\\alpha=w\\) is usually restricted to the range \\(-1<\\alpha<1\\). For the sake of generality, we will consider values of \\(\\alpha\\) outside this range, with the dynamics only restricted by the requirement that \\(\\rho\\geq 0\\). The first term in Eq. (5) is a constant pressure term which in general becomes important in what we call the low energy regime. The second term is the standard linear term usually considered, with

\\[\\alpha=\\frac{dP}{d\\rho}\\Bigg{|}_{\\rho=0}. \\tag{7}\\]

If it is positive, \\(\\alpha\\) has an interpretation in terms of the speed of sound of the fluid in the limit \\(\\rho\\to 0\\), \\(\\alpha=c_{s}^{2}\\). The third term is quadratic in the energy density and will be important in what we call the high energy regime. In the following, we first split the analysis of the dynamical system Eqs. (1, 2, 5) into two parts, the high energy regime where we neglect \\(P_{0}\\) and the low energy regime where we set \\(\\beta=0\\); then we consider the full system with EoS (5). Using only the energy conservation Eq. (1) we list the various sub-cases, also briefly anticipating the main dynamical features coming out of the analysis in Sections III, IV and V.

### Quadratic EoS for the high energy regime

In the high energy regime we consider the restricted equation of state:

\\[P_{HE}=\\alpha\\rho+\\frac{\\epsilon\\rho^{2}}{\\rho_{c}}. \\tag{8}\\]

The energy conservation Eq. (1) can be integrated in general to give:

\\[\\rho_{HE}(a) = \\frac{A(\\alpha+1)\\rho_{c}}{a^{3(\\alpha+1)}-\\epsilon A}, \\tag{9}\\]
\\[A = \\frac{\\rho_{o}a_{o}^{3(\\alpha+1)}}{(\\alpha+1)\\rho_{c}+\\epsilon \\rho_{o}}, \\tag{10}\\]

where \\(\\rho_{o}\\), \\(a_{o}\\) represent the energy density and scale factor at an arbitrary time \\(t_{o}\\). This is valid for all values of \\(\\epsilon\\), \\(\\rho_{c}\\) and \\(\\alpha\\), except for \\(\\alpha=-1\\). In the case \\(\\alpha=-1\\) the evolution of the energy density is:

\\[\\rho_{HE}(a) = \\left[\\frac{1}{\\rho_{o}}+\\frac{3\\epsilon}{\\rho_{c}}\\ln\\left( \\frac{a}{a_{o}}\\right)\\right]^{-1}. \\tag{11}\\]

The EoS with this particular choice of parameters has already been considered as a possible dark energy model [38; 44]. We will concentrate on the broader class of models where \\(\\alpha\\neq-1\\). In Section III we will give a dynamical system analysis of the high energy regime, but it is first useful to gain some insight directly from Eq. (9). We start by defining

\\[\\rho_{\\Lambda}:=-\\epsilon(1+\\alpha)\\rho_{c}, \\tag{12}\\]

noticing that this is an effective positive cosmological constant point only if \\(\\epsilon(1+\\alpha)<0\\). It is then convenient to rewrite Eq. (9) in three different ways, defining \\(a_{\\star}=|A|^{1/3(\\alpha+1)}\\), each representing two different sub-cases.

**A:**\\(\\epsilon(1+\\alpha)>0\\), \\(\\rho_{\\Lambda}<0\\),

\\[\\rho=\\frac{|1+\\alpha|\\rho_{c}}{\\left(\\frac{a}{a_{\\star}}\\right)^{3(1+\\alpha)} -1}. \\tag{13}\\]

**A1:**\\(\\epsilon>0\\), \\((1+\\alpha)>0\\). In this case \\(a_{\\star}<a<\\infty\\), with \\(\\infty>\\rho>0\\). Further restrictions on the actual range of values that \\(a\\) and \\(\\rho\\) can take may come from the geometry.
For a subset of appropriate initial conditions closed (positively curved) models may expand to a maximum \\(a\\) (minimum \\(\\rho\\)) and re-collapse, and for \\(\\alpha<-1/3\\) not all closed models have a past singularity at \\(a=a_{\\star}\\), having instead a bounce at a minimum \\(a\\) (maximum \\(\\rho\\)).

**A2:**\\(\\epsilon<0\\), \\((1+\\alpha)<0\\). In this case \\(0<a<a_{\\star}\\), with \\(0<\\rho<\\infty\\), and the fluid exhibits phantom behavior. All models have a future singularity at \\(a=a_{\\star}\\), but in general closed models contract from a past singularity, bounce at a minimum \\(a\\) and \\(\\rho\\), then re-expand to the future singularity (we will refer to this as a phantom bounce).

**B:**\\(\\rho_{\\Lambda}>0\\), \\(\\rho>\\rho_{\\Lambda}\\),

\\[\\rho=\\frac{\\rho_{\\Lambda}}{1-\\left(\\frac{a}{a_{\\star}}\\right)^{3(1+\\alpha)}}. \\tag{14}\\]

**B1:**\\(\\epsilon>0\\), \\((1+\\alpha)<0\\), \\(A>0\\). In this case \\(a_{\\star}<a<\\infty\\), with \\(\\infty>\\rho>\\rho_{\\Lambda}\\). As in case **A1**, further restrictions on the actual range of values that \\(a\\) and \\(\\rho\\) can take may come from the geometry. For a subset of initial conditions closed models may expand to a maximum \\(a\\) (minimum \\(\\rho\\)) and re-collapse, while for another subset closed models do not have a past singularity at \\(a=a_{\\star}\\), having instead a bounce at a minimum \\(a\\) (maximum \\(\\rho\\)).

**B2:**\\(\\epsilon<0\\), \\((1+\\alpha)>0\\), \\(A<0\\). In this case \\(0<a<a_{\\star}\\), with \\(\\rho_{\\Lambda}<\\rho<\\infty\\). As in the case **A2**, the fluid has a phantom behavior. All models have a future singularity at \\(a=a_{\\star}\\), with closed models contracting from a past singularity to a minimum \\(a\\) and \\(\\rho\\) before re-expanding.

**C:**\\(\\rho_{\\Lambda}>0\\), \\(\\rho<\\rho_{\\Lambda}\\),

\\[\\rho=\\frac{\\rho_{\\Lambda}}{1+\\left(\\frac{a}{a_{\\star}}\\right)^{3(1+\\alpha)}}. \\tag{15}\\]

**C1:**\\(\\epsilon>0\\), \\((1+\\alpha)<0\\), \\(A<0\\). In this case \\(0<a<\\infty\\), with \\(0<\\rho<\\rho_{\\Lambda}\\). The fluid behaves in a phantom manner but avoids the future singularity and instead evolves to a constant energy density \\(\\rho_{\\Lambda}\\). Closed models, however, typically bounce with a minimum \\(\\rho\\) at a finite \\(a\\).

**C2:**\\(\\epsilon<0\\), \\((1+\\alpha)>0\\), \\(A>0\\). In this case \\(0<a<\\infty\\), with \\(\\rho_{\\Lambda}>\\rho>0\\). Again, closed models may evolve within restricted ranges of \\(a\\) and \\(\\rho\\), even oscillating, for \\(\\alpha\\geq-1/3\\), between maxima and minima of \\(a\\) and \\(\\rho\\).

### Low energy regime: affine EoS

In the low energy regime we consider the affine equation of state:

\\[P_{LE}=P_{o}+\\alpha\\rho. \\tag{16}\\]

This particular EoS has been investigated as a possible dark energy model [11; 12]; however, only spatially flat Friedmann models were considered. The scale factor dependence of the energy density is:

\\[\\rho_{LE}(a) =-\\frac{P_{o}}{(\\alpha+1)}+Ba^{-3(\\alpha+1)}, \\tag{17}\\]
\\[B =\\biggl{[}\\frac{P_{o}}{(1+\\alpha)}+\\rho_{o}\\biggr{]}a_{o}{}^{3(1+ \\alpha)}. \\tag{18}\\]

This is valid for all values of \\(P_{o}\\) and \\(\\alpha\\) except \\(\\alpha=-1\\). In the case \\(\\alpha=-1\\), the evolution of the energy density is:

\\[\\rho_{LE}(a) = \\rho_{o}-3P_{o}\\ln\\left(\\frac{a}{a_{o}}\\right). \\tag{19}\\]

As in the high energy case, we will concentrate on the broader class of models where \\(\\alpha\\neq-1\\).
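Remarks 1, 2, 5 and 6 can be checked quickly for any member of the family (5): the effective cosmological constant points are the positive roots of \\(\\rho+P(\\rho)=0\\), and their character during expansion follows from the sign of \\(\\rho+P\\) on either side, via Eq. (4). The sketch below automates this; the helper names and the sample parameters are our own illustrative choices and are not tied to any specific case discussed in the text.

```python
import numpy as np

# Sketch of Remarks 1, 2, 5 and 6 for the quadratic EoS (5): find the
# positive roots of rho + P(rho) = 0 and classify each one from the sign of
# d rho / d tau = -3 (rho + P) on either side, Eq. (4).
def lambda_points(P0, alpha, beta):
    # rho + P = P0 + (alpha + 1)*rho + beta*rho**2
    coeffs = [beta, alpha + 1.0, P0] if beta != 0 else [alpha + 1.0, P0]
    return sorted(r.real for r in np.roots(coeffs)
                  if abs(r.imag) < 1e-12 and r.real > 0)

def character(rho_L, P0, alpha, beta, h=1e-6):
    drho_dtau = lambda rho: -3.0 * (P0 + (alpha + 1.0) * rho + beta * rho**2)
    left, right = drho_dtau(rho_L - h), drho_dtau(rho_L + h)
    if left > 0 > right:
        return 'attractor during expansion (Remark 5)'
    if left < 0 < right:
        return 'repeller during expansion (Remark 5)'
    return 'shunt (Remark 6)'

# Illustrative choice: P0 = 0, alpha = 1/3, beta = -1  =>  rho_Lambda = 4/3
for rL in lambda_points(0.0, 1.0/3.0, -1.0):
    print(rL, '->', character(rL, 0.0, 1.0/3.0, -1.0))
```

For the sample parameters the single positive root is reported as a repeller during expansion, in line with the qualitative behavior of the corresponding standard/phantom branches described above.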
In Section IV we present the dynamical system analysis of the low energy regime, but first let us gain some insight from Eq. (17). As with the high energy case, in many cases the fluid violates the null energy condition (\\(\\rho+P<0\\)) and exhibits phantom behavior. Defining

\\[\\tilde{\\rho}_{\\Lambda}:=-P_{o}/(1+\\alpha), \\tag{20}\\]

we see that a positive effective cosmological constant point exists, \\(\\tilde{\\rho}_{\\Lambda}>0\\), only if \\(P_{o}/(1+\\alpha)<0\\). Eq. (17) can be rewritten in three different ways, defining \\(\\tilde{a}_{\\star}=|B|^{1/3(\\alpha+1)}\\), each representing two different sub-cases.

**D:**\\(P_{o}/(1+\\alpha)>0\\), \\(\\tilde{\\rho}_{\\Lambda}<0\\),

\\[\\rho=-\\frac{P_{o}}{(\\alpha+1)}+\\left(\\frac{a}{\\tilde{a}_{\\star}}\\right)^{-3(1 +\\alpha)}. \\tag{21}\\]

**D1:**\\(P_{o}>0\\), \\((1+\\alpha)>0\\). In this case \\(0<a<\\infty\\), with \\(\\infty>\\rho>-|\\tilde{\\rho}_{\\Lambda}|\\). The geometry places further restrictions on the values that \\(a\\) and \\(\\rho\\) can take. The subset of open models (negative curvature) are all non-physical as they evolve to the \\(\\rho<0\\) region of the phase space. The spatially flat models expand to a maximum \\(a\\) (when \\(\\rho=0\\)) and recollapse. The closed (positively curved) models expand to a maximum \\(a\\) (minimum \\(\\rho\\)) and recollapse, and for \\(-1\\leq\\alpha<-1/3\\) a subset of closed models oscillate between a maximum and minimum \\(a\\) (minimum and maximum \\(\\rho\\)).

**D2:**\\(P_{o}<0\\), \\((1+\\alpha)<0\\). In this case \\(0<a<\\infty\\), with \\(-|\\tilde{\\rho}_{\\Lambda}|<\\rho<\\infty\\). In this case the fluid exhibits phantom behavior. The subset of open models are all non-physical as they evolve from the \\(\\rho<0\\) region of the phase space. The spatially flat models contract, bounce at a minimum \\(a\\) when \\(\\rho=0\\) and re-expand in the future. The closed models contract, bounce at a minimum \\(a\\) and \\(\\rho\\), then re-expand in the future.

**E:**\\(\\tilde{\\rho}_{\\Lambda}>0\\), \\(\\rho>\\tilde{\\rho}_{\\Lambda}\\),

\\[\\rho=\\tilde{\\rho}_{\\Lambda}+\\left(\\frac{a}{\\tilde{a}_{\\star}}\\right)^{-3(1+ \\alpha)}. \\tag{22}\\]

**E1:**\\(P_{o}>0\\), \\((1+\\alpha)<0\\), \\(B>0\\). In this case \\(0<a<\\infty\\), with \\(\\tilde{\\rho}_{\\Lambda}<\\rho<\\infty\\). As in the case **D2**, the fluid behaves in a phantom manner. The flat and open models are asymptotically de Sitter in the past, when their energy density approaches a finite value (\\(\\rho\\rightarrow\\tilde{\\rho}_{\\Lambda}\\) as \\(a\\to 0\\)), and when \\(\\tilde{\\rho}_{\\Lambda}\\) becomes negligible in Eq. (22) they evolve as standard linear phantom models, reaching a future singularity in a finite time (\\(\\rho\\rightarrow\\infty\\) as \\(a\\rightarrow\\infty\\)). The closed models contract to a minimum \\(a\\) (minimum \\(\\rho\\)), bounce and re-expand.

**E2:**\\(P_{o}<0\\), \\((1+\\alpha)>0\\), \\(B>0\\). In this case \\(0<a<\\infty\\), with \\(\\infty>\\rho>\\tilde{\\rho}_{\\Lambda}\\). All flat and open models expand from a singularity and asymptotically evolve to a de Sitter model, with \\(\\rho=\\tilde{\\rho}_{\\Lambda}\\). The closed models evolve from a contracting de Sitter model to minimum \\(a\\) (maximum \\(\\rho\\)), bounce and then evolve to an expanding de Sitter model.

**F:**\\(\\tilde{\\rho}_{\\Lambda}>0\\), \\(\\rho<\\tilde{\\rho}_{\\Lambda}\\),

\\[\\rho=\\tilde{\\rho}_{\\Lambda}-\\left(\\frac{a}{\\tilde{a}_{\\star}}\\right)^{-3(1+ \\alpha)}. \\tag{23}\\]

**F1:**\\(P_{o}>0\\), \\((1+\\alpha)<0\\), \\(B<0\\). In this case \\(0<a<\\infty\\), with \\(\\tilde{\\rho}_{\\Lambda}>\\rho>-\\infty\\). The subset of open models are all non-physical as they evolve to the \\(\\rho<0\\) region of the phase space. The flat models evolve from an expanding de Sitter phase to a contracting de Sitter phase. The closed models oscillate between a maximum and minimum \\(a\\) (minimum and maximum \\(\\rho\\)).

**F2:**\\(P_{o}<0\\), \\((1+\\alpha)>0\\), \\(B<0\\). In this case \\(0<a<\\infty\\), with \\(-\\infty<\\rho<\\tilde{\\rho}_{\\Lambda}\\). The fluid exhibits phantom behavior. The open models are all non-physical as they evolve from the \\(\\rho<0\\) region of the phase space. The flat and closed models evolve from a contracting de Sitter phase, bounce at minimum \\(a\\) and \\(\\rho\\), then re-expand, asymptotically approaching an expanding de Sitter phase.

### The full quadratic EoS

In Section V we present the dynamical system analysis of the full quadratic EoS models given by Eq. (5), but again we first study the form of \\(\\rho(a)\\) implied by conservation of energy, Eq. (1). As with the previous cases the fluid can violate the null energy condition (\\(\\rho+P<0\\)) and therefore may exhibit phantom behavior. The system may admit two (possibly negative) effective cosmological constant points:

\\[\\rho_{\\Lambda,1} := \\frac{1}{2\\beta}\\left[-(\\alpha+1)+\\sqrt{\\Delta}\\right], \\tag{24}\\]
\\[\\rho_{\\Lambda,2} := \\frac{1}{2\\beta}\\left[-(\\alpha+1)-\\sqrt{\\Delta}\\right], \\tag{25}\\]

if

\\[\\Delta:=(\\alpha+1)^{2}-4\\beta P_{o} \\tag{26}\\]

is non-negative. Clearly, the existence of the effective cosmological constant points depends on the values of the parameters in the EoS. This in turn affects the functional form of \\(\\rho(a)\\). In order to find \\(\\rho(a)\\) the following integral must be evaluated:

\\[-3\\ln\\left(\\frac{a}{a_{o}}\\right)=\\int_{\\rho_{o}}^{\\rho}\\frac{d\\rho}{P_{o}+( \\alpha+1)\\rho+\\beta\\rho^{2}}. \\tag{27}\\]

This is done separately for the cases when no effective cosmological constant points exist (\\(\\Delta<0\\)), when one cosmological constant point exists, \\(\\rho_{\\Lambda,1}=\\rho_{\\Lambda,2}=\\bar{\\rho}_{\\Lambda}\\neq 0\\) (\\(\\Delta=0\\)), and when two cosmological constant points exist, \\(\\rho_{\\Lambda,1}\\neq\\rho_{\\Lambda,2}\\neq 0\\) (\\(\\Delta>0\\)). We now consider these three separate sub-cases.

**G:**\\((1+\\alpha)^{2}<4\\beta P_{o}\\), \\(\\Delta<0\\),

\\[\\rho = \\frac{\\Gamma-\\sqrt{|\\Delta|}\\tan\\left(\\frac{3}{2}\\sqrt{|\\Delta|} \\ln\\left(\\frac{a}{a_{o}}\\right)\\right)}{2\\beta+\\frac{2\\beta}{\\sqrt{|\\Delta|}} \\Gamma\\tan\\left(\\frac{3}{2}\\sqrt{|\\Delta|}\\ln\\left(\\frac{a}{a_{o}}\\right) \\right)}-\\frac{(\\alpha+1)}{2\\beta},\\]
\\[\\Gamma = 2\\beta\\rho_{o}+(\\alpha+1). \\tag{28}\\]

**G1:**\\(\\beta>0\\), \\(P_{o}>0\\). In this case \\(a_{1}<a<a_{2}\\) (where \\(a_{1}<a_{2}\\)), with \\(\\infty>\\rho>-\\infty\\). The fluid behaves in a standard manner and all models have a past singularity at \\(a=a_{1}\\). All open models are non-physical as they evolve to the \\(\\rho<0\\) region of the phase space. The flat models expand to a maximum \\(a\\) (\\(\\rho=0\\)) and then re-collapse. The closed models can behave in a similar manner to flat models except they reach a minimum \\(\\rho\\) before re-collapsing. Some closed models oscillate between maxima and minima \\(a\\) and \\(\\rho\\).

**G2:**\\(\\beta<0\\), \\(P_{o}<0\\).
In this case \\(a_{1}<a<a_{2}\\) (where \\(a_{1}<a_{2}\\)), with \\(-\\infty<\\rho<\\infty\\). The fluid behaves in a phantom manner. All open models are non-physical as they evolve from the \\(\\rho<0\\) region of the phase space. The flat and closed models represent phantom bounce models, that is they evolve from a singularity at \\(a=a_{1}\\) (\\(\\rho=\\infty\\)), contract to a minimum \\(a\\) (minimum \\(\\rho\\)) and then re-expand to the future singularity at \\(a=a_{2}\\). **H:**\\((1+\\alpha)^{2}=4\\beta P_{o}\\), \\(\\Delta=0\\), \\[\\rho = \\bar{\\rho}_{\\Lambda}+\\frac{1}{3\\beta\\ln\\left(\\frac{a}{a_{o}} \\right)+\\frac{2\\beta}{\\Gamma}}. \\tag{29}\\] **H1:**\\(\\beta>0\\), \\(P_{o}>0\\), \\(\\rho<\\bar{\\rho}_{\\Lambda}\\). In this case \\(0<a<a_{1}\\) with \\(\\bar{\\rho}_{\\Lambda}>\\rho>-\\infty\\). The fluid behaves in a standard manner. The subset of open models are all non-physical as they evolve to the \\(\\rho<0\\) region of the phase space. The flat models evolve from an expanding de Sitter phase to a contracting de Sitter phase. The closed models oscillate between maxima and minima \\(a\\) and \\(\\rho\\). **H2:**\\(\\beta>0\\), \\(P_{o}>0\\), \\(\\rho>\\bar{\\rho}_{\\Lambda}\\). In this case \\(a_{1}<a<\\infty\\) with \\(\\infty>\\rho>\\bar{\\rho}_{\\Lambda}\\) and the fluid behaves in a standard manner. If \\(\\bar{\\rho}_{\\Lambda}>0\\), the open and flat models evolve from a past singularity (\\(a=a_{1}\\)) and evolve to a expanding de Sitter phase. For a subset of initial conditions closed models may expand to a maximum \\(a\\) (minimum \\(\\rho\\)) and re-collapse, while for another subset closed models avoid a past singularity, instead having a bounce at a minimum \\(a\\) (maximum \\(\\rho\\)). If \\(\\bar{\\rho}_{\\Lambda}<0\\), the open models are non-physical, while flat and closed models represent recollapse models. **H3:**\\(\\beta<0\\), \\(P_{o}<0\\), \\(\\rho<\\bar{\\rho}_{\\Lambda}\\). In this case \\(a_{1}<a<\\infty\\) with \\(-\\infty<\\rho<\\bar{\\rho}_{\\Lambda}\\). The fluid behaves in a phantom manner. The open models are all non-physical as they evolve from the \\(\\rho<0\\) region of the phase space. The flat and closed models evolve from a contracting de Sitter phase, bounce at minimum \\(a\\) and \\(\\rho\\), then re-expand, asymptotically approaching an expanding de Sitter phase. **H4:**\\(\\beta<0\\), \\(P_{o}<0\\), \\(\\rho>\\bar{\\rho}_{\\Lambda}\\). In this case \\(0<a<a_{1}\\) with \\(\\bar{\\rho}_{\\Lambda}<\\rho<\\infty\\) and the fluid behaves in a phantom manner. All models have a future singularity at \\(a=a_{1}\\). If \\(\\bar{\\rho}_{\\Lambda}>0\\), closed models contract from a past singularity to a minimum \\(a\\) and \\(\\rho\\) before re-expanding (phantom bounce), while flat and open models are asymptotic to generalized de Sitter models in the past. If \\(\\bar{\\rho}_{\\Lambda}<0\\), open models are non-physical, while flat and closed models contract from a past singularity to a minimum \\(a\\) and \\(\\rho\\) before re-expanding. **I:**\\((1+\\alpha)^{2}>4\\beta P_{o}\\), \\(\\Delta>0\\), \\[\\rho = \\frac{\\rho_{\\Lambda,2}\\left(\\frac{a}{a_{o}}\\right)^{-3\\sqrt{\\Delta }}-\\rho_{\\Lambda,1}C}{\\left(\\frac{a}{a_{o}}\\right)^{-3\\sqrt{\\Delta}}-C}, \\tag{30}\\] \\[C = \\frac{\\rho_{o}-\\rho_{\\Lambda,2}}{\\rho_{o}-\\rho_{\\Lambda,1}}. 
\\tag{31}\\] Note that \\(\\beta>0\\) (\\(<0\\)) implies \\(\\rho_{\\Lambda,2}<\\rho_{\\Lambda,1}\\) (\\(\\rho_{\\Lambda,1}<\\rho_{\\Lambda,2}\\)), and \\(C<0\\) implies \\(\\rho_{\\Lambda,2}<\\rho_{o}<\\rho_{\\Lambda,1}\\) for \\(\\beta>0\\) (\\(\\rho_{\\Lambda,1}<\\rho_{o}<\\rho_{\\Lambda,2}\\) for \\(\\beta<0\\)). **I1:**\\(\\beta>0\\), \\(P_{o}>0\\), \\(\\rho<\\rho_{\\Lambda,2}\\), hence we consider \\(\\rho_{\\Lambda,2}>0\\). In this case \\(0<a<a_{1}\\) with \\(\\rho_{\\Lambda,2}>\\rho>-\\infty\\) and the fluid behaves in a standard manner. The open models are all non-physical as they evolve to the \\(\\rho<0\\) region of the phase space. The flat models evolve from an expanding de Sitter phase to a contracting de Sitter phase. The closed model region contains a generalized Einstein static fixed point and models which oscillate indefinitely (between minima and maxima \\(a\\) and \\(\\rho\\)). **I2:**\\(\\beta>0\\), \\(P_{o}>0\\), \\(\\rho_{\\Lambda,2}<\\rho<\\rho_{\\Lambda,1}\\). In this case \\(0<a<\\infty\\) with \\(\\rho_{\\Lambda,2}<\\rho<\\rho_{\\Lambda,1}\\) and the fluid behaves in a phantom manner. The open models evolve from one expanding de Sitter phase (\\(\\rho=\\rho_{\\Lambda,2}\\)) to more rapid (greater \\(\\rho\\) and \\(H\\)) de Sitter phase (\\(\\rho=\\rho_{\\Lambda,1}\\)),however the spatial curvature is negative in the past and asymptotically approaches zero in the future. The flat models behave in a similar manner except that the curvature remains zero. The closed models undergo a phantom bounce with asymptotic de Sitter behavior, that is they evolve from a contracting de Sitter phase, reach a minimum \\(a\\), minimum \\(\\rho\\) and then evolve to a expanding de Sitter phase. **I3:**\\(\\beta>0\\), \\(P_{o}>0\\), \\(\\rho>\\rho_{\\Lambda,1}\\). In this case \\(a_{1}<a<\\infty\\) with \\(\\infty>\\rho>\\rho_{\\Lambda,1}\\) and the fluid behaves in a standard manner. All flat and open models expand from a singularity at \\(a=a_{1}\\) and asymptotically evolve to a expanding de Sitter phase (\\(\\rho=\\rho_{\\Lambda,1}\\)). A subset of closed models evolve from a contracting de Sitter phase to minimum \\(a\\) (maximum \\(\\rho\\)), bounce and then evolve to an expanding de Sitter phase. Another subset of closed models expand from a singularity at \\(a=a_{1}\\), reach a maximum \\(a\\) and minimum \\(\\rho\\), only to re-collapse. **I4:**\\(\\beta<0\\), \\(P_{o}<0\\), \\(\\rho<\\rho_{\\Lambda,1}\\). In this case \\(a_{1}<a<\\infty\\), with \\(-\\infty<\\rho<\\rho_{\\Lambda,1}\\) and the fluid behaves in a phantom manner. The open models are all non-physical as they evolve from the \\(\\rho<0\\) region of the phase space. The flat and closed models evolve from a contracting de Sitter phase, bounce at minimum \\(a\\) and \\(\\rho\\), then re-expand, asymptotically approaching a expanding de Sitter phase. **I5:**\\(\\beta<0\\), \\(P_{o}<0\\), \\(\\rho_{\\Lambda,1}<\\rho<\\rho_{\\Lambda,2}\\) (where \\(\\rho_{\\Lambda,1}<\\rho_{\\Lambda,2}\\)). In this case \\(0<a<\\infty\\) with \\(\\rho_{\\Lambda,2}>\\rho>\\rho_{\\Lambda,1}\\) and the fluid behaves in a standard manner. The open models evolve from a expanding de Sitter phase (\\(\\rho=\\rho_{\\Lambda,2}\\)) to less rapid (lower \\(\\rho\\) and \\(H\\)) de Sitter phase (\\(\\rho=\\rho_{\\Lambda,1}\\)) with the spatial curvature being negative in the past and zero asymptotically in the future. The flat models behave in a similar manner, except that the curvature remains zero throughout the evolution. 
The closed models can undergo a phantom bounce with asymptotic de Sitter behavior in the future and past; a subset of these models enters a loitering phase both before and after the bounce. There is also a subset of closed models which oscillate indefinitely.

**I6:**\\(\\beta<0\\), \\(P_{o}<0\\), \\(\\rho>\\rho_{\\Lambda,2}\\). In this case \\(0<a<a_{1}\\) with \\(\\rho_{\\Lambda,2}<\\rho<\\infty\\) and the fluid behaves in a phantom manner. All models have a future singularity at \\(a=a_{1}\\), with closed models contracting from a past singularity to a minimum \\(a\\) and \\(\\rho\\) before re-expanding (phantom bounce).

### The Singularities

In general, singularities may behave in qualitatively different ways. The singularities present for the non-linear EoS are quite different from the standard \"Big Bang\"/\"Big Crunch\" singularity. The standard singularities are such that:

* \"Big Bang\"/\"Big Crunch\" : For \\(a\\to 0\\), \\(\\rho\\to\\infty\\). If the singularity occurs in the past (future) we refer to it as a \"Big Bang\" (\"Big Crunch\").

In order to differentiate between various types of singularities, we will use the following classification system for future singularities [38] (cf. also [45]):

* Type I (\"Big Rip\") : For \\(t\\to t_{\\star}\\), \\(a\\to\\infty\\), \\(\\rho\\to\\infty\\) and \\(|P|\\to\\infty\\).
* Type II (\"sudden\") : For \\(t\\to t_{\\star}\\), \\(a\\to a_{\\star}\\), \\(\\rho\\to\\rho_{\\star}\\) or \\(0\\) and \\(|P|\\to\\infty\\).
* Type III : For \\(t\\to t_{\\star}\\), \\(a\\to a_{\\star}\\), \\(\\rho\\to\\infty\\) and \\(|P|\\to\\infty\\).
* Type IV : For \\(t\\to t_{\\star}\\), \\(a\\to a_{\\star}\\), \\(\\rho\\to\\rho_{\\star}\\) or \\(0\\), \\(|P|\\to|P_{\\star}|\\) or \\(0\\) and derivatives of \\(H\\) diverge.

Here \\(t_{\\star}\\), \\(a_{\\star}\\), \\(\\rho_{\\star}\\) and \\(|P_{\\star}|\\) are constants with \\(a_{\\star}\\neq 0\\). The main difference in our case is that the various types of singularities may occur in the past or the future. The future singularity described in case **A2** falls into the category of Type III; however, the past singularity mentioned in case **A1** is also of Type III. In the case of the full quadratic EoS, all singularities which occur for a finite scale factor (\\(a=a_{1}\\)) are of Type III.

## III High energy regime dynamics

### The dimensionless dynamical system

It is convenient to describe the dynamics in terms of dimensionless variables. In the high energy regime these are:

\\[x=\\frac{\\rho}{|\\rho_{c}|}\\,\\ \\ y=\\frac{H}{\\sqrt{|\\rho_{c}|}}\\,\\ \\ \\eta=\\sqrt{|\\rho_{c}|}t. \\tag{32}\\]

The system of equations (1)-(2) then changes into:

\\[x^{\\prime} = -3y\\left((\\alpha+1)x+\\epsilon x^{2}\\right),\\]
\\[y^{\\prime} = -y^{2}-\\frac{1}{6}\\left((3\\alpha+1)x+3\\epsilon x^{2}\\right), \\tag{33}\\]

and the Friedmann equation (3) gives

\\[y^{2} = \\frac{x}{3}-\\frac{K}{|\\rho_{c}|a^{2}}. \\tag{34}\\]

The discrete parameter \\(\\epsilon\\) denotes the sign of the quadratic term, \\(\\epsilon\\in\\{-1,1\\}\\). The primes denote differentiation with respect to \\(\\eta\\), the normalized time variable. The variable \\(x\\) is the normalized energy density and \\(y\\) the normalized Hubble function. We will only consider the region of the phase space for which the energy density remains non-negative (\\(x\\geq 0\\)). The system of equations above is of the form \\(u^{\\prime}_{i}=f_{i}(u_{j})\\).
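Before discussing the fixed points in detail, we note that they can also be recovered by computer algebra. The minimal sketch below solves \\(x^{\\prime}=y^{\\prime}=0\\) for system (33) symbolically; it is only a cross-check of the analysis, with \\(\\alpha\\) kept free and the two signs of \\(\\epsilon\\) substituted explicitly (cf. Table 1).

```python
import sympy as sp

# Sketch: recover the finite fixed points of the dimensionless high energy
# system (33) by solving x' = y' = 0, with alpha symbolic and epsilon = +1, -1.
x, y, alpha = sp.symbols('x y alpha', real=True)

for eps in (+1, -1):
    xp = -3 * y * ((alpha + 1) * x + eps * x**2)
    yp = -y**2 - sp.Rational(1, 6) * ((3 * alpha + 1) * x + 3 * eps * x**2)
    print('epsilon =', eps)
    for sol in sp.solve([xp, yp], [x, y], dict=True):
        print('   x =', sp.simplify(sol[x]), ',  y =', sp.simplify(sol[y]))
```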
Since this system is autonomous, trajectories in phase space connect the fixed (equilibrium) points of the system (\\(u_{j,o}\\)), which satisfy the system of equations \\(f_{i}(u_{j,o})=0\\). The fixed points of the high energy system and their existence conditions (the conditions for which \\(x\\geq 0\\) and \\(x,y\\in\\mathbb{R}\\)) are given in Table 1. The first fixed point (M) represents an empty flat (Minkowski) model. The parabola \\(y^{2}=x/3\\) is the union of trajectories representing flat models, \\(K=0\\) in Eq. (34) (see Figs. 1 and 3-7). The trajectories below the parabola represent open models (\\(K=-1\\)), while trajectories above the parabola represent closed models (\\(K=+1\\)). The second fixed point (E) represents a generalized static Einstein universe. This requires some form of inflationary matter and therefore may only exist when \\(\\alpha<-1/3\\) if \\(\\epsilon=+1\\) and when \\(\\alpha>-1/3\\) if \\(\\epsilon=-1\\). The last two points represent expanding and contracting spatially flat de Sitter models (\\(dS_{\\pm}\\)). These points exist when the fluid permits an effective cosmological constant point, \\(x_{\\Lambda}:=\\rho_{\\Lambda}/\\rho_{c}=-\\epsilon(\\alpha+1)\\); in addition \\(x_{\\Lambda}>0\\) must be true for the fixed points to be in the physical region of the phase space. There are further fixed points at infinity; these can be found by studying the corresponding compactified phase space. The first additional fixed point is at \\(x=y=\\infty\\) and represents a singularity with infinite expansion and infinite energy density. The second point is at \\(x=\\infty\\), \\(y=-\\infty\\) and represents a singularity with infinite contraction and infinite energy density.

### Generalities of stability analysis

The stability nature of the fixed points can be found by carrying out a linear stability analysis. In brief (see e.g. [37] for details), this involves analyzing the behavior of linear perturbations \\(u_{j}=u_{j,o}+v_{j}\\) around the fixed points, which obey the equations \\(v_{i}^{\\prime}={\\bf M}_{ij}v_{j}\\). The matrix \\({\\bf M}\\) is the Jacobian matrix of the dynamical system and is of the form:

\\[{\\bf M}_{ij}=\\left.\\frac{\\partial f_{i}}{\\partial u_{j}}\\right|_{u_{k}=u_{k,o}}. \\tag{35}\\]

The eigenvalues \\(\\lambda_{i}\\) of the Jacobian matrix evaluated at the fixed points tell us the linear stability character of the fixed points. A fixed point is said to be hyperbolic if the real parts of the eigenvalues are non-zero (\\(\\mathbb{R}(\\lambda_{i})\\neq 0\\)). If all the real parts of the eigenvalues are positive (\\(\\mathbb{R}(\\lambda_{i})>0\\)), the point is said to be a repeller: any small deviation from this point will cause the system to move away from this state. If all the real parts are negative (\\(\\mathbb{R}(\\lambda_{i})<0\\)), the point is said to be an attractor: if the system is perturbed away from this state, it will rapidly return to the equilibrium state. If some of the real parts are positive while others are negative, then the point is said to be a saddle point. If the eigenvalues of the fixed point are purely imaginary, then the point is a center; if the center nature of the fixed point is confirmed by some non-linear analysis, then the trajectories form a set of concentric closed loops around the point. If the eigenvalues do not fall into these categories, we will resort to numerical methods to determine their stability.
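The same procedure is easily automated. The sketch below builds the Jacobian (35) of system (33) with a computer algebra system and evaluates its eigenvalues at the non-trivial fixed points of Table 1; it is illustrative only, but the signs of the resulting real parts reproduce the stability characters summarized in Table 3.

```python
import sympy as sp

# Sketch of the linear stability analysis: the Jacobian (35) of system (33),
# evaluated at the fixed points of Table 1 (alpha symbolic, epsilon = +1, -1).
x, y, alpha = sp.symbols('x y alpha', real=True)

for eps in (+1, -1):
    f = sp.Matrix([-3 * y * ((alpha + 1) * x + eps * x**2),
                   -y**2 - sp.Rational(1, 6) * ((3 * alpha + 1) * x + 3 * eps * x**2)])
    J = f.jacobian([x, y])
    points = {
        'E':   {x: -eps * (3 * alpha + 1) / 3, y: 0},
        'dS+': {x: -eps * (alpha + 1), y:  sp.sqrt(-eps * (alpha + 1) / 3)},
        'dS-': {x: -eps * (alpha + 1), y: -sp.sqrt(-eps * (alpha + 1) / 3)},
    }
    for name, p in points.items():
        eigs = [sp.simplify(ev) for ev in J.subs(p).eigenvals()]
        print('epsilon = %+d  %s:' % (eps, name), eigs)
```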
The eigenvalues for the fixed points of the system (Eq.'s (33)) are given in Table 2 and the linear stability character is given in Table 3. ### The \\(\\epsilon=+1\\) case We first consider the system when we have a positive quadratic energy density term (\\(\\epsilon=+1\\)) in the high energy regime EoS. We will concentrate on the region around the origin as this is where the finite energy density fixed points are all located. The plots have been created using the symbolic mathematics application Maple 9.5. The individual plots are made up by three layers, \\begin{table} \\begin{tabular}{c c c} Name & \\(\\lambda_{1}\\) & \\(\\lambda_{2}\\) \\\\ \\hline \\(M\\) & \\(0\\) & \\(0\\) \\\\ \\(E\\) & \\(\\sqrt{\\epsilon(3\\alpha+1)}\\) & \\(-\\sqrt{\\epsilon(3\\alpha+1)}\\) \\\\ \\(dS_{+}\\) & \\((\\alpha+1)\\sqrt{-3\\epsilon(\\alpha+1)}\\) & \\(-\\frac{2}{3}\\sqrt{-3\\epsilon(\\alpha+1)}\\) \\\\ \\(dS_{-}\\) & \\(-(\\alpha+1)\\sqrt{-3\\epsilon(\\alpha+1)}\\) & \\(\\frac{2}{3}\\sqrt{-3\\epsilon(\\alpha+1)}\\) \\\\ \\end{tabular} \\end{table} Table 2: Eigenvalues for the fixed points of the high energy regime system. \\begin{table} \\begin{tabular}{c c c c} Name & \\(x\\) & \\(y\\) & Existence \\\\ \\hline \\(M\\) & \\(0\\) & \\(0\\) & \\(-\\infty<\\alpha<\\infty\\) \\\\ \\(E\\) & \\(-\\frac{\\epsilon(3\\alpha+1)}{3}\\) & \\(0\\) & \\(\\epsilon(3\\alpha+1)<0\\) \\\\ \\(dS_{+}\\) & \\(-\\epsilon(\\alpha+1)\\) & \\(+\\sqrt{\\frac{-\\epsilon(\\alpha+1)}{3}}\\) & \\(\\epsilon(\\alpha+1)<0\\) \\\\ \\(dS_{-}\\) & \\(-\\epsilon(\\alpha+1)\\) & \\(-\\sqrt{\\frac{-\\epsilon(\\alpha+1)}{3}}\\) & \\(\\epsilon(\\alpha+1)<0\\) \\\\ \\end{tabular} \\end{table} Table 1: Location and existence conditions (\\(x\\geq 0\\) and \\(x,y\\in\\mathbb{R}\\)) of the fixed points of the high energy regime system. \\begin{table} \\begin{tabular}{c c c} Name & \\(\\epsilon=+1\\) & \\(\\epsilon=-1\\) \\\\ \\hline \\(M\\) & undefined & undefined \\\\ \\(E\\) & Saddle (\\(\\alpha\ eq-1/3\\)) & Center (\\(\\alpha\ eq-1/3\\)) \\\\ \\(dS_{+}\\) & Attractor (\\(\\alpha<-1\\)) & Saddle (\\(\\alpha>-1\\)) \\\\ \\(dS_{-}\\) & Repeller (\\(\\alpha<-1\\)) & Saddle (\\(\\alpha>-1\\)) \\\\ \\end{tabular} \\end{table} Table 3: The linear stability of the fixed points for the high energy regime system. the first is a directional (represented by grey arrows) field plot of the state space. The second layer represents the separatrices and fixed points of the state space. A separatrix (black lines) is a union of trajectories that marks a boundary between subsets of trajectories with different properties and can not be crossed. The fixed points are represented by black dots. The final layer represents some example trajectories (grey lines) which have been calculated by numerically integrating the system of equations for a set of initial conditions. The character of the fixed point M is undefined and so is determined numerically. The fixed point representing the generalized Einstein static solution is a saddle point. The fixed points representing the generalized expanding (contracting) de Sitter points are attractor (repeller) points. The trajectories or fixed points in the \\(y>0\\) (\\(y<0\\)) region represent expanding (contracting) models. We will mainly discuss the right hand side of the state space (expanding models) as in general the corresponding trajectory on the left hand side is identical under time reversal. 
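The layered plots described above can be reproduced with freely available tools as well; the sketch below uses matplotlib in place of Maple for one illustrative sub-case (\\(\\epsilon=+1\\), \\(\\alpha=-1.5\\)), drawing the direction field of system (33), the flat Friedmann separatrix \\(y^{2}=x/3\\) and a few numerically integrated example trajectories. The initial conditions are arbitrary points chosen in the bounded (phantom) region of that sub-case.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Sketch of the layered phase-space plots: direction field, flat Friedmann
# separatrix and sample trajectories for system (33), epsilon=+1, alpha=-1.5.
alpha, eps = -1.5, +1.0

def rhs(eta, u):
    x, y = u
    return [-3 * y * ((alpha + 1) * x + eps * x**2),
            -y**2 - ((3 * alpha + 1) * x + 3 * eps * x**2) / 6]

X, Y = np.meshgrid(np.linspace(0.0, 1.0, 25), np.linspace(-0.7, 0.7, 25))
U, V = rhs(0.0, [X, Y])
plt.quiver(X, Y, U, V, color='0.75')                            # direction field

xs = np.linspace(0.0, 1.0, 300)
plt.plot(xs, np.sqrt(xs / 3), 'k', xs, -np.sqrt(xs / 3), 'k')   # FFS (K = 0)

for x0, y0 in [(0.10, 0.20), (0.30, -0.30), (0.45, 0.10)]:      # sample trajectories
    sol = solve_ivp(rhs, (0.0, 25.0), [x0, y0], max_step=0.05)
    plt.plot(sol.y[0], sol.y[1], color='0.4')

plt.xlabel('x'); plt.ylabel('y')
plt.show()
```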
#### iii.2.1 The \\(\\alpha<-1\\) sub-case The phase space of the system is considered when \\(\\alpha<-1\\) and is shown in Fig. 1. The lowest horizontal line (\\(x=0\\)) is the separatrix for open models (\\(K=-1\\)) and will be referred to as the open Friedmann separatrix (OFS). The trajectories on the separatrix represent Milne models (\\(x=0\\), \\(K=-1\\) and \\(a(\\eta)\\propto\\eta\\)) which are equivalent to a Minkowski space-time in a hyperbolic co-ordinate system. The second higher horizontal line (\\(x_{\\Lambda}=-(\\alpha+1)\\)) is the separatrix which is the dividing line between regions of phantom (\\(x<x_{\\Lambda}\\)) and non-phantom/standard behavior(\\(x>x_{\\Lambda}\\)), we will call this the phantom separatrix (PS). The standard region corresponds to the case **B1**, while the phantom region corresponds to the case **C1**. In the phantom region the fluid violates the Null Energy Condition (\\(\\rho+P<0\\)). This means the energy density is increasing (decreasing) in the future for an expanding (contracting) universe. In the standard case of the linear EoS in GR, this occurs when \\(w<-1\\) and ultimately leads to a Type I singularity [39; 46]. The parabola (\\(y^{2}=x/3\\)) represents the separatrix for flat Friedmann models (\\(K=0\\)), we will call this the flat Friedmann separatrix (FFS). The inner most thick curve is the separatrix for closed Friedmann models (\\(K=+1\\)) and will be called the closed Friedmann separatrix (CFS). The separatrix has the form: \\[y^{2}=\\frac{x}{3}-\\left[\\frac{A(\\alpha+1)x}{(\\alpha+1)+\\epsilon x}\\right]^{ \\frac{2}{3(\\alpha+1)}}. \\tag{36}\\] The constant \\(A\\) is fixed by ensuring that the saddle fixed point coincides with the fixed point representing the generalized Einstein static model (\\(E\\)). The constant is given in terms of the EoS parameters and has the form: \\[A=-\\frac{2}{\\epsilon(3\\alpha+1)(\\alpha+1)}\\left(-\\frac{\\epsilon(3\\alpha+1)}{9 }\\right)^{\\frac{3(\\alpha+1)}{2}}. \\tag{37}\\] The Minkowski fixed point is located at the intersection of the OFS and FFS. The generalized flat de Sitter fixed points are located at the intersection of the PS and FFS. The generalized Einstein static fixed point is located on the CFS. The trajectories between the OFS and the PS (\\(0<x<x_{\\Lambda}\\)) represent models which exhibit phantom behavior (the case **C1**). The open models in the phantom region are asymptotic to a Milne model in the past and to a generalized flat de Sitter model (\\(dS_{+}\\)) in the future. The closed models in the phantom region evolve from a contracting de Sitter phase, through a phantom phase to an expanding de Sitter phase (phantom bounce). It is interesting to note that unlike the standard GR case the phantom behavior does not result in a Type I singularity but asymptotically evolves to a expanding de Sitter phase. This is similar to the behavior seen in the phantom generalized Chaplygin gas case [39]. The trajectories on the PS all represent generalized de Sitter models (\\(x^{\\prime}=0\\)). The fixed points represent generalized flat de Sitter models (\\(K=0\\)). The open model on the PS represent generalized open de Sitter models (\\(K=-1\\)) in hyperbolic co-ordinates. The closed models on the PS evolve from a contracting phase to an expanding phase and represent generalized closed de Sitter models (\\(K=+1\\)). The Friedmann equation can be solved for Figure 1: The phase space for the high energy regime system with \\(\\epsilon=+1\\) and \\(\\alpha<-1\\). 
The upper (lower) region corresponds to the case **B1** (**C1**). such models to give: \\[a(\\eta) =\\sqrt{\\frac{3}{x_{\\sigma}}}\\cosh\\left[\\sqrt{\\frac{x_{\\sigma}}{3}}( \\eta-\\eta_{o})\\right]\\quad\\text{ for }k=1\\,,\\] \\[a(\\eta) =\\mathrm{e}^{\\sqrt{\\frac{2\\pi}{3}}(\\eta-\\eta_{o})}\\quad\\quad \\quad\\quad\\quad\\quad\\quad\\text{ for }k=0\\,, \\tag{38}\\] \\[a(\\eta) =\\sqrt{\\frac{3}{x_{\\sigma}}}\\sinh\\left[\\sqrt{\\frac{x_{\\sigma}}{3} }(\\eta-\\eta_{o})\\right]\\quad\\text{ for }k=-1\\,,\\] The region above the PS represents models which evolve in a non-phantom/standard manner (the case **B1**). The trajectories in the expanding region (\\(y>0\\)) of the phase space are asymptotic to a Type III singularity in the past5. The trajectories outside the FFS represent open models which evolve from a Type III singularity to a flat de Sitter phase, as do the trajectories on the FFS. The trajectories in between the CFS and the FFS evolve from a Type III singularity to a flat expanding de Sitter phase but may enter a phase of loitering. Loitering is characterized by the Hubble parameter dipping to a low value over a narrow red-shift range, followed by a rise again. In order to see this more clearly, we have plotted the normalized Hubble parameter (\\(y\\)) as a function of scale factor for three different trajectories in Fig. 2. The top two curves represent the open and flat models, with the Hubble parameter dropping off quicker for the flat Friedmann model. The lower most curve is the Hubble parameter for the closed model. The plot shows that the closed model evolves to a loitering phase. Loitering cosmological models in standard cosmology were first found for closed FLRW models with a cosmological constant. More recently, brane-world models which loiter have been found [47], these models are spatially flat but can behave dynamically like a standard FLRW closed model. The interesting point here is that the models mentioned above loiter without the need of a cosmological constant (due to the appearance of an effective cosmological constant), the topology is asymptotically closed in the past and flat in the future. The trajectories inside the CFS can have two distinct types of behavior corresponding to the central regions above and below the generalized Einstein static fixed point. Trajectories in the lower region represent closed models which evolve from a contracting de Sitter phase, bounce and then evolve to a expanding de Sitter phase. The trajectories in the upper region evolve from a Type III Singularity, expand to a maximum \\(a\\) (minimum \\(x\\)) and then re-collapse to a Type III singularity (we will refer to such re-collapsing models as turn-around models). Footnote 5: The Type III singularity appears to be a generic feature of the high energy regime EoS and can occur both in the future and the past. #### iii.2.2 The \\(-1<\\alpha<-1/3\\) sub-case The phase space for the system when \\(-1<\\alpha<-1/3\\) is shown in Fig. 3. The fixed points representing the Figure 3: The phase space for the high energy regime system with \\(\\epsilon=+1\\) and \\(-1<\\alpha<-1/3\\). The entire region corresponds to the case **A1**. Figure 2: The normalized Hubble parameter, \\(y\\) for models with differing curvature. Starting from the top we have open, flat and the closed models. flat generalized de Sitter models are no longer in the physical region (\\(x>0\\)) of the phase space. The open, flat and closed Friedmann separatrices (OFS, FFS and CFS) remains the same. 
The phantom separatrix (PS) is no longer present and all trajectories represent models with non-phantom/standard fluids (this corresponds to the case **A1**). The main difference is that the generic future attractor is now the Minkowski model. The trajectories between the OFS and FFS now evolve from a Type III singularity to a Minkowski model, as do the flat Friedmann models. The models between the FFS and CFS now evolve from a Type III singularity to a Minkowski with the possibility of entering a loitering phase (as before the model is asymptotically flat in the future). The trajectories inside the CFS and above the Einstein static fixed point still represent turn-around models. The trajectories inside the OFS and below the Einstein static model now represent standard bounce models, that is they evolve from a Minkowski model, contract to a finite size, bounce and then expand to a Minkowski model. #### iii.4.3 The \\(\\alpha\\geq-1/3\\) sub-case Next we consider the system when \\(\\alpha\\geq-1/3\\), the phase space is shown in Fig. 4. The fixed point representing the Einstein static models is now located in the \\(x<0\\) region of the phase space. The fluid in the entire physical region behaves in a non-phantom manner and corresponds to the case **A1**. The OFS and FFS remain the same and the CFS is no longer present. The trajectories between the OFS and FFS evolve from a Type III singularity to a Minkowski model. All trajectories above the FFS now represent turn-around models which start and terminate at a Type III singularity. The behavior of the models is qualitatively the same as that of the standard FLRW model with a linear EoS where \\(w=\\alpha\\), in the linear EoS case the Type III singularity is replaced by a standard \"Big Bang\". ### The \\(\\epsilon=-1\\) case We now consider the system when we have a negative quadratic energy density term (\\(\\epsilon=-1\\)) in the high energy regime EoS. The character of the fixed point M is still undefined. The fixed point representing the generalized Einstein static model is now a center. The fixed points representing the expanding/contracting flat de Sitter points now have saddle stability. As before, the trajectories or fixed points in the \\(y>0\\) (\\(y<0\\)) region represent expanding models (contracting models), the black lines represent separatrix, grey lines represent example trajectories and fixed points are represented by black dots. Figure 4: The phase space for the high energy regime system with \\(\\epsilon=+1\\) and \\(\\alpha\\geq-1/3\\). The entire region corresponds to the case **A1**. Figure 5: The phase space for the high energy regime system with \\(\\epsilon=-1\\) and \\(\\alpha<-1\\). The entire region corresponds to the case **A2**. The \\(\\alpha<-1\\) sub-case The phase space of the system when \\(\\alpha<-1\\) is shown in Fig. 5. The horizontal line (\\(x=0\\)) is still the open Friedman separatrix (OFS). The parabola is the flat Friedmann separatrix (FFS). The intersection of the OFS and FFS coincides with the Minkowski fixed point. All the trajectories in the physical region of the phase space exhibit phantom behavior (corresponding to the case **A2**), the energy density increases in an expanding model. The trajectories in the expanding (contracting) region in general evolve to a Type III singularity in the future (past). The trajectories between the OFS and the FFS are asymptotic to a Milne model in the past and are asymptotic to a Type III singularity in the future. 
The trajectories on the FFS start from a Minkowski model and enter a phase of super-inflationary expansion and evolve to a Type III singularity. Trajectories that start in a contracting phase during which the energy density decreases, reach a minimum \\(a\\) (minimum \\(x\\)) and then expand where the energy density increases represent phantom bounce models. The trajectories above the FFS represent closed models which evolve through a phantom bounce, but start and terminate in a Type III singularity. The behavior of the models is qualitatively the same as that of the FLRW models with a phantom linear EoS (\\(w<-1\\)) except that the Type III singularity is replaced by a Type I (\"Big Rip\") singularity. #### iv.2.2 The \\(-1<\\alpha<-1/3\\) sub-case The phase space for the system when \\(-1<\\alpha<-1/3\\) is shown in Fig. 6. The lowest horizontal line (\\(x=0\\)) is the OFS. The second higher horizontal line, \\(x_{\\Lambda}=(\\alpha+1)\\) is the phantom separatrix (PS), this divides the state space into regions of phantom (\\(x>x_{\\Lambda}\\)) and non-phantom/standard behavior (\\(x<x_{\\Lambda}\\)). The phantom region corresponds to the case **B2** and the standard region corresponds to the case **C2**. The flat de Sitter (\\(dS_{\\pm}\\)) points are located at the intersection of the FFS and the PS. The open models in the standard matter region (\\(0<x<x_{\\Lambda}\\)) are past asymptotic to open expanding de Sitter models in the past and evolve to Minkowski models in the future. The closed models in the region represent the standard bounce models, that is they evolve from a Minkowski model, contract to a minimum \\(a\\) (maximum \\(x\\)) and then expand to a Minkowski model. The trajectories above the PS (\\(x>(\\alpha+1)\\)) all exhibit phantom behavior. The open models in this region are past asymptotic to open de Sitter models in the past and evolve to a Type III singularity in the future. The closed models in the region all represent models which undergo a phantom bounce but start and terminate in a Type III singularity. Figure 6: The phase space for the high energy regime system with \\(\\epsilon=-1\\) and \\(-1<\\alpha<-1/3\\). The upper (lower) region corresponds to the case **B2** (**C2**). Figure 7: The phase space for the high energy regime system with \\(\\epsilon=-1\\) and \\(\\alpha\\geq-1/3\\). The upper (lower) region corresponds to the case **B2** (**C2**). The \\(\\alpha\\geq-1/3\\) sub-case We now consider the system when \\(\\alpha\\geq-1/3\\), the phase space is shown in Fig. 7. The OFS, FFS and the PS are all still present, the phantom regions still corresponds to the case **B2** and the standard region to the case **C2**. The trajectories in the phantom region (\\(x>x_{\\rm A}\\)) behave in a similar manner to the previous case, as do the open models in the standard matter region (\\(0<x<x_{\\rm A}\\)). The main difference is in the region representing closed models (\\(K=1\\)) with non-phantom behavior. There is now a new fixed point which represents a generalized Einstein static model (\\(E\\)). The closed models in the region now represent oscillating models. This is represented by closed concentric loops centered on the Einstein static fixed point. These oscillating models also appear in the low energy system and will be discussed in more detail later. 
## IV Low energy regime dynamics ### The dimensionless dynamical system We now consider the system of equations for the low energy regime EoS, which can be simplified and expressed in terms of the following dimensionless variables: \\[x=\\frac{\\rho}{|P_{o}|}\\;,\\;\\;y=\\frac{H}{\\sqrt{|P_{o}|}}\\;,\\;\\;\\eta=\\sqrt{|P_{ o}|}t\\;. \\tag{39}\\] The system of equations is then: \\[y^{2} = \\frac{x}{3}-\\frac{K}{|P_{o}|a^{2}}, \\tag{40}\\] \\[y^{\\prime} = -y^{2}-\\frac{1}{6}\\left(3\\epsilon_{p}+(3\\alpha+1)x\\right),\\] (41) \\[x^{\\prime} = -3y\\left(\\epsilon_{p}+(\\alpha+1)x\\right). \\tag{42}\\] The discrete parameter \\(\\epsilon_{p}\\) denotes the sign of the pressure term, \\(\\epsilon_{p}\\in\\{-1,1\\}\\). The primes denote differentiation with respect to the new \\(\\eta\\). The variables \\(x\\) and \\(y\\) are the new normalized energy density and Hubble parameter. As before only the positive energy density region of the phase space will be considered. The fixed points of the system and the existence conditions are given in Table 4. As before, by existence we mean the conditions on the parameters to insure \\(x\\geq 0\\) and \\(x,y\\in\\mathbb{R}\\). The Minkowski model (\\(x=y=0\\)) is no longer a fixed point of the system. The first fixed point (E) represents a generalized static Einstein model. This requires the overall effective equation of state to be that of inflationary matter and therefore only exists when \\(\\epsilon_{p}/(3\\alpha+1)<0\\). The last two points represent generalized expanding and contracting flat de Sitter models. These points only exist if the fluid permits an effective cosmological constant point \\(\\tilde{x}_{\\rm A}=\\tilde{\\rho}_{\\rm A}/|P_{o}|=-\\epsilon_{p}/(\\alpha+1)\\), also \\(\\tilde{x}_{\\rm A}\\geq 0\\) for the points to be in the physical region of the phase space. The eigenvalues of the equilibrium points are given in Table 5, while the linear stability character is given in Table 6. ### The \\(\\epsilon_{p}=+1\\) case We start by considering the system when we have a positive constant pressure term (\\(\\epsilon_{p}=+1\\)) in the low energy regime EoS. The Minkowski (\\(x=y=0\\)) point is no longer present and the Einstein static solution has the stability character of a center. The fixed points representing the generalized expanding/contracting de Sitter points (\\(dS_{\\pm}\\)) now have saddle stability. As before black lines represent separatrix, grey lines represent example trajectories and black dots represent fixed points of the system. \\begin{table} \\begin{tabular}{c c c} \\hline \\hline Name & \\(\\lambda_{1}\\) & \\(\\lambda_{2}\\) \\\\ \\hline \\(E\\) & \\(\\sqrt{-\\epsilon_{p}}\\) & \\(-\\sqrt{-\\epsilon_{p}}\\) \\\\ \\(dS_{+}\\) & \\(\\sqrt{\\frac{-3(\\alpha+1)}{\\epsilon_{p}}}\\) & \\(-\\frac{2}{\\sqrt{-3\\epsilon_{p}(\\alpha+1)}}\\) \\\\ \\(dS_{-}\\) & \\(-\\sqrt{\\frac{-3(\\alpha+1)}{\\epsilon_{p}}}\\) & \\(\\frac{2}{\\sqrt{-3\\epsilon_{p}(\\alpha+1)}}\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: Eigenvalues of the fixed points of the low energy regime system. 
\\begin{table} \\begin{tabular}{c c c c} \\hline \\hline Name & \\(x\\) & \\(y\\) & Existence \\\\ \\hline \\(E\\) & \\(-\\frac{3\\epsilon_{p}}{(3\\alpha+1)}\\) & \\(0\\) & \\(\\frac{\\epsilon_{p}}{(3\\alpha+1)}<0\\) \\\\ \\(dS_{+}\\) & \\(-\\frac{\\epsilon_{p}}{(\\alpha+1)}\\) & \\(+\\sqrt{\\frac{-\\epsilon_{p}}{3(\\alpha+1)}}\\) & \\(\\frac{\\epsilon_{p}}{(\\alpha+1)}<0\\) \\\\ \\(dS_{-}\\) & \\(-\\frac{\\epsilon_{p}}{(\\alpha+1)}\\) & \\(-\\sqrt{\\frac{-\\epsilon_{p}}{3(\\alpha+1)}}\\) & \\(\\frac{\\epsilon_{p}}{(\\alpha+1)}<0\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: Location and existence conditions of the fixed points of the low energy regime system. \\begin{table} \\begin{tabular}{c c c} \\hline \\hline Name & \\(\\epsilon_{p}=+1\\) & \\(\\epsilon_{p}=-1\\) \\\\ \\hline \\(E\\) & Center (\\(\\alpha\ eq-1/3\\)) & Saddle (\\(\\alpha\ eq-1/3\\)) \\\\ \\(dS_{+}\\) & Saddle (\\(\\alpha<-1\\)) & Attractor (\\(\\alpha>-1\\)) \\\\ \\(dS_{-}\\) & Saddle (\\(\\alpha<-1\\)) & Repeller (\\(\\alpha>-1\\)) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 6: The linear stability of the fixed points for the low energy regime system. The \\(\\alpha<-1\\) sub-case The phase space for the system when \\(\\alpha<-1\\) is shown in Fig. 8. The open Friedmann separatrix (\\(x=0\\)) is no longer present, and the \\(x=y=0\\) point is no longer a fixed point of the system. The horizontal line (\\(\\tilde{x}_{\\Lambda}=-(\\alpha+1)^{-1}\\)) is the phantom separatrix (PS), dividing the state space into regions with phantom (\\(x>\\tilde{x}_{\\Lambda}\\)) and standard (\\(x<\\tilde{x}_{\\Lambda}\\)) behavior. The phantom region corresponds to the case **E1** and the standard region corresponds to the case **F1**. The parabola \\(y^{2}=x/3\\) is the separatrix representing the flat Friedmann models (FFS), this divides the remaining trajectories into open and closed models. The intersection of the PS and FFS coincides with the fixed points of the generalized flat de Sitter models. The trajectories in the upper region that start in a contracting phase (during which the energy density decreases), reach a minimum \\(a\\) (minimum \\(x\\)) and then expand, representing phantom bounce models which terminate in a Type I singularity. The closed models in the phantom region (\\(x>\\tilde{x}_{\\Lambda}\\)) represent phantom bounce models which start and terminate in a Type I singularity 6. The open models in the phantom region are asymptotic to open de Sitter models in the past and evolve to a Type I singularity in the future. The trajectories below the PS (\\(x<\\tilde{x}_{\\Lambda}\\)) represent models which all behave in a standard manner (the **F1** case). The open models in this region are all non-physical as they all evolve to the \\(x<0\\) region of the phase space. The region corresponding to closed models (above the FFS) contain a fixed point which represents the generalized Einstein static (E) model. The region is filled by a infinite set of concentric closed loops centered on the Einstein static fixed point, the closed loops represent oscillating models. The Friedmann equation for such models is given by: Footnote 6: The Type I singularity is a generic feature of the low energy regime EoS and can appear both in the future and the past. \\[y^{2}=\\frac{x}{3}-K\\left[\\frac{\\epsilon_{p}+(\\alpha+1)x}{B(\\alpha+1)}\\right]^ {\\frac{2}{B(\\alpha+1)}}. \\tag{43}\\] The constant \\(B\\) is fixed by the location of the Einstein fixed point (\\(E\\)). 
The constant is given in terms of \\(\\alpha\\) and \\(\\epsilon_{p}\\): \\[B=\\frac{-2\\epsilon_{p}}{(\\alpha+1)(3\\alpha+1)}\\left(\\frac{3\\alpha+1}{-\\epsilon _{p}}\\right)^{\\frac{3(\\alpha+1)}{2}}. \\tag{44}\\] These oscillating models appear for \\(-\\infty<\\alpha<-1/3\\) when \\(\\epsilon_{p}=+1\\) and are qualitatively similar to the oscillating models seen in the high energy case. The exact behavior of the variables for these models can be calculated by fixing the EoS parameter \\(\\alpha\\). The qualitative behavior remains the same for the models, however the maximum and minimum values of the variables change. In the case of \\(\\alpha=-2/3\\) the equations are greatly simplified, the scale factor oscillates such that: \\[a(\\eta)=a_{o}\\left(1+\\sqrt{1-K}\\sin(\\eta_{o}-\\eta)\\right) \\tag{45}\\] The maximum and minimum scale factor is then: \\[a_{max}=a_{o}(1+\\sqrt{1-K})\\;,\\;\\;a_{min}=a_{o}(1-\\sqrt{1-K})\\;. \\tag{46}\\] The normalized hubble parameter (\\(y\\)) is: \\[y=y_{o}\\frac{\\sqrt{1-K}\\cos(\\eta_{o}-\\eta)}{1+\\sqrt{1-K}\\sin(\\eta_{o}-\\eta)}. \\tag{47}\\] The maximum and minimum \\(y\\) is given by: \\[y_{max}=y_{o}\\sqrt{\\frac{1-K}{K}}\\;,\\;\\;y_{min}=-y_{o}\\sqrt{\\frac{1-K}{K}}\\;. \\tag{48}\\] The normalized energy density (\\(x\\)) is given by: \\[x=x_{o}\\left(\\frac{1-\\sqrt{1-K}\\sin(\\eta_{o}-\\eta)}{1+\\sqrt{1-K}\\sin(\\eta_{o}- \\eta)}\\right). \\tag{49}\\] The maximum and minimum \\(x\\) are: \\[x_{max}=x_{o}\\left(\\frac{1+\\sqrt{1-K}}{1-\\sqrt{1-K}}\\right)\\;,\\;\\;x_{min}=x_{o }\\left(\\frac{1-\\sqrt{1-K}}{1+\\sqrt{1-K}}\\right)\\;. \\tag{50}\\] Figure 8: The phase space for the low energy regime system with \\(\\epsilon_{p}=+1\\) and \\(\\alpha<-1\\). The upper (lower) region corresponds to the case **E1** (**F1**). #### iv.2.2 The \\(-1<\\alpha<-1/3\\) sub-case We now consider the case when \\(-1<\\alpha<-1/3\\), the phase space is shown in Fig. 9. All trajectories in the physical region of the phase space exhibit standard behavior and correspond to the case \\({\\bf D1}\\). There is only one fixed point in the \\(x\\geq 0\\) region of the phase space, this point represents the generalized Einstein static model (\\(E\\)). The FFS represent models which evolve from a standard \"Big Bang\", evolve to a Minkowski model and then to a standard \"Big Crunch\" (turn around model). The open models (below the FFS) are non-physical as the evolve into the \\(x<0\\) region. The trajectories above the separatrix represent closed models (\\(K>0\\)) which oscillate indefinitely between a maximum and minimum \\(a\\) (minimum and maximum \\(x\\)), as seen in the previous case. #### iv.2.3 The \\(-1/3\\leq\\alpha\\) sub-case The phase space for the system when \\(-1/3\\leq\\alpha\\) is shown in Fig. 10. As in the previous sub-case the fluid behaves in a standard manner and corresponds to the case \\({\\bf D1}\\). There are no fixed points in the physical region of the phase space. The parabola is the FFS and represents flat model which evolve from a \"Big Bang\", approach a Minkowski model and then re-collapse (turn-around models) to a \"Big Crunch\". The open models (below the FFS) are all non-physical as they evolve to the negative energy density region (\\(x<0\\)) of the phase space. The closed models evolve from a \"Big Bang\", reach a maximum \\(a\\) (minimum \\(x\\)) and re-collapse to a \"Big Crunch\". 
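The oscillating closed models described above are easy to exhibit numerically. The sketch below integrates the low energy system (41)-(42) for \\(\\epsilon_{p}=+1\\) and \\(\\alpha=-2/3\\), the case solved exactly in Eqs. (45)-(50), starting from an arbitrary closed-model initial condition near the Einstein point \\(E=(3,0)\\); the trajectory traces a closed loop around \\(E\\) rather than evolving towards a singularity. The initial condition and integration interval are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: oscillating closed models of the low energy system (41)-(42) for
# eps_p = +1, alpha = -2/3, for which the Einstein point E sits at (x, y) = (3, 0).
eps_p, alpha = +1.0, -2.0 / 3.0

def rhs(eta, u):
    x, y = u
    return [-3 * y * (eps_p + (alpha + 1) * x),                # Eq. (42)
            -y**2 - (3 * eps_p + (3 * alpha + 1) * x) / 6]     # Eq. (41)

sol = solve_ivp(rhs, (0.0, 40.0), [3.5, 0.0], rtol=1e-10, atol=1e-12)
x, y = sol.y
# x and y oscillate indefinitely between finite extrema, looping around E.
print('x in [%.4f, %.4f], y in [%.4f, %.4f]' % (x.min(), x.max(), y.min(), y.max()))
```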
Figure 9: The phase space for the low energy regime system with \\(\\epsilon_{p}=+1\\) and \\(-1\\leq\\alpha<-1/3\\). The entire region corresponds to the case \\({\\bf D1}\\). Figure 10: The phase space for the low energy regime system with \\(\\epsilon_{p}=+1\\) and \\(-1/3\\leq\\alpha\\). The entire region corresponds to the case \\({\\bf D1}\\). ### The \\(\\epsilon_{p}=-1\\) case We now consider the system when we have a negative constant pressure term (\\(\\epsilon_{p}=-1\\)) in the low energy regime EoS. As before, the Minkowski (\\(x=y=0\\)) point is no longer a fixed point of the system and the OFS is not present. The fixed point representing the generalized Einstein static model (\\(E\\)) has the stability character of a saddle. The fixed points representing the generalized expanding (contracting) flat de Sitter points now have attractor (repeller) stability. #### iv.3.1 The \\(\\alpha<-1\\) sub-case The phase space for the low energy system when \\(\\alpha<-1\\) is shown in Fig. 11. All the trajectories in the \\(x>0\\) region of the phase space now exhibit phantom behavior and correspond to the case \\({\\bf D2}\\). The open models are all non-physical as they all evolve from the negative energy density region of the phase space. The flat and closed models represent phantom bounce models which start and end in a Type I singularity. They evolve from a Type I singularity, contract, bounce at a minimum \\(a\\) (minimum \\(x\\)) and expand to a Type I singularity. #### iv.2.2 The \\(-1<\\alpha<-1/3\\) sub-case The phase space for the system when \\(-1<\\alpha<-1/3\\) is shown in Fig. 12. The horizontal line, \\(\\tilde{x}_{\\Lambda}=(\\alpha+1)^{-1}\\), is the phantom separatrix (PS), dividing the state space into regions with phantom (\\(x<\\tilde{x}_{\\Lambda}\\)) and standard behavior (\\(x>\\tilde{x}_{\\Lambda}\\)). The standard region corresponds to the case **E2** and the phantom region corresponds to the case **F2**. The intersection of the PS and FFS coincides with the fixed points of the generalized flat de Sitter models (\\(dS_{\\pm}\\)). The flat expanding (contracting) de Sitter model is the generic future attractor (repeller). The open models in the standard matter region (\\(x>\\tilde{x}_{\\Lambda}\\)) evolve from a standard \"Big Bang\" to a flat expanding de Sitter phase. The closed models in this region evolve from a contracting flat de Sitter phase, reach a minimum \\(a\\) (maximum \\(x\\)), bounce and then evolve to an expanding flat de Sitter phase. These models represent standard bounce models with asymptotic de Sitter behavior. The open models in the phantom region (\\(x<\\tilde{x}_{\\Lambda}\\)) are all non-physical. The flat and closed models in this region represent models exhibiting phantom bounce behavior which avoid the \"Big Rip\" and instead evolve to an expanding flat de Sitter phase. Figure 11: The phase space for the low energy regime system with \\(\\epsilon_{p}=-1\\) and \\(\\alpha<-1\\). The entire region corresponds to the case **D2**. Figure 12: The phase space for the low energy regime system with \\(\\epsilon_{p}=-1\\) and \\(-1\\leq\\alpha<-1/3\\). The upper (lower) region corresponds to the case **E2** (**F2**). Figure 13: The phase space for the low energy regime system with \\(\\epsilon_{p}=-1\\) and \\(-1/3\\leq\\alpha\\). The upper (lower) region corresponds to the case **E2** (**F2**). #### iv.2.3 The \\(-1/3\\leq\\alpha\\) sub-case We now consider the system in the parameter range \\(-1/3\\leq\\alpha\\); the phase space is shown in Fig. 13. 
The PS (\\(\\tilde{x}_{\\Lambda}=(\\alpha+1)^{-1}\\)), FFS (\\(y^{2}=x/3\\)) and generalized flat de Sitter points (\\(dS_{\\pm}\\)) still remain. The flat expanding (contracting) de Sitter model is the generic future attractor (repeller). The innermost black curve is the closed Friedmann separatrix (CFS) and coincides with the generalized Einstein static fixed point (\\(E\\)), which has saddle stability. The CFS is given by: \\[y^{2}=\\frac{x}{3}-D\\left[\\frac{(\\alpha+1)x-1}{2}\\right]^{\\frac{2}{3(\\alpha+1)}}. \\tag{51}\\] The constant \\(D\\) is a constant of integration and can be fixed by the location of the fixed point \\(E\\). The constant is given in terms of \\(\\alpha\\) and has the form: \\[D=(3\\alpha+1)^{-\\frac{(3\\alpha+1)}{3(\\alpha+1)}}\\,. \\tag{52}\\] The region below the PS (\\(x<\\tilde{x}_{\\Lambda}\\)) remains qualitatively the same. The open models in the standard matter region (\\(x>\\tilde{x}_{\\Lambda}\\)) all evolve from a \"Big Bang\" to an expanding flat de Sitter phase. The trajectories between the FFS and the CFS also evolve from a \"Big Bang\" to a generalized expanding flat de Sitter model with the possibility of entering a loitering phase. The models inside the CFS can behave in one of two ways. The trajectories above the generalized Einstein static point represent turn-around models which evolve from a \"Big Bang\", reach a maximum \\(a\\) (minimum \\(x\\)) and then re-collapse to a \"Big Crunch\". The trajectories below evolve from a contracting de Sitter phase to an expanding de Sitter phase and represent bounce models. ## V The full system ### The dimensionless dynamical system We now consider the system of equations with the full quadratic EoS; this can be simplified in a similar fashion to the previous case by introducing a new set of variables: \\[x=\\frac{\\rho}{|\\rho_{c}|}\\,,\\ \\ y=\\frac{H}{\\sqrt{|\\rho_{c}|}}\\,,\\ \\ \\eta=\\sqrt{|\\rho_{c}|}\\,t\\,,\\ \\ \\nu=\\frac{P_{o}}{|\\rho_{c}|}\\,. \\tag{53}\\] The system of equations then becomes: \\[y^{2}=\\frac{x}{3}-\\frac{K}{|\\rho_{c}|a^{2}}, \\tag{54}\\] \\[y^{\\prime}=-y^{2}-\\frac{1}{6}\\left(3\\nu+(3\\alpha+1)x+3\\epsilon x^{2}\\right), \\tag{55}\\] \\[x^{\\prime}=-3y\\left(\\nu+(\\alpha+1)x+\\epsilon x^{2}\\right). \\tag{56}\\] The parameter \\(\\epsilon\\) denotes the sign of the quadratic term, \\(\\epsilon\\in\\{-1,1\\}\\). The parameter \\(\\nu\\) is the normalized constant pressure term. The primes denote differentiation with respect to the new normalized time variable \\(\\eta\\), and only the physical region of the phase space is considered (\\(x\\geq 0\\)). The fixed points and their existence conditions are given in Table 7. The phase space undergoes a topological change for special values of the \\(\\nu\\) parameter; these values can be expressed in terms of \\(\\alpha\\) and are: \\[\\nu_{1}=\\frac{(3\\alpha+1)^{2}}{36},\\qquad\\nu_{2}=\\frac{(\\alpha+1)^{2}}{4}. \\tag{57}\\] As in the previous case, by existence we mean \\(x\\geq 0\\) and \\(x,y\\in\\mathbb{R}\\). The general eigenvalues derived from the linear stability analysis are given in Table 9. The linear stability character of the fixed points is given in Table 8. The system has six fixed points, and the sign of \\(\\epsilon\\) no longer affects the linear stability character of a fixed point (but changes its position in the \\(x-y\\) plane). The first fixed point \\(M\\) represents a Minkowski model and is only present if \\(\\nu=0\\); its linear stability character is undefined. 
The second fixed point \\(E_{1}\\) represents an Einstein static model and has the linear stability character of a saddle. The third fixed point \\(E_{2}\\) represents an Einstein static model with the linear stability character of a center. In general, this fixed point is surrounded by a set of closed concentric loops representing oscillating models. The next pair of fixed points \\(dS_{1,\\pm}\\) represents a set of generalized flat de Sitter models; the expanding (contracting) model has attractor (repeller) stability. The next pair of fixed points \\(dS_{2,\\pm}\\) also represents a set of generalized flat de Sitter models, but now they have saddle stability. The separatrix for open Friedmann models (OFS) is only present if \\(\\nu=0\\). The parabola \\(y^{2}=x/3\\) (FFS) still separates the open and closed models. The separatrix for the closed Friedmann models (CFS) is present for a narrow range of the parameters and always coincides with the fixed points representing the generalized Einstein static model, \\(E_{1}\\). The fluid permits two possible effective cosmological constant points, given by: \\[x_{\\Lambda,1}=\\frac{\\rho_{\\Lambda,1}}{|\\rho_{c}|}=-\\frac{(\\alpha+1)}{2\\epsilon}+\\frac{\\sqrt{\\delta}}{2\\epsilon}, \\tag{58}\\] \\[x_{\\Lambda,2}=\\frac{\\rho_{\\Lambda,2}}{|\\rho_{c}|}=-\\frac{(\\alpha+1)}{2\\epsilon}-\\frac{\\sqrt{\\delta}}{2\\epsilon}, \\tag{59}\\] where \\(\\delta=(\\alpha+1)^{2}-4\\epsilon\\nu\\). There is also a separatrix associated with each of the effective cosmological constant points, which divides the regions of phantom and non-phantom behavior. These separatrices will be referred to as phantom separatrices (\\(PS_{i}\\), corresponding to the lines \\(x=x_{\\Lambda,i}\\)), with the appropriate subscript. For special choices of parameters the separatrices coincide. The discussion of the system will be split into the two categories \\(\\epsilon=+1\\) and \\(\\epsilon=-1\\). ### The \\(\\epsilon=+1\\) case We first consider the system when we have a positive quadratic energy density term (\\(\\epsilon=+1\\)). The dynamical system can be further sub-divided into sub-cases with different values of the parameters \\(\\alpha\\) and \\(\\nu\\). The various sub-cases have been highlighted in Table 10. The majority of sub-cases result in a phase space diagram which is qualitatively similar to cases discussed in previous sections. That is, the qualitative behavior of trajectories is the same even though the functional form of \\(\\rho(a)\\) is different. The figure numbers not in bold (standard text) indicate choices of variables for which the phase space is qualitatively similar to a previous case, with the following differences: * The regions which corresponded to different types of behavior of the fluid now change (replaced by new \\(\\rho(a)\\) behavior): * The case \\(\\mathbf{D1}\\)\\(\\rightarrow\\)\\(\\mathbf{G1}\\), * The case \\(\\mathbf{E2}\\)\\(\\rightarrow\\)\\(\\mathbf{I3}\\), * The case \\(\\mathbf{F2}\\)\\(\\rightarrow\\)\\(\\mathbf{I2}\\), * The Type I singularities are now replaced by Type III singularities. There is a narrow range of the parameters for which the state space is qualitatively different. The figure numbers given in bold in Table 10 indicate the choices of variables for which the state space is qualitatively different to previously discussed cases. We will now discuss the four sub-cases which are different to those discussed in previous sections. 
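Before turning to the individual sub-cases, the following minimal sketch encodes the evolution equations (55)-(56), the special values \\(\\nu_{1},\\nu_{2}\\) of Eq. (57) and the effective cosmological constants of Eqs. (58)-(59), and integrates one example trajectory. The parameter choices (\\(\\epsilon=+1\\), \\(\\alpha=-2\\), \\(\\nu=0.05\\), so that \\(0<\\nu<\\nu_{2}\\)) and the initial condition are illustrative assumptions, not values taken from the text; NumPy and SciPy are assumed to be available.

```python
# Minimal sketch of the full dimensionless system, Eqs. (55)-(56), with the special
# parameter values of Eq. (57) and the effective cosmological constants of
# Eqs. (58)-(59).  The values of eps, alpha, nu and the initial condition are
# illustrative only (eps = +1, alpha < -1, 0 < nu < nu_2).
import numpy as np
from scipy.integrate import solve_ivp

eps, alpha, nu = 1.0, -2.0, 0.05

nu1 = (3 * alpha + 1) ** 2 / 36.0                      # Eq. (57)
nu2 = (alpha + 1) ** 2 / 4.0
delta = (alpha + 1) ** 2 - 4 * eps * nu
xL1 = (-(alpha + 1) + np.sqrt(delta)) / (2 * eps)      # Eq. (58)
xL2 = (-(alpha + 1) - np.sqrt(delta)) / (2 * eps)      # Eq. (59)

# The bracket of Eq. (56) vanishes on the lines x = x_{Lambda,i}: the energy density
# is constant there, so these lines act as effective cosmological constants.
for xL in (xL1, xL2):
    print(xL, nu + (alpha + 1) * xL + eps * xL ** 2)   # second number should be ~0

def rhs(eta, state):
    x, y = state
    dy = -y ** 2 - (3 * nu + (3 * alpha + 1) * x + 3 * eps * x ** 2) / 6.0  # Eq. (55)
    dx = -3 * y * (nu + (alpha + 1) * x + eps * x ** 2)                     # Eq. (56)
    return [dx, dy]

# One expanding trajectory started between the two phantom separatrices; per the
# stability analysis above it should approach the expanding flat de Sitter attractor
# dS_{1,+}, i.e. (x, y) -> (x_{Lambda,1}, sqrt(x_{Lambda,1}/3)).
sol = solve_ivp(rhs, (0.0, 10.0), [0.5 * xL1, 0.1], rtol=1e-9, atol=1e-12)
print(sol.y[:, -1], xL1, np.sqrt(xL1 / 3.0))
```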
#### iv.2.1 The \\(\\alpha<-1\\), \\(\ u=\ u_{1}\\) sub-case The phase space of the system when \\(\\alpha<-1\\) and \\(\ u=\ u_{1}\\) is shown in Fig. 14. As before the black lines represent separatrix, grey lines represent example trajectories and fixed points are represented by black dots. The fluid behaves in a standard manner and corresponds to the case \\(\\mathbf{G1}\\). This choice of parameters results in the two Einstein points (\\(E_{i}\\)) coinciding. The resulting fixed point is highly non-linear and cannot be classified into the standard linear stability categories as in previous cases. The fixed point coincides with the CFS and the parabola is the FFS. The open models are all non-physical as they \\begin{table} \\begin{tabular}{c c c} Name & \\(\\epsilon=\\pm 1\\) & Exceptions \\\\ \\hline \\(M\\) & Undefined & - \\\\ \\(E_{1}\\) & Saddle & \\(36\\epsilon\ u\ eq(3\\alpha+1)^{2}\\) \\\\ \\(E_{2}\\) & Center & \\(36\\epsilon\ u\ eq(3\\alpha+1)^{2}\\) \\\\ \\(dS_{1,+}\\) & Attractor & \\(4\\epsilon\ u\ eq(\\alpha+1)^{2}\\) \\\\ \\(dS_{1,-}\\) & Repeller & \\(4\\epsilon\ u\ eq(\\alpha+1)^{2}\\) \\\\ \\(dS_{2,+}\\) & Saddle & \\(4\\epsilon\ u\ eq(\\alpha+1)^{2}\\) \\\\ \\(dS_{2,-}\\) & Saddle & \\(4\\epsilon\ u\ eq(\\alpha+1)^{2}\\) \\\\ \\end{tabular} \\end{table} Table 8: The linear stability character of the fixed points for the full system. The stability character is only valid for choices of parameters which are consistent with the existence conditions and constraints given below. \\begin{table} \\begin{tabular}{c c c c c} Name & \\(x\\) & \\(y\\) & Existence (\\(\\epsilon=+1\\)) & Existence (\\(\\epsilon=-1\\)) \\\\ \\hline \\(M\\) & \\(0\\) & \\(0\\) & \\(\ u=0\\) & \\(\ u=0\\) \\\\ \\(E_{1}\\) & \\(-\\frac{(3\\alpha+1)}{6\\epsilon}+\\frac{\\sqrt{(3\\alpha+1)^{2}-36\\epsilon\ u}}{6\\epsilon}\\) & \\(0\\) & \\(\ u\\leq\ u_{1}\\), \\(\\alpha<-1/3\\) & \\(-\ u_{1}<\ u<0\\), \\(\\alpha>-1/3\\) \\\\ & & \\(\ u<0\\), \\(\\alpha>-1/3\\) & \\\\ \\(E_{2}\\) & \\(-\\frac{(3\\alpha+1)}{6\\epsilon}-\\frac{\\sqrt{(3\\alpha+1)^{2}-36\\epsilon\ u}}{6\\epsilon}\\) & \\(0\\) & \\(0<\ u<\ u_{1}\\), \\(\\alpha<-1/3\\) & \\(\ u>0\\), \\(\\alpha<-1/3\\) & \\(\ u\\geq-\ u_{1}\\), \\(\\alpha>-1/3\\) \\\\ & & & & \\\\ \\(dS_{1,\\pm}\\) & \\(-\\frac{(\\alpha+1)}{2\\epsilon}+\\frac{\\sqrt{(\\alpha+1)^{2}-4\\epsilon\ u}}{2\\epsilon}\\) & \\(\\pm\\left(-\\frac{(\\alpha+1)}{6\\epsilon}+\\frac{\\sqrt{(\\alpha+1)^{2}-4\\epsilon \ u}}{6\\epsilon}\\right)^{\\frac{1}{2}}\\) & \\(\ u\\leq\ u_{2}\\), \\(\\alpha<-1\\) & \\(-\ u_{2}<\ u<0\\), \\(\\alpha>-1\\) \\\\ & & & \\(\ u<0\\), \\(\\alpha>-1\\) & \\\\ \\(dS_{2,\\pm}\\) & \\(-\\frac{(\\alpha+1)}{2\\epsilon}-\\frac{\\sqrt{(\\alpha+1)^{2}-4\\epsilon\ u}}{2\\epsilon}\\) & \\(\\pm\\left(-\\frac{(\\alpha+1)}{6\\epsilon}-\\frac{\\sqrt{(\\alpha+1)^{2}-4\\epsilon \ u}}{6\\epsilon}\\right)^{\\frac{1}{2}}\\) & \\(0<\ u<\ u_{2}\\), \\(\\alpha<-1\\) & \\(\ u>0\\), \\(\\alpha<-1\\) & \\(\ u>0\\), \\(\\alpha<-1\\) \\\\ & & & & \\(-\ u_{2}\\leq\ u\\), \\(\\alpha>-1\\) \\\\ \\end{tabular} \\end{table} Table 7: The locations of the fixed points of the full system. The existence conditions are also given, that is \\(x,y,\\in\\mathbb{R}\\) and \\(x\\geq 0\\). To simplify the expressions special values of \\(\ u\\) are used which can be expressed in terms of \\(\\alpha\\), these values are \\(\ u_{1}=\\frac{(3\\alpha+1)^{2}}{36}\\) and \\(\ u_{2}=\\frac{(\\alpha+1)^{2}}{4}\\). evolve to the \\(x<0\\) region of the phase space. 
The models between the FFS and the CFS represent turn-around models which evolve from a Type III singularity7, evolve to a maximum \\(a\\) (minimum \\(x\\)) and then re-collapse. The trajectories above the CFS also represent similar turn-around models. Footnote 7: As with the case of the high energy EoS, the Type III singularity is a generic feature of the fully quadratic EoS. #### iv.2.2 The \\(\\alpha<-1\\), \\(\ u_{2}<\ u<\ u_{1}\\) sub-case The phase space of the system when \\(\\alpha<-1\\) and \\(\ u_{2}<\ u<\ u_{1}\\) is shown in Fig. 15. The fluid behaves in a standard manner and corresponds to the case **G1**. The Einstein fixed point of the previous case splits into two individual Einstein fixed points (\\(E_{i}\\)) via bifurcation. The first Einstein fixed point (\\(E_{1}\\)) coincides with the CFS, while the second Einstein fixed point (\\(E_{2}\\)) is located inside the lower region enclosed by the CFS. Only the trajectories above the FFS differ from the previous case. The trajectories between the CFS and FFS still represent turn-around models which evolve from a Type III singularity but may now enter a loitering phase. The trajectories inside the CFS and above the Einstein fixed point (\\(E_{1}\\)), evolve from a Type III singularity, reach a maximum \\(a\\) and then re-collapse to a Type III singular \\begin{table} \\begin{tabular}{c c c c} \\hline \\hline & \\(\\alpha<-1\\) & \\(-1\\leq\\alpha<-1/3\\) & \\(-1/3\\leq\\alpha\\) \\\\ \\hline \\(\ u>\ u_{1}\\) & FIG.10 & FIG.10 & FIG.10 \\\\ \\(\ u=\ u_{1}\\) & **FIG.14** & **FIG.14** & FIG.10 \\\\ \\(\ u_{2}<\ u<\ u_{1}\\) & **FIG.15** & **FIG.15** & FIG.10 \\\\ \\(\ u=\ u_{2}\\) & **FIG.16** & **FIG.15** & FIG.10 \\\\ \\(0<\ u<\ u_{2}\\) & **FIG.17** & **FIG.15** & FIG.10 \\\\ \\(\ u=0\\) & FIG.1 & FIG.3 & FIG.4 \\\\ \\(\ u<0\\) & FIG.13 & FIG.13 & FIG.13 \\\\ \\end{tabular} \\end{table} Table 10: The various sub-cases of the full system when \\(\\epsilon=+1\\). The figure numbers given in bold, indicate the choice of variables for which the state space is qualitatively different to previously discussed cases. 
\\begin{table} \\begin{tabular}{c c c} \\hline \\hline Name & \\(\\lambda_{1}\\) & \\(\\lambda_{2}\\) \\\\ \\hline \\(M\\) & \\(0\\) & \\(0\\) \\\\ \\(E_{1}\\) & \\(+\\left(\\frac{\\gamma_{1}^{2}-\\gamma_{1}\\sqrt{\\gamma_{1}^{2}-36\\epsilon\ u}-36 \\epsilon\ u}{18\\epsilon}\\right)^{\\frac{1}{2}}\\) & \\(-\\left(\\frac{\\gamma_{1}^{2}-\\gamma_{1}\\sqrt{\\gamma_{1}^{2}-36\\epsilon\ u}-36 \\epsilon\ u}{18\\epsilon}\\right)^{\\frac{1}{2}}\\) \\\\ \\(E_{2}\\) & \\(+\\left(\\frac{\\gamma_{1}^{2}+\\gamma_{1}\\sqrt{\\gamma_{1}^{2}-36\\epsilon\ u}-36 \\epsilon\ u}{18\\epsilon}\\right)^{\\frac{1}{2}}\\) & \\(-\\left(\\frac{\\gamma_{1}^{2}+\\gamma_{1}\\sqrt{\\gamma_{1}^{2}-36\\epsilon\ u}-36 \\epsilon\ u}{18\\epsilon}\\right)^{\\frac{1}{2}}\\) \\\\ \\(dS_{1,\\pm}\\) & \\(\\mp\\sqrt{\\frac{\\delta-\\gamma_{2}}{6\\epsilon}}\\left(1+\\frac{3\\delta}{2}\\right)+ \\left(\\frac{6\\delta^{2}(3\\delta-3\\gamma_{2}-4)+8(\\gamma_{2}(3\\delta-1)+\\delta )}{48\\epsilon}\\right)^{\\frac{1}{2}}\\) & \\(\\mp\\sqrt{\\frac{\\delta-\\gamma_{2}}{6\\epsilon}}\\left(1+\\frac{3\\delta}{2}\\right)- \\left(\\frac{6\\delta^{2}(3\\delta-3\\gamma_{2}-4)+8(\\gamma_{2}(3\\delta-1)+\\delta )}{48\\epsilon}\\right)^{\\frac{1}{2}}\\) \\\\ \\(dS_{2,\\pm}\\) & \\(\\pm\\sqrt{\\frac{-(\\delta+\\gamma_{2})}{6\\epsilon}}\\left(\\frac{3\\delta}{2}-1\\right) +\\left(-\\frac{6\\delta^{2}(3\\delta+3\\gamma_{2}+4)+8(\\gamma_{2}(3\\delta+1)+ \\delta)}{48\\epsilon}\\right)^{\\frac{1}{2}}\\) & \\(\\pm\\sqrt{\\frac{-(\\delta+\\gamma_{2})}{6\\epsilon}}\\left(\\frac{3\\delta}{2}-1\\right) -\\left(-\\frac{6\\delta^{2}(3\\delta+3\\gamma_{2}+4)+8(\\gamma_{2}(3\\delta+1)+ \\delta)}{48\\epsilon}\\right)^{\\frac{1}{2}}\\) \\\\ \\end{tabular} \\end{table} Table 9: The various sub-cases of the full system when \\(\\epsilon=+1\\). The figure numbers given in bold, indicate the choice of variables for which the state space is qualitatively different to previously discussed cases. ity. The trajectories below \\(E_{1}\\) represent closed oscillating models, the closed loops are centered on the second Einstein fixed point (\\(E_{2}\\)). #### iv.2.3 The \\(\\alpha<-1\\), \\(\ u=\ u_{2}\\) sub-case The phase space of the system when \\(\\alpha<-1\\) and \\(\ u=\ u_{2}\\) is shown in Fig. 16. The fluid behaves in a standard manner in both regions and the upper (lower) region corresponds to the case **H2** (**H1**). This choice of parameters results in the two sets of generalized de Sitter points (\\(dS_{i,\\pm}\\)) coinciding. The fixed points coincide with \\(PS_{i}\\) (\\(x=x_{\\Lambda,i}\\)) and the FFS. The resulting fixed points are highly non-linear, the points have shunt stability along the FFS direction and the generalized expanding (contracting) de Sitter point has attractor (repeller) stability along the \\(PS_{i}\\) direction. The two Einstein points (\\(E_{i}\\)) and the CFS are still present. In the \\(x<x_{\\Lambda,i}\\) region, the open models are all non-physical as they evolve to the \\(x<0\\) region and the closed models represent oscillating models which are centered on the Einstein point (\\(E_{2}\\)) with center linear stability. In the \\(x>x_{\\Lambda,i}\\) region, the open models are asymptotic to a Type III singularity in the past and a expanding flat de Sitter phase (\\(dS_{i,+}\\)) in the future. The trajectories between the FFS and the CFS evolve from a Type III singularity to \\(dS_{i,+}\\) with the possibility of entering a loitering phase. 
The models inside the CFS and above the \\(E_{1}\\) point represent turn-around models which asymptotically approach a Type III singularity. The closed models inside the CFS and below the \\(E_{2}\\) point are asymptotic to a contracting de Sitter model phase (\\(dS_{i,-}\\)) in the past and a expanding de Sitter phase (\\(dS_{i,+}\\)) in the future. The generic attractor in the \\(x>x_{\\Lambda,i}\\) region is the \\(dS_{i,+}\\) fixed point. #### iv.2.4 The \\(\\alpha<-1\\), \\(0<\ u<\ u_{2}\\) sub-case The phase space of the system when \\(\\alpha<-1\\) and \\(0<\ u<\ u_{2}\\) is shown in Fig. 17. The upper (lower) horizontal line is the \\(PS_{1}\\) (\\(PS_{2}\\)). The region above \\(PS_{1}\\) corresponds to the case **I3** and is qualitatively similar to the **H2** region in the previous sub-case. The region below \\(PS_{2}\\) corresponds to the case **I1** and is qualitatively similar to the **H1** region in the previous sub-case. The set of generalized flat de Sitter fixed points (\\(dS_{i,\\pm}\\)) of the previous case split into two sets of generalized flat de Sitter fixed points via bifurcation. The upper (lower) set of generalized de Sitter points, \\(dS_{1,\\pm}\\) (\\(dS_{2,\\pm}\\)) have attractor/repeller (saddle) stability. The region between \\(PS_{1}\\) and \\(PS_{2}\\) corresponds to the case **I2** and the fluid behaves in a phantom manner. The open models in this region are asymptotic to open de Sitter models in the past and flat de Sitter models in the future. The closed models in the phantom region represent phantom bounce models which asymptotically approach a expanding (contracting) de Sitter phases in the future (past). Figure 16: The phase space for the full system with \\(\\epsilon=+1\\), \\(\\alpha<-1\\) and \\(\ u=\ u_{2}\\). The upper (lower) region corresponds to the case **H2** (**H1**). ### The \\(\\epsilon=-1\\) case We now consider the system when we have a negative quadratic energy density term (\\(\\epsilon=-1\\)). As before the system can be sub-divided into various sub-cases with different values of parameters of \\(\\alpha\\) and \\(\ u\\). The various sub-cases have been highlighted in Table 11 As before, the figure numbers not in bold (standard text) indicate choices of variable for which the phase space is qualitatively similar to a previous case, with the following differences: * The regions which corresponded to different types of behavior of the fluid now change (replaced by new form of \\(\\rho(a)\\)): * The case **D2**\\(\\rightarrow\\)**G2**, * The case **E1**\\(\\rightarrow\\)**I6**, * The case **F1**\\(\\rightarrow\\)**I5**, * The Type I singularities are now replaced by Type III singularities. There are choices of parameters for which the phase space is different (figure numbers in bold in Table 11) and these four sub-cases will be discussed in the following sections. #### iv.3.1 The \\(\\alpha>-1/3\\), \\(-\ u_{1}<\ u<0\\) sub-case The phase space of the system when \\(\\alpha>-1/3\\) and \\(-\ u_{1}<\ u<0\\) is shown in Fig. 18. The upper (lower) horizontal line at \\(x=x_{\\Lambda,2}\\) (\\(x=x_{\\Lambda,1}\\)) is the \\(PS_{2}\\) (\\(PS_{1}\\)) (they have swapped position with respect to the \\(\\epsilon=+1\\) case). The region above \\(PS_{2}\\) corresponds to the case **I6**, the region below \\(PS_{1}\\) corresponds to the case **I4** and the fluid behaves in a phantom manner in both regions. The region between \\(PS_{1}\\) and \\(PS_{2}\\) corresponds to the case **I5** and the fluid behaves in a standard manner. 
The lower set of generalized de Sitter points (\\(dS_{1,\\pm}\\) - at the intersection of \\(PS_{1}\\) and FFS) have attractor/repeller stability, while the upper set (\\(dS_{2,\\pm}\\) - at the intersection of \\(PS_{2}\\) \\begin{table} \\begin{tabular}{c c c c} & \\(\\alpha<-1\\) & \\(-1\\leq\\alpha<-1/3\\) & \\(-1/3\\leq\\alpha\\) \\\\ \\hline \\(\ u>0\\) & FIG.8 & FIG.8 & FIG.8 \\\\ \\(\ u=0\\) & FIG.5 & FIG.6 & FIG.7 \\\\ \\(-\ u_{1}<\ u<0\\) & FIG.11 & **FIG.20** & **FIG.18** \\\\ \\(\ u=-\ u_{1}\\) & FIG.11 & **FIG.20** & **FIG.19** \\\\ \\(-\ u_{2}<\ u<-\ u_{1}\\) & FIG.11 & **FIG.20** & **FIG.20** \\\\ \\(\ u=-\ u_{2}\\) & FIG.11 & **FIG.21** & **FIG.21** \\\\ \\(\ u<-\ u_{2}\\) & FIG.11 & FIG.11 & FIG.11 \\\\ \\end{tabular} \\end{table} Table 11: The various sub-cases of the \\(\\epsilon=-1\\) full system. The figure numbers given in bold, indicate the choice of variables for which the phase space is qualitatively different to previous cases. Figure 17: The phase space for the full system with \\(\\epsilon=+1\\), \\(\\alpha<-1\\) and \\(0<\ u<\ u_{2}\\). The upper, middle and lower regions correspond to the cases **I3**, **I2** and **I1** respectively. and FFS) have saddle stability. The CFS is located in between \\(PS_{1}\\) and \\(PS_{2}\\) and coincides with the Einstein point (\\(E_{1}\\)). The open models in the \\(x<x_{\\Lambda,1}\\) region (the case **I4**) are all non-physical as they evolve from the \\(x<0\\) region of the phase space. The closed models in this region represent phantom bounce models which evolve from a contracting de Sitter phase (\\(dS_{1,-}\\)) to a expanding de Sitter phase (\\(dS_{1,+}\\)). The open models in the standard region (\\(x_{\\Lambda,1}<x<x_{\\Lambda,2}\\) corresponding to the case **I5** ) are asymptotic to a generalized open de Sitter model in the past and generalized flat de Sitter model in the future (the future attractor has lower \\(x\\) and \\(y\\)). The models between the CFS and the FFS in this region represent bounce models which evolve from a contracting de Sitter phase to a expanding de Sitter phase with the possibility of entering a loitering phase. The models enclosed by the CFS can be split into two groups. The models above the fixed point, \\(E_{1}\\) represent oscillating models, the closed loops are centered on the fixed point \\(E_{2}\\). The models below the fixed point, \\(E_{1}\\) represent bounce models which evolve from \\(dS_{1,-}\\) to \\(dS_{1,+}\\). In the \\(x>x_{\\Lambda,2}\\) region (the case **I6**) the open models are asymptotic to generalized open de Sitter models in the past and a Type III singularity in the future. The closed models in this region represent phantom bounce models which evolve from a Type III singularity, reach a minimum \\(a\\) (minimum \\(x\\)) and then evolve to a Type III singularity. The generalized expanding flat de Sitter model, \\(dS_{1,+}\\) (Type III singularity) is the generic future attractor in the region \\(x<x_{\\Lambda,2}\\) (\\(x>x_{\\Lambda,2}\\)). The trajectories in the regions, \\(x<x_{\\Lambda,1}\\) and \\(x>x_{\\Lambda,2}\\) remain qualitatively similar in the following two cases (Fig.19, 20). #### iv.2.2 The \\(\\alpha>-1/3\\), \\(\ u=-\ u_{1}\\) sub-case The phase space of the system when \\(\\alpha>-1/3\\) and \\(\ u=-\ u_{1}\\) is shown in Fig. 19. The phase space is equivalent to the previous sub-case, except for the region \\(x_{\\Lambda,1}<x<x_{\\Lambda,2}\\) (the case **I5**). 
The open models in this region are still asymptotic to generalized open (flat) de Sitter models in the past (future). The behavior of the closed models has now changed, there are no longer trajectories representing oscillating models. The two generalized Einstein fixed points (\\(E_{i}\\)) have now coalesced to form one fixed point via bifurcation. The closed models above \\(E_{i}\\) represent bounce models which evolve from \\(dS_{1,-}\\) to \\(dS_{1,+}\\), with the possibility of entering a loitering phase. The closed models below \\(E_{i}\\) represent bounce models which evolve from \\(dS_{1,-}\\) to \\(dS_{1,+}\\) without entering a loitering phase. #### iv.2.3 The \\(\\alpha>-1/3\\), \\(-\ u_{2}<\ u<-\ u_{1}\\) sub-case The phase space of the system when \\(\\alpha>-1/3\\) and \\(-\ u_{2}<\ u<-\ u_{1}\\) is shown in Fig. 20. The phase space is qualitatively similar to the previous sub-cases except for the \\(x_{\\Lambda,1}<x<x_{\\Lambda,2}\\) region. There are no longer Figure 19: The phase space for the full system with \\(\\epsilon=-1\\), \\(\\alpha>-1/3\\) and \\(\ u=-\ u_{1}\\). The upper, middle and lower regions correspond to the cases **I6**, **I5** and **I4** respectively. any fixed points representing generalized Einstein static models and the CFS is no longer present. The open models in the region behave as in previous sub-cases. The closed models in the region represent bounce model, which evolve to a expanding (collapsing) de Sitter phase in the future (past) without the possibility of entering a loitering phase. #### v.3.4 The \\(\\alpha>-1/3\\), \\(\ u=-\ u_{2}\\) sub-case The next case is the phase space of the system when \\(\\alpha>-1/3\\) and \\(\ u=-\ u_{2}\\) and is shown in Fig. 21. The fluid behaves in a phantom manner in both regions and the upper (lower) region corresponds to the case **H4** (**H3**). The two sets of generalized de Sitter points (\\(dS_{i,\\pm}\\)) have now coalesced into a single set of generalized de Sitter points (\\(dS_{\\pm}\\)) which are located at the intersection of the FFS and the \\(PS_{i}\\) which have also coalesced to form a single separatrix (\\(x_{\\Lambda,1}=x_{\\Lambda,2}\\)). The resulting fixed points are highly non-linear, the points have shunt stability along the FFS direction and the generalized expanding (contracting) de Sitter point has attractor (repeller) stability along the \\(PS_{i}\\) direction. The Type III singularity is the generic attractor in the upper region (\\(x>x_{\\Lambda,i}\\)) and the \\(dS_{i,+}\\) is the generic attractor in the lower region (\\(x<x_{\\Lambda,i}\\)). ## VI Discussion and Conclusions In this paper we have systematically studied the dynamics of homogeneous and isotropic cosmological models containing a fluid with a quadratic EoS. This has it's own specific interest (see Section I for a variety of motivations) and serves as a simple example of more general EoS's. It can also be taken to represent the truncated Taylor expansion of any barotropic EoS, and as such it serves (with the right choice of parameters) as a useful phenomenological model for dark energy, or even UDM. Indeed, we have shown the dynamics to be very different and much richer than the standard linear EoS case, finding that an almost generic feature of the evolution is the existence of an accelerated phase, most often asymptotically de Sitter, thanks to the appearance of an _effective cosmological constant_. 
Of course, properly building physical cosmological models would require considering the quadratic EoS for dark energy or UDM together with standard matter and radiation. Our analysis was instead aimed at deriving and classifying the large variety of different dynamical effects that the quadratic EoS fluid has when it is the dominant component. In this respect, it should be noticed that a positive quadratic term in the EoS allows, in the presence of another fluid such as radiation, equi-density between the two fluids to occur twice, i.e., the quadratic EoS fluid can be dominant at early and late times, and subdominant in an intermediate era. In Section II we have made some general remarks, mostly based on conservation of energy only and as such valid independently of any specific theory of gravity. We have also given the various possible functional forms of the energy density as a function of the scale factor, \\(\\rho(a)\\), and listed the many subcases, grouped in three main cases, what we call: _i)_ the high energy models (no constant \\(P_{o}\\) term); _ii)_ the low energy affine EoS with no quadratic term; _iii)_ the complete quadratic EoS. The quadratic term in the EoS affects the high energy behavior as expected but can additionally affect the dynamics at relatively low energies. First, in Section III, we have concentrated on the high energy models. The specific choice of parameters fixes the behavior of the fluid: it can behave in a phantom or standard manner. In the case of phantom behavior, \\(\\rho\\) can tend to zero at early times and either tend to an effective cosmological constant (**C1**) or a Type III singularity (**A2**) at late times. Alternatively, \\(\\rho\\) can also tend to an effective cosmological constant in the past (**B2**) and a Type III singularity at late times. When the fluid behaves in a standard manner, it can tend to a Type III singularity at early times, with \\(\\rho\\) either tending to zero (**A1**) or to an effective cosmological constant (**B1**) at late times. The fluid can also behave as an effective cosmological constant at early times with \\(\\rho\\) decaying away at late times (**C2**). The effective cosmological constant allows for the existence of generalized Einstein static (\\(E\\)) and flat de Sitter fixed (\\(dS_{\\pm}\\)) points which modify the late time behavior. The main new feature is the existence of models which evolve from a Type III singularity and asymptotically approach a flat de Sitter model (\\(dS_{+}\\)). Figure 21: The phase space for the full system with \\(\\epsilon=-1\\), \\(\\alpha>-1/3\\) and \\(\\nu=-\\nu_{2}\\) (additionally when \\(-1<\\alpha<-1/3\\) and \\(\\nu=-\\nu_{2}\\)). The upper (lower) region corresponds to the case **H4** (**H3**). Of specific interest are the closed models of this type, which can also evolve through an intermediate loitering phase. Neglecting the quadratic term, in Section IV we have considered the low energy models with affine EoS. As expected, the constant term in the quadratic EoS affects the relatively low energy behavior. It can result in a variety of qualitatively different dynamics with respect to those of the linear EoS case. Again, the fluid can have a phantom or standard behavior. When the fluid behaves in a phantom manner, \\(\\rho\\) can tend to an effective cosmological constant (**F2**), or can tend to a Type I (\"Big Rip\") singularity (**D2**) at late times. Alternatively, \\(\\rho\\) can also tend to an effective cosmological constant in the past and a Big Rip in the future (**E1**). 
When the fluid behaves in a standard manner, we recover the linear EoS at early times and \\(\\rho\\) can either tend to zero (**D1**) or to an effective cosmological constant (**E2**) at late times. The fluid can also behave as an effective cosmological constant at early times, with \\(\\rho\\) decaying away at late times (**F1**). The effective cosmological constant allows for the existence of new fixed points(\\(E\\) and \\(dS_{\\pm}\\)). Comparing with standard linear EoS cosmology, the most interesting differences are new closed models which oscillate indefinitely and new closed models which exhibit phantom behavior which do not terminate in a \"Big Rip\", but asymptotically approach an expanding flat de Sitter model (flat and closed models where the fluid behaves as case **F2**). When we study the dynamics of the system with the complete quadratic EoS, Section V, we see the appearance of new fixed points representing generalized Einstein and de Sitter models which are not present in the high/low energy systems. The various models of the simplified systems are present in the full system (but with differing \\(\\rho(a)\\)), but there are also models with qualitatively new behavior. As with the previous cases, in the case of phantom behavior, \\(\\rho\\) can tend to zero at early times and either tend to an effective cosmological constant (**H3** and **I4**) or a Type III singularity (**G2**) at late times. Alternatively \\(\\rho\\) can also tend to an effective cosmological constant in the past (**H4** and **I6**) and a Type III singularity at late times. Finally, in the phantom case \\(\\rho\\) can also tend to an effective cosmological constant both in the past and future (**I2**). In the case of standard behavior the fluid can tend to a Type III singularity at early times, with \\(\\rho\\) either tending to zero (**G1**) or to an effective cosmological constant (**H2** and **I3**) at late times. The fluid can also behave as an effective cosmological constant at early times with \\(\\rho\\) decaying away at late times (**H1** and **I1**). Finally, in the standard fluid case \\(\\rho\\) can also tend to an effective cosmological constant both in the past and future (**I5**). There are models which evolve from a Type III singularity, reach a maximum \\(a\\) (minimum \\(x\\)) and then evolve to Type III singularity. These also enter a loitering phase before and after the turn around point. We also see bounce models which enter a loitering phase and asymptotically tend to generalized expanding (contracting) de Sitter models at late (early) times. Of specific interest are models which evolve from a Type III singularity as opposed to the standard \"Big Bang\" (**A1, B1**). The simplest models of this type correspond to the high energy EoS with a positive quadratic term (is possible to recover standard behavior at late times). For these models the positive quadratic energy density term has the potential to force the initial singularity to be isotropic. The effects of such a fluid on anisotropic Bianchi I and V models is investigated in Paper II [48]. This is achieved by carrying out a dynamical systems analysis of these models. Additionally, using a linearized perturbative treatment we study the behavior of inhomogeneous and anisotropic perturbations at the singularity. 
The singularity is itself represented by an isotropic model and, If the perturbations of the latter decay in the past, this model represents the local past attractor in the larger phase space of inhomogeneous and anisotropic models (within the validity of the perturbative treatment). This would mean that in inhomogeneous anisotropic models with a positive non-linear term (at least quadratic) in the EoS isotropy is a natural outcome of _generic initial conditions_, unlike in the standard linear EoS case where generic cosmological models are, in GR, highly anisotropic in the past. ###### Acknowledgements. KNA is supported by PPARC (UK). MB is partly supported by a visiting grant by MIUR (Italy). The authors would like to thank Chris Clarkson, Mariam Bouhmadi-Lopez and Roy Maartens for useful comments and discussions. ## References * (1) C.L. Bennett et al., Astrophys. J. Suppl. **148** 1 (2003) ; L. Page et al., Astrophys. J. Suppl. **148** 233 (2003). * (2) D.N. Spergel et al., Astrophys. J. Suppl. **148**, 175 (2003). * (3) R. Scranton et al., astro-ph/0307335; M. Tegmark et al., Phys. Rev. D **69**, 103501 (2004). * (4) A.G. Riess et al., Astron. J. **116**, 1009 (1998); S. Perlmutter et al., Astrophys. J. **517**, 565 (1999); A.G. Riess et al., Astrophys. J. **607**, 665 (2004). * (5) W. L. Freedman and M. S. Turner, Rev. Mod. Phys. **75**, 1433 (2003). * (6) S. Weinberg, Rev. Mod. Phys. **61**, 1 (1989). * (7) A. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. B **511**, 265-268 (2001). * (8) M.C. Bento, O. Bertolami and A.A. Sen, Phys. Rev. D **66**, 043507 (2002); M.C. Bento, O. Bertolami and A.A. Sen, Phys. Rev. D **67**, 063003 (2003); L. Amendola, F. Finelli, G. Burigana and D. Carturan, JCAP **0307**, 005 (2003); M.C. Bento, O. Bertolami and A.A. Sen, Phys. Lett. B **575**, 172 (2003); H.B. Sandvik, M. Tegmark, M. Zaldarriaga and I. Waga, Phys. Rev. D **69**, 123524 (2004). * (9) G.M. Kremer, [gr-qc/0401060] (2004); S. Capozzielllo, V. F. Cardone, S. Carloni, S. De Martino, M. Falanga and M. Bruni, JCAP **04**, 005 (2005). * (10) V. F. Cardone, C. Tortora, A. Troisi and S. Capozziello, [astro-ph/0511528] (2005). * (11) R. Holman and S. Naidu, [astro-ph/0408102] (2004). * (12) E. Babichev, V. Dokuchaev and Y. Eroshenko, Class. Quant. Grav. **22**, 143-154 (2005). * (13) R.R. Caldwell, Phys. Lett. B **545**, 23 (2002). * (14) V. Sahni, in _The Physics of the Early Universe_, ed. E. Papantonopoulos (Springer 2005). * (15) B. A. Bassett, P. S. Corasaniti and M. Kunz, Astrophys. J. **617**, L1-L4 (2004). * (16) W. J. Percival, [astro-ph/0508156] (2005). * (17) T. Shiromizu, K. Maeda and M. Sasaki, Phys. Rev. D **62**, 024012 (2000). * (18) H. A. Bridgman, K. Malik and D. Wands, Phys. Rev. D **65**, 043502 (2002). * (19) D. Langois, Astrophys. Space Sci. **283**, 469-479 (2003). * (20) R. Maartens, 2004, Living. Rev. Rel. **7**, 7 (2004). * (21) R. Maartens, V. Sahni and T.D. Saini, Phys. Rev. D **63**, 063509 (2001). * (22) J.K. Erickson, D.H. Wesley, P.J. Steinhardt and N. Turok, Phys. Rev. D **69**, 063514 (2004). * (23) A. Campos and C. Sopuerta, Phys. Rev. D **63**, 104012 (2001). * (24) A. Coley, Phys. Rev. D **66**, 023512 (2002). * (25) R. J. van den Hoogen, A. A. Coley and Y. He, Phys. Rev. D **68**, 023502 (2003). * (26) A. A. Coley, Y. He and W. C. Lim, Class. Quant. Grav. **21**, 1311 (2004). * (27) P.K.S Dunsby, N. Goheer, M. Bruni and A. Coley, 2004, Phys. Rev. D **69**, 101303(R) (2004). * (28) N. Goheer, P.K.S Dunsby, A. Coley and M. Bruni, Phys. Rev. 
D **70**, 123517 (2004). * (29) L. D. Landau and E. M. Lifshitz, _The Classical Theory of Fields_ (Pergamon, Oxford, 1975). * (30) K. Vandersloot, Phys. Rev. D **71**, 103506 (2005). * (31) D. Giannakis and W. Hu, [astro-ph/0501423] (2005). * (32) R.J. Scherrer, Phys. Rev. Lett. **93**, 011301 (2004). * (33) A. Diez-Tejedor and A. Feinstein, [gr-qc/0501101] (2005). * (34) N. Arkani-Hamed, H. Cheng, M.A. Luty, S. Mukohyama and T. Wiseman, [hep-ph/0507120] (2005). * (35) M. Visser, Class. Quantum Grav. **21**, 2603 (2004). * (36) J. Wainwright and G. F. R. Ellis, _Dynamical systems in cosmology_ (Cambridge University Press, Cambridge, 1997). * (37) D. K. Arrowsmith and C. M. Place, _Dynamical systems: differential equations, maps and chaotic behaviour_ (Chapman and Hall, London, 1992). * (38) S. Nojiri, S.D. Odintsov and S. Tsujikawa, Phys. Rev. D **71**, 063004 (2005). * (39) M. Bouhmadi-Lopez and J. A. Jimenez Madrid, JCAP **05**, 005 (2005). * (40) G. F. R. Ellis, Relativistic Cosmology, in _General Relativity and Cosmology_, Proceedings of the XLVII Enrico Fermi Summer School, Ed. R. K. Sachs (Academic Press 1971). * (41) S. Carroll, _Spacetime and Geometry: Introduction to General Relativity_ (Addison Wesley, Boston, 2003). * (42) M. Visser, Science **276**, 88 (1997); M. Visser, Phys. Rev. D **56**, 7578 (1997). * (43) F. Lucchin, S. Matarrese, Phys. Rev. Lett. B, **164**, 282 (1985). * (44) H. Stefancic, Phys. Rev. D **71**, 084024 (2005); H. Stefancic, Phys. Rev. D **71**, 124036 (2005). * (45) J. D. Barrow, Class. Quant. Grav. **21**, 5619 (2004); J. D. Barrow,:Class. Quant. Grav. **21**, L79 (2004). * (46) R.R. Caldwell, M. Kamionkowski and N.N. Weinberg, Phys. Rev. Lett. **91**, 071301 (2003). * (47) V. Sahni and Y. Shtanov, 2005, Phys. Rev. D **71**, 084018 (2005). * (48) K. Ananda, M. Bruni, in preparation.
We investigate the general relativistic dynamics of Robertson-Walker models with a non-linear equation of state (EoS), focusing on the quadratic case \\(P=P_{0}+\\alpha\\rho+\\beta\\rho^{2}\\). This may be taken to represent the Taylor expansion of any arbitrary barotropic EoS, \\(P(\\rho)\\). With the right combination of \\(P_{0}\\), \\(\\alpha\\) and \\(\\beta\\), it serves as a simple phenomenological model for dark energy, or even unified dark matter. Indeed we show that this simple model for the EoS can produce a large variety of qualitatively different dynamical behaviors that we classify using dynamical systems theory. An almost universal feature is that accelerated expansion phases are mostly natural for these non-linear EoS's. These are often asymptotically de Sitter thanks to the appearance of an _effective cosmological constant_. Other interesting possibilities that arise from the quadratic EoS are closed models that can oscillate with no singularity, models that bounce between infinite contraction/expansion and models which evolve from a phantom phase, asymptotically approaching a de Sitter phase instead of evolving to a \"Big Rip\". In a second paper we investigate the effects of the quadratic EoS in inhomogeneous and anisotropic models, focusing in particular on singularities. pacs: 98.80.Jk, 98.80.-k, 95.35.+d, 95.36.+x
# Direct Characterization of Quantum Dynamics: General Theory M. Mohseni Department of Chemistry and Chemical Biology, Harvard University, 12 Oxford St., Cambridge, MA 012138 Department of Chemistry, University of Southern California, Los Angeles, CA 90089 D. A. Lidar Department of Chemistry, University of Southern California, Los Angeles, CA 90089 Departments of Electrical Engineering and Physics, University of Southern California, Los Angeles, CA 90089 ## I Introduction The characterization of quantum dynamical systems is a fundamental problem in quantum physics and quantum chemistry. Its ubiquity is due to the fact that knowledge of quantum dynamics of (open or closed) quantum systems is indispensable in prediction of experimental outcomes. In particular, accurate estimation of an unknown quantum dynamical process acting on a quantum system is a pivotal task in coherent control of the dynamics, especially in verifying/monitoring the performance of a quantum device in the presence of decoherence. The procedures for characterization of quantum dynamical maps are traditionally known as quantum process tomography (QPT) [1; 2; 3]. In most QPT schemes the information about the quantum dynamical process is obtained indirectly. The quantum dynamics is first mapped onto the state(s) of an ensemble of probe quantum systems, and then the process is reconstructed via quantum state tomography of the output states. Quantum state tomography is itself a procedure for identifying a quantum system by measuring the expectation values of a set of non-commuting observables on identical copies of the system. There are two general types QPT schemes. The first is Standard Quantum Process Tomography (SQPT) [1; 4; 5]. In SQPT all quantum operations, including preparation and (state tomography) measurements, are performed on the system whose dynamics is to be identified (the \"principal\" system), without the use of any ancillas. The SQPT scheme has already been experimentally demonstrated in a variety of systems including liquid-state nuclear magnetic resonance (NMR) [6; 7; 8], optical [9; 10], atomic [11], and solid-state systems [12]. The second type of QPT scheme is known as Ancilla-Assisted Process Tomography (AAPT) [13; 14; 15; 16]. In AAPT one makes use of an ancilla (auxiliary system). First, the combined principal system and ancilla are prepared in a \"faithful\" state, with the property that all information about the dynamics can be imprinted on the final state [13; 15; 16]. The relevant information is then extracted by performing quantum state tomography in the joint Hilbert space of system and ancilla. The AAPT scheme has also been demonstrated experimentally [15; 17]. The total number of experimental configurations required for measuring the quantum dynamics of \\(n\\)\\(d\\)-level quantum systems (qudits) is \\(d^{4n}\\) for both SQPT and separable AAPT, where separable refers to the measurements performed at the end. This number can in principle be reduced by utilizing non-separable measurements, e.g., a generalized measurement [1]. However, the non-separable QPT schemes are rather impractical in physical applications because they require many-body interactions, which are not experimentally available or must be simulated at high resource cost [3]. Both SQPT and AAPT make use of a mapping of the dynamics onto a state. 
This raises the natural question of whether it is possible to avoid such a mapping and instead perform a _direct_ measurement of quantum dynamics, which does not require any state tomography. Moreover, it seems reasonable that by avoiding the indirect mapping one should be able to attain a reduction in resource use (e.g., the total number of measurements required), by eliminating redundancies. Indeed, there has been a growing interest in the development of direct methods for obtaining specific information about the states or dynamics of quantum systems. Examples include the estimation of general functions of a quantum state [18], detection of quantum entanglement [19], measurement of nonlinear properties of bipartite quantum states [20], reconstruction of quantum states or dynamics from incomplete measurements [21], estimation of the average fidelity of a quantum gate or process [22; 23], and universal source coding and data compression [24]. However, these schemes cannot be used directly for a _complete_ characterization of quantum dynamics. In Ref. [25] we presented such a scheme, which we called \"Direct Characterization of Quantum Dynamics\" (DCQD). In trying to address the problem of _direct_ and _complete_ characterization of quantum dynamics, we were inspired by the observation that quantum error detection (QED) [1] provides a means to directly obtain partial information about the nature of a quantum process, without ever revealing the state of the system. In general, however, it is unclear if there isa fundamental relationship between QED and QPT, namely whether it is possible to completely characterize the quantum dynamics of arbitrary quantum systems using QED. And, providing the answer is affirmative, how the physical resources scale with system size. Moreover, one would like to understand whether entanglement plays a fundamental role, and what potential applications emerge from such a theory linking QPT and QED. Finally, one would hope that this approach may lead to new ways of understanding and/or controlling quantum dynamical systems. We addressed these questions for the first time in Ref. [25] by developing the DCQD algorithm in the context of two-level quantum systems. In DCQD - see Fig. 1 - the state space of an ancilla is utilized such that experimental outcomes from a Bell-state measurement provide direct information about specific properties of the underlying dynamics. A complete set of probe states is then used to fully characterize the unknown quantum dynamics via application of a single Bell-state measurement device [3; 25]. Here we generalize the theory of Ref. [25] to arbitrary open quantum systems undergoing an unknown, completely-positive (CP) quantum dynamical map. In the generalized DCQD scheme, each probe qudit (with \\(d\\) prime) is initially entangled with an ancillary qudit system of the same dimension, before being subjected to the unknown quantum process. To extract the relevant information, the corresponding measurements are devised in such a way that the final (joint) probability distributions of the outcomes are directly related to specific sets of the dynamical superoperator's elements. A complete set of probe states can then be utilized to fully characterize the unknown quantum dynamical map. 
The preparation of the probe systems and the measurement schemes are based on QED techniques, however, the objective and the details of the error-detection schemes are different from those appearing in the protection of quantum systems against decoherence (the original context of QED). More specifically, we develop error-detection schemes to directly measure the coherence in a quantum dynamical process, represented by off-diagonal elements of the corresponding superoperator. We explicitly demonstrate that for characterizing a dynamical map on \\(n\\) qudits, the number of required experimental configurations is reduced from \\(d^{4n}\\), in SQPT and separable AAPT, to \\(d^{2n}\\) in DCQD. A useful feature of DCQD is that it can be efficiently applied to _partial_ characterization of quantum dynamics [25; 26]. For example, it can be used for the task of Hamiltonian identification, and also for simultaneous determination of the relaxation time \\(T_{1}\\) and the dephasing time \\(T_{2}\\). This paper is organized as follows. In Sec. II, we provide a brief review of completely-positive quantum dynamical maps, and the relevant QED concepts such as stabilizer codes and normalizers. In Sec. III, we demonstrate how to determine the quantum dynamical populations, or diagonal elements of a superoperator, through a single (ensemble) measurement. In order to further develop the DCQD algorithm and build the required notations, we introduce some lemmas and definitions in Sec. IV, and then we address the characterization of quantum dynamical coherences, or off-diagonal elements of a superoperator, in Sec. V. In Sec. VI, we show that measurement outcomes obtained in Sec. V provide \\(d^{2}\\) linearly independent equations for estimating the coherences in a process, which is in fact the maximum amount of information that can be extracted in a single measurement. A complete characterization of the quantum dynamics, however, requires obtaining \\(d^{4}\\) independent real parameters of the superoperator (for non-trace preserving maps). In Sec. VII, we demonstrate how one can obtain complete information by appropriately rotating the input state and repeating the above algorithm for a complete set of rotations. In Sec. VIII and IX, we address the general constraints on input stabilizer codes and the minimum number of physical qudits required for the encoding. In Sec. X and Sec. XI, we define a standard notation for stabilizer and normalizer measurements and then provide an outline of the DCQD algorithm for the case of a single qudit. For convenience, we provide a brief summary of the entire DCQD algorithm in Sec. XII. We conclude with an outlook in Section XIII. In Appendix A, we generalize the scheme for arbitrary open quantum systems. For a discussion of the experimental feasibility of DCQD see Ref. [25], and for a detailed and comprehensive comparison of the required physical resources in different QPT schemes see Ref. [3]. ## II Preliminaries In this section we introduce the basic concepts and notation from the theory of open quantum system dynamics and quantum error detection, required for the generalization of the DCQD algorithm to qudits. ### Quantum Dynamics The evolution of a quantum system (open or closed) can, under natural assumptions, be expressed in terms of a completely positive quantum dynamical map \\(\\mathcal{E}\\), which can be represented as [1] \\[\\mathcal{E}(\\rho)=\\sum_{m,n=0}^{d^{2}-1}\\chi_{mn}\\;E_{m}\\rho E_{n}^{\\dagger}. 
\\tag{1}\\] Here \\(\\rho\\) is the initial state of the system, and the \\(\\{E_{m}\\}\\) are a set of (error) operator basis elements in the Hilbert-Schmidt space of the linear operators acting on the system. I.e., any arbitrary operator acting on a \\(d\\)-dimensional quantum system can be expanded over an orthonormal and unitary error operator basis \\(\\{E_{0},E_{1},\\ldots,E_{d^{2}-1}\\}\\), where \\(E_{0}=I\\) and \\(\\text{tr}(E_{i}^{\\dagger}E_{j})=d\\delta_{ij}\\) [27]. Figure 1: Schematic of DCQD for a single qubit, consisting of Bell-state preparations, application of the unknown quantum map, \\(\\mathcal{E}\\), and Bell-state measurement (BSM). The \\(\\{\\chi_{mn}\\}\\) are the matrix elements of the superoperator \\(\\mathbf{\\chi}\\), or \"process matrix\", which encodes all the information about the dynamics, relative to the basis set \\(\\{E_{m}\\}\\) [1]. For an \\(n\\)-qudit system, the number of independent matrix elements in \\(\\mathbf{\\chi}\\) is \\(d^{4n}\\) for a non-trace-preserving map and \\(d^{4n}-d^{2n}\\) for a trace-preserving map. The process matrix \\(\\mathbf{\\chi}\\) is positive and \\(\\mathrm{Tr}\\mathbf{\\chi}\\leq 1\\). Thus \\(\\mathbf{\\chi}\\) can be thought of as a density matrix in the Hilbert-Schmidt space, whence we often refer to its diagonal and off-diagonal elements as \"quantum dynamical population\" and \"quantum dynamical coherence\", respectively. In general, any successive operation of the (error) operator basis can be expressed as \\(E_{i}E_{j}=\\sum_{k}\\omega^{i,j,k}E_{k}\\), where \\(i,j,k=0,1,\\ldots,d^{2}-1\\). However, we use the \"very nice (error) operator basis\" in which \\(E_{i}E_{j}=\\omega^{i,j}E_{i*j}\\), \\(\\det E_{i}=1\\), \\(\\omega^{i,j}\\) is a \\(d\\)th root of unity, and the operation \\(*\\) induces a group on the indices [27]. This provides a natural generalization of the Pauli group to higher dimensions. Any element \\(E_{i}\\) can be generated from appropriate products of \\(X_{d}\\) and \\(Z_{d}\\), where \\(X_{d}\\ket{k}=\\ket{k+1}\\), \\(Z_{d}\\ket{k}=\\omega^{k}\\ket{k}\\), and \\(X_{d}Z_{d}=\\omega^{-1}Z_{d}X_{d}\\) [27; 28]. Therefore, for any two elements \\(E_{i=\\{a,q,p\\}}=\\omega^{a}X_{d}^{q}Z_{d}^{p}\\) and \\(E_{j=\\{a^{\\prime},q^{\\prime},p^{\\prime}\\}}=\\omega^{a^{\\prime}}X_{d}^{q^{\\prime}}Z_{d}^{p^{\\prime}}\\) (where \\(0\\leq q,p<d\\)) of the single-qudit Pauli group, we always have \\[E_{i}E_{j}=\\omega^{pq^{\\prime}-qp^{\\prime}}E_{j}E_{i}\\, \\tag{2}\\] where \\[pq^{\\prime}-qp^{\\prime}\\equiv k\\:(\\mathrm{mod}\\:d). \\tag{3}\\] The operators \\(E_{i}\\) and \\(E_{j}\\) commute iff \\(k=0\\). Henceforth, all algebraic operations are performed in \\(\\mathrm{mod}(d)\\) arithmetic, and all quantum states and operators, respectively, belong to and act on a \\(d\\)-dimensional Hilbert space. For simplicity, from now on we drop the subscript \\(d\\) from the operators. ### Quantum Error Detection In the last decade the theory of quantum error correction (QEC) has been developed as a general method for detecting and correcting quantum dynamical errors acting on multi-qubit systems such as a quantum computer [1]. QEC consists of three steps: preparation, quantum error detection (QED) or syndrome measurements, and recovery. In the preparation step, the state of a quantum system is encoded into a subspace of a larger Hilbert space by entangling the principal system with some other quantum systems using unitary operations. 
This encoding is designed to allow detection of arbitrary errors on one (or more) physical qubits of a code by performing a set of QED measurements. The measurement strategy is to map different possible sets of errors only to orthogonal and undeformed subspaces of the total Hilbert space, such that the errors can be unambiguously discriminated. Finally the detected errors can be corrected by applying the required unitary operations on the physical qubits during the recovery step. A key observation relevant for our purposes is that by performing QED one can actually obtain partial information about the dynamics of an open quantum system. For a qudit in a general state \\(\\ket{\\phi_{c}}\\) in the code space, and for arbitrary error basis elements \\(E_{m}\\) and \\(E_{n}\\), the Knill-Laflamme QEC condition for degenerate codes is \\(\\bra{\\phi_{c}}E_{n}^{\\dagger}E_{m}\\ket{\\phi_{c}}=\\alpha_{nm}\\), where \\(\\alpha_{nm}\\) is a Hermitian matrix of complex numbers [1]. For nondegenerate codes, the QEC condition reduces to \\(\\bra{\\phi_{c}}E_{n}^{\\dagger}E_{m}\\ket{\\phi_{c}}=\\delta_{nm}\\); i.e., in this case the errors always take the code space to orthogonal subspaces. The difference between nondegenerate and degenerate codes is illustrated in Fig. 2. In this work, we concentrate on a large class of error-correcting codes known as stabilizer codes [29]; however, in contrast to QEC, we restrict our attention almost entirely to degenerate stabilizer codes as the initial states. Moreover, by definition of our problem, the recovery/correction step is not needed or used in our analysis. A stabilizer code is a subspace \\(\\mathcal{H}_{C}\\) of the Hilbert space of \\(n\\) qubits that is an eigenspace of a given Abelian subgroup \\(\\mathcal{S}\\) of the \\(n\\)-qubit Pauli group with the eigenvalue \\(+1\\)[1; 29]. In other words, for \\(\\ket{\\phi_{c}}\\in\\mathcal{H}_{C}\\) and \\(S_{i}\\in\\mathcal{S}\\), we have \\(S_{i}\\ket{\\phi_{c}}=\\ket{\\phi_{c}}\\), where \\(S_{i}\\)'s are the stabilizer _generators_ and \\([S_{i},S_{j}]=0\\). Consider the action of an arbitrary error operator \\(E\\) on the stabilizer code \\(\\ket{\\phi_{c}}\\), \\(E\\ket{\\phi_{c}}\\). The detection of such an error will be possible if the error operator anticommutes with (at least one of) the stabilizer generators, \\(\\{S_{i},E\\}=0\\). I.e, by measuring all generators of the stabilizer and obtaining one or more negative eigenvalues we can determine the nature of the error unambiguously as: \\[S_{i}(E\\ket{\\phi_{c}})=-E(S_{i}\\ket{\\phi_{c}})=-(E\\ket{\\phi_{c}}).\\] A stabilizer code \\([n\\text{,}k\\text{,}d_{c}]\\) represents an encoding of \\(k\\) logical qudits into \\(n\\) physical qudits with code distance \\(d_{c}\\), such that an arbitrary error on any subset of \\(t=(d_{c}-1)/2\\) or fewer qudits can be detected by QED measurements. A stabilizer group with \\(n-k\\) generators has \\(d^{n-k}\\) elements and the code space is \\(d^{k}\\)-dimensional. Note that this is valid when \\(d\\) is a power of a prime [28]. The unitary operators that preserve the stabilizer group by conjugation, i.e., \\(USU^{\\dagger}=S\\), are called the normalizer of the stabilizer group, \\(N(S)\\). Since the normalizer elements preserve the code space they can be used to perform certain logical operations in the code space. However, they are insufficient for performing arbitrary quantum operations [1]. 
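As a concrete sanity check of the algebra used throughout this paper, the following short NumPy sketch (our own illustration, with \\(d=3\\) chosen arbitrarily) constructs the generalized Pauli operators \\(X_{d}\\) and \\(Z_{d}\\) and verifies both the orthogonality relation \\(\\text{tr}(E_{i}^{\\dagger}E_{j})=d\\delta_{ij}\\) and the commutation relation of Eq. (2) for a few representative pairs \\(E_{q,p}=X^{q}Z^{p}\\):

```python
# A minimal NumPy sketch (not from the original derivation) that builds the
# generalized Pauli operators X_d and Z_d for a qudit and checks the relations
# quoted above: tr(E_i^dagger E_j) = d * delta_ij and
# E_{q,p} E_{q',p'} = omega^(p q' - q p') E_{q',p'} E_{q,p}.
import numpy as np

d = 3                                   # qutrit example; any prime d works
omega = np.exp(2j * np.pi / d)

X = np.roll(np.eye(d), 1, axis=0)       # X|k> = |k+1 mod d>
Z = np.diag(omega ** np.arange(d))      # Z|k> = omega^k |k>

def E(q, p):
    """Error-basis element X^q Z^p."""
    return np.linalg.matrix_power(X, q) @ np.linalg.matrix_power(Z, p)

# Orthogonality of the error basis: tr(E_i^dagger E_j) = d * delta_ij.
basis = [E(q, p) for q in range(d) for p in range(d)]
gram = np.array([[np.trace(A.conj().T @ B) for B in basis] for A in basis])
assert np.allclose(gram, d * np.eye(d * d))

# Commutation relation, Eq. (2), for a few representative pairs.
for q, p, qp, pp in [(1, 0, 0, 1), (1, 2, 2, 1), (2, 2, 1, 0)]:
    lhs = E(q, p) @ E(qp, pp)
    rhs = omega ** ((p * qp - q * pp) % d) * E(qp, pp) @ E(q, p)
    assert np.allclose(lhs, rhs)

print("error-basis orthogonality and Eq. (2) verified for d =", d)
```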
Similarly to the case of a qubit [25], the DCQD algorithm for the case of a qudit system consists of two procedures: (i) a single experimental configuration for characterization of the quantum dynamical populations, and (ii) \\(d^{2}-1\\) experimental configurations for characterization of the quantum dynamical coherences. In both procedures we always use two physical qudits for the encoding, the principal system \\(A\\) and the ancilla \\(B\\), i.e., \\(n=2\\). In procedure (i) - characterizing the diagonal elements of the superoperator - the stabilizer group has two generators. Therefore it has \\(d^{2}\\) elements and the code space consists of a _single_ quantum state (i.e., \\(k=0\\)). In procedure (ii) - characterizing the off-diagonal elements of the superoperator - the stabilizer group has a single generator, thus it has \\(d\\) elements, and the code space is two-dimensional. That is, we effectively encode a logical qudit (i.e., \\(k=1\\)) into two physical qudits. In the next sections, we develop the procedures (i) and (ii) in detail for a single qudit with \\(d\\) being a prime, and in Appendix A we address the generalization to systems with \\(d\\) being an arbitrary power of a prime. ## III Characterization of quantum dynamical population To characterize the diagonal elements of the superoperator, or the population of the unitary error basis, we use a non-degenerate stabilizer code. We prepare the principal qudit, \\(A\\), and an ancilla qudit, \\(B\\), in a common \\(+1\\) eigenstate \\(\\left|\\phi_{c}\\right\\rangle\\) of the two unitary operators \\(E_{i}^{A}E_{j}^{B}\\) and \\(E_{i^{\\prime}}^{A}E_{j^{\\prime}}^{B}\\), such that \\([E_{i}^{A}E_{j}^{B},E_{i^{\\prime}}^{A}E_{j^{\\prime}}^{B}]=0\\) (e.g., \\(X^{A}X^{B}\\) and \\(Z^{A}(Z^{B})^{d-1}\\)). Therefore, simultaneous measurement of these stabilizer generators at the end of the dynamical process reveals arbitrary single-qudit errors on the system \\(A\\). The possible outcomes depend on whether a specific operator in the operator-sum representation of the quantum dynamics commutes with \\(E_{i}^{A}E_{j}^{B}\\) and \\(E_{i^{\\prime}}^{A}E_{j^{\\prime}}^{B}\\), with the eigenvalue \\(+1\\), or with one of the eigenvalues \\(\\omega,\\omega^{2},\\ldots,\\omega^{d-1}\\). The projection operators corresponding to outcomes \\(\\omega^{k}\\) and \\(\\omega^{k^{\\prime}}\\), where \\(k\\),\\(k^{\\prime}=0,1,\\ldots,d-1\\), have the form \\(P_{k}=\\frac{1}{d}\\sum_{l=0}^{d-1}\\omega^{-lk}(E_{i}^{A}E_{j}^{B})^{l}\\) and \\(P_{k^{\\prime}}=\\frac{1}{d}\\sum_{l^{\\prime}=0}^{d-1}\\omega^{-l^{\\prime}k^{\\prime}}(E_{i^{\\prime}}^{A}E_{j^{\\prime}}^{B})^{l^{\\prime}}\\).
The joint probability distribution of the commuting Hermitian operators \\(P_{k}\\) and \\(P_{k^{\\prime}}\\) on the output state \\(\\mathcal{E}(\\rho)=\\sum_{m,n}\\chi_{mn}\\;E_{m}\\rho E_{n}^{\\dagger}\\), where \\(\\rho=\\left|\\phi_{c}\\right\\rangle\\left\\langle\\phi_{c}\\right|\\), is: \\[\\mathrm{Tr}[P_{k}P_{k^{\\prime}}\\mathcal{E}(\\rho)]=\\frac{1}{d^{2}}\\sum_{m,n=0}^{d^{2}-1}\\chi_{mn}\\sum_{l=0}^{d-1}\\sum_{l^{\\prime}=0}^{d-1}\\omega^{-lk}\\omega^{-l^{\\prime}k^{\\prime}}\\mathrm{Tr}[\\;E_{n}^{\\dagger}(E_{i}^{A})^{l}(E_{i^{\\prime}}^{A})^{l^{\\prime}}E_{m}(E_{j}^{B})^{l}(E_{j^{\\prime}}^{B})^{l^{\\prime}}\\rho].\\] Using \\(E_{i}E_{m}=\\omega^{i_{m}}E_{m}E_{i}\\) and the relation \\((E_{i}^{A}E_{j}^{B})^{l}(E_{i^{\\prime}}^{A}E_{j^{\\prime}}^{B})^{l^{\\prime}}\\rho=\\rho\\), we obtain: \\[\\mathrm{Tr}[P_{k}P_{k^{\\prime}}\\mathcal{E}(\\rho)]=\\frac{1}{d^{2}}\\sum_{m,n=0}^{d^{2}-1}\\chi_{mn}\\sum_{l=0}^{d-1}\\sum_{l^{\\prime}=0}^{d-1}\\omega^{(i_{m}-k)l}\\omega^{(i_{m}^{\\prime}-k^{\\prime})l^{\\prime}}\\delta_{mn},\\] where we have used the QED condition for nondegenerate codes: \\[\\mathrm{Tr}[E_{n}^{\\dagger}E_{m}\\rho]=\\left\\langle\\phi_{c}\\right|E_{n}^{\\dagger}E_{m}\\left|\\phi_{c}\\right\\rangle=\\delta_{mn},\\] i.e., the fact that different errors should take the code space to orthogonal subspaces, in order for errors to be unambiguously detectable; see Fig. 3. Now, using the discrete Fourier transform identities \\(\\sum_{l=0}^{d-1}\\omega^{(i_{m}-k)l}=d\\delta_{i_{m},k}\\) and \\(\\sum_{l^{\\prime}=0}^{d-1}\\omega^{(i_{m}^{\\prime}-k^{\\prime})l^{\\prime}}=d\\delta_{i_{m}^{\\prime},k^{\\prime}}\\), we obtain: \\[\\mathrm{Tr}[P_{k}P_{k^{\\prime}}\\mathcal{E}(\\rho)]=\\sum_{m=0}^{d^{2}-1}\\chi_{mm}\\;\\delta_{i_{m},k}\\delta_{i_{m}^{\\prime},k^{\\prime}}=\\chi_{m_{0}m_{0}}. \\tag{4}\\] Here, \\(m_{0}\\) is defined through the relations \\(i_{m_{0}}=k\\) and \\(i_{m_{0}}^{\\prime}=k^{\\prime}\\), i.e., \\(E_{m_{0}}\\) is the unique error operator that anticommutes with the stabilizer operators with a fixed pair of eigenvalues \\(\\omega^{k}\\) and \\(\\omega^{k^{\\prime}}\\) corresponding to the experimental outcomes \\(k\\) and \\(k^{\\prime}\\). Figure 2: A schematic diagram of Quantum Error Detection (QED). The projective measurements corresponding to eigenvalues of stabilizer generators are represented by arrows. For a non-degenerate QEC code, after the QED, the wavefunction of the multiqubit system collapses into one of the orthogonal subspaces, each of which is associated with a single error operator. Therefore, all errors can be unambiguously discriminated. For degenerate codes, by performing QED the code space also collapses into a set of orthogonal subspaces. However, each subspace has multiple degeneracies among \\(k_{0}\\) error operators in a subset of the operator basis, i.e., \\(\\left\\{E_{m}\\right\\}_{m=1}^{k_{0}}\\subset\\left\\{E_{i}\\right\\}_{i=0}^{d^{2}-1}\\). In this case, one cannot distinguish between different operators within a particular subset \\(\\left\\{E_{m}\\right\\}_{m=1}^{k_{0}}\\). Since each \\(P_{k}\\) and \\(P_{k^{\\prime}}\\) operator has \\(d\\) eigenvalues, we have \\(d^{2}\\) possible outcomes, which gives us \\(d^{2}\\) linearly independent equations. Therefore, we can _characterize all the diagonal elements of the superoperator with a single ensemble measurement_ and \\(2d\\) detectors.
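As an illustration of procedure (a), the following self-contained NumPy sketch (our own, with \\(d=3\\) and an arbitrarily chosen Pauli channel, i.e., a map with diagonal \\(\\mathbf{\\chi}\\)) prepares the maximally entangled input, applies the map to qudit \\(A\\), and confirms that the \\(d^{2}\\) joint outcome probabilities of the two stabilizer measurements reproduce the \\(d^{2}\\) diagonal elements \\(\\chi_{mm}\\), as in Eq. (4):

```python
# A NumPy sketch (our illustration, with an arbitrary Pauli channel) of
# procedure (a) / Eq. (4) for d = 3: the d^2 joint outcome probabilities of the
# two stabilizer measurements reproduce the d^2 diagonal elements chi_mm of the
# superoperator in a single experimental configuration.
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(omega ** np.arange(d))
E = lambda q, p: np.linalg.matrix_power(X, q) @ np.linalg.matrix_power(Z, p)
kron = np.kron

# Maximally entangled input state, stabilized by X^A X^B and Z^A (Z^B)^(d-1).
phi = sum(kron(np.eye(d)[:, k], np.eye(d)[:, k]) for k in range(d)) / np.sqrt(d)
rho_in = np.outer(phi, phi.conj())

# An arbitrary "Pauli channel" on qudit A: diagonal chi with random weights.
rng = np.random.default_rng(1)
chi_diag = rng.random(d * d)
chi_diag /= chi_diag.sum()
ops = [kron(E(q, p), np.eye(d)) for q in range(d) for p in range(d)]
rho_out = sum(c * K @ rho_in @ K.conj().T for c, K in zip(chi_diag, ops))

# Projectors onto the eigenvalues omega^k of the two stabilizer generators.
S1, S2 = kron(X, X), kron(Z, np.linalg.matrix_power(Z, d - 1))
def projectors(S):
    return [sum(omega ** (-l * k) * np.linalg.matrix_power(S, l)
                for l in range(d)) / d for k in range(d)]
P1, P2 = projectors(S1), projectors(S2)

probs = [np.real(np.trace(P1[k] @ P2[kp] @ rho_out))
         for k in range(d) for kp in range(d)]

# Each joint outcome (k, k') singles out exactly one chi_mm, Eq. (4).
assert np.allclose(sorted(probs), sorted(chi_diag))
print("joint stabilizer outcomes reproduce all chi_mm:", np.round(sorted(probs), 3))
```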
In order to investigate the properties of the pure state \\(\\left|\\phi_{c}\\right\\rangle\\), we note that the code space is one-dimensional (i.e., it has only one vector) and can be Schmidt decomposed as \\(\\left|\\phi_{c}\\right\\rangle=\\sum_{k=0}^{d-1}\\lambda_{k}\\left|k\\right\\rangle_{A}\\left|k\\right\\rangle_{B}\\), where \\(\\lambda_{k}\\) are non-negative real numbers. Suppose \\(Z\\left|k\\right\\rangle=\\omega^{k}\\left|k\\right\\rangle\\); without loss of generality the two stabilizer generators of \\(\\left|\\phi_{c}\\right\\rangle\\) can be chosen to be \\((X^{A}X^{B})^{q}\\) and \\([Z^{A}(Z^{B})^{d-1}]^{p}\\). We then have \\(\\left\\langle\\phi_{c}\\right|(X^{A}X^{B})^{q}\\left|\\phi_{c}\\right\\rangle=1\\) and \\(\\left\\langle\\phi_{c}\\right|[Z^{A}(Z^{B})^{d-1}]^{p}\\left|\\phi_{c}\\right\\rangle=1\\) for any \\(q\\) and \\(p\\), where \\(0\\leq q,p<d\\). This results in the set of equations \\(\\sum_{k=0}^{d-1}\\lambda_{k}\\lambda_{k+q}=1\\) for all \\(q\\), which has only one positive real solution: \\(\\lambda_{0}=\\lambda_{1}=\\ldots=\\lambda_{d-1}=1/\\sqrt{d}\\); i.e., the stabilizer state, \\(\\left|\\phi_{c}\\right\\rangle\\), is a _maximally entangled state_ in the Hilbert space of the two qudits. In the remaining parts of this paper, we first develop an algorithm for extracting optimal information about the dynamical coherence of a \\(d\\)-level quantum system (with \\(d\\) being a prime), through a single experimental configuration, in Secs. IV, V and VI. Then, we further develop the algorithm to obtain complete information about the off-diagonal elements of the superoperator by repeating the same scheme for different input states, Sec. VII. In Appendix A, we address the generalization of the DCQD algorithm for qudit systems with \\(d\\) being a power of a prime. In the first step, in the next section, we establish the required notation by introducing some lemmas and definitions. ## IV Basic Lemmas and Definitions **Lemma 1**: Let \\(0\\leq q,p,q^{\\prime},p^{\\prime}<d\\), where \\(d\\) is prime. Then, for given \\(q\\), \\(p\\), \\(q^{\\prime}\\) and \\(k(\\mathrm{mod}\\ d)\\), there is a unique \\(p^{\\prime}\\) that solves \\(pq^{\\prime}-qp^{\\prime}=k\\ (\\mathrm{mod}\\ d)\\). **Proof:** We have \\(pq^{\\prime}-qp^{\\prime}=k\\ (\\mathrm{mod}\\ d)=k+td\\), where \\(t\\) is an integer. The possible solutions for \\(p^{\\prime}\\) are indexed by \\(t\\) as \\(p^{\\prime}(t)=(pq^{\\prime}-k-td)/q\\). We now show that if \\(p^{\\prime}(t_{1})\\) is a solution for a specific value \\(t_{1}\\), there exists no other integer \\(t_{2}\\neq t_{1}\\) such that \\(p^{\\prime}(t_{2})\\) is another independent solution to this equation, i.e., \\(p^{\\prime}(t_{2})\\neq p^{\\prime}(t_{1})\\ (\\mathrm{mod}\\ d)\\). First, note that if \\(p^{\\prime}(t_{2})\\) is another solution then we have \\(p^{\\prime}(t_{1})=p^{\\prime}(t_{2})+(t_{2}-t_{1})d/q\\). Since \\(d\\) is prime, there are two possibilities: a) \\(q\\) divides \\((t_{2}-t_{1})\\), then \\((t_{2}-t_{1})d/q=\\pm nd\\), where \\(n\\) is a positive integer; therefore we have \\(p^{\\prime}(t_{2})=p^{\\prime}(t_{1})\\ (\\mathrm{mod}\\ d)\\), which contradicts our assumption that \\(p^{\\prime}(t_{2})\\) is an independent solution from \\(p^{\\prime}(t_{1})\\). b) \\(q\\) does not divide \\((t_{2}-t_{1})\\), then \\((t_{2}-t_{1})d/q\\) is not an integer, which is unacceptable. Thus, we have \\(t_{2}=t_{1}\\), i.e., the solution \\(p^{\\prime}(t)\\) is unique.
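Lemma 1 is easy to confirm by brute force; the short Python check below (our own, over a few small dimensions) verifies uniqueness for prime \\(d\\) with \\(q\\neq 0\\) (as in the proof, which divides by \\(q\\)) and exhibits the failure of uniqueness for the composite dimension \\(d=4\\):

```python
# A brute-force check (our own) of Lemma 1: for prime d and q != 0, the
# congruence p*q' - q*p' = k (mod d) has exactly one solution p' in Z_d;
# for a composite dimension such as d = 4 uniqueness can fail.
def solutions(d, q, p, qprime, k):
    return [pp for pp in range(d) if (p * qprime - q * pp) % d == k]

for d in (2, 3, 5, 7):
    assert all(len(solutions(d, q, p, qp, k)) == 1
               for q in range(1, d) for p in range(d)
               for qp in range(d) for k in range(d))

print(solutions(4, 2, 1, 1, 1))   # d = 4, q = 2: two solutions, [0, 2]
print(solutions(4, 2, 1, 2, 1))   # d = 4, q = 2: no solution at all, []
```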
Note that the above argument does not hold if \\(d\\) is not prime, and therefore, for some \\(q^{\\prime}\\) there could be more than one \\(p^{\\prime}\\) that satisfies \\(pq^{\\prime}-qp^{\\prime}\\equiv k\\ (\\mathrm{mod}\\ d)\\). In general, the validity of this lemma relies on the fact that \\(\\mathbb{Z}_{d}\\) is a field only for prime \\(d\\). **Lemma 2**: For any unitary error operator basis \\(E_{i}\\) acting on a Hilbert space of dimension \\(d\\), where \\(d\\) is a prime and \\(i=0,1,\\ldots,d^{2}-1\\), there are \\(d\\) unitary error operator basis elements, \\(E_{j}\\), that anticommute with \\(E_{i}\\) with a specific eigenvalue \\(\\omega^{k}\\), i.e., \\(E_{i}E_{j}=\\omega^{k}E_{j}E_{i}\\), where \\(k=0,\\ldots,d-1\\). **Proof:** We have \\(E_{i}E_{j}=\\omega^{pq^{\\prime}-qp^{\\prime}}E_{j}E_{i}\\), where \\(0\\leq q\\),\\(p\\),\\(q^{\\prime}\\),\\(p^{\\prime}<d\\), and \\(pq^{\\prime}-qp^{\\prime}\\equiv k\\ (\\mathrm{mod}\\ d)\\). Therefore, for fixed \\(q\\), \\(p\\), and \\(k\\ (\\mathrm{mod}\\ d)\\) we need to show that there are \\(d\\) solutions (\\(q^{\\prime}\\),\\(p^{\\prime}\\)). According to Lemma 1, for any \\(q^{\\prime}\\) there is only one \\(p^{\\prime}\\) that satisfies \\(pq^{\\prime}-qp^{\\prime}=k\\ (\\mathrm{mod}\\ d)\\); but \\(q^{\\prime}\\)can have \\(d\\) possible values, therefore there are \\(d\\) possible pairs of \\((q^{\\prime}\\),\\(p^{\\prime})\\). **Definition 1**: We introduce \\(d\\) different subsets, \\(W_{k}^{i}\\), \\(k=0,1,\\ldots,d-1\\), of a unitary error operator basis \\(\\{E_{j}\\}\\) (i.e. \\(W_{k}^{i}\\subset\\{E_{j}\\}\\)). Each subset contains \\(d\\) members which all anticommute with a particular basis element \\(E_{i}\\), where \\(i=0,1,\\ldots,d^{2}-1,\\) with fixed eigenvalue \\(\\omega^{k}.\\) The subset \\(W_{0}^{i}\\) which includes \\(E_{0}\\) and \\(E_{i}\\) is in fact an Abelian subgroup of the single-qudit Pauli group, \\(G_{1}\\). ## V Characterization of Quantum Dynamical Coherence For characterization of the coherence in a quantum dynamical process acting on a qudit system, we prepare a two-qudit quantum system in a non-separable eigenstate \\(\\left|\\phi_{ij}\\right\\rangle\\) of a unitary operator \\(S_{ij}=E_{i}^{A}E_{j}^{B}\\). We then subject the qudit \\(A\\) to the unknown dynamical map, and measure the sole stabilizer operator \\(S_{ij}\\) at the output state. Here, the state \\(\\left|\\phi_{ij}\\right\\rangle\\) is in fact a degenerate code space, since all the operators \\(E_{n}^{A}\\) that anticommute with \\(E_{i}^{A},\\) with a particular eigenvalue \\(\\omega^{k}\\), perform the same transformation on the code space and cannot be distinguished by the stabilizer measurement. 
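The subsets \\(W_{k}^{i}\\) just introduced can be enumerated directly from the commutation phase of Eq. (2); the following short Python sketch (our own, with \\(d=5\\) chosen arbitrarily) confirms Lemma 2 and Definition 1, namely that for every nontrivial \\(E_{i}\\) the \\(d^{2}\\) basis elements split into \\(d\\) subsets \\(W_{k}^{i}\\) of exactly \\(d\\) operators each:

```python
# A small enumeration (our own) illustrating Lemma 2 and Definition 1 for a
# qudit of prime dimension d: relative to any fixed nontrivial E_i = X^{q_i} Z^{p_i},
# the d^2 basis elements E_j = X^{q'} Z^{p'} split into d subsets W_k^i of
# exactly d operators each, labelled by the commutation phase omega^k of Eq. (2).
from itertools import product

d = 5
for qi, pi in product(range(d), repeat=2):
    if (qi, pi) == (0, 0):
        continue                                    # skip the identity E_0
    W = {k: [] for k in range(d)}
    for qj, pj in product(range(d), repeat=2):
        k = (pi * qj - qi * pj) % d                 # E_i E_j = omega^k E_j E_i
        W[k].append((qj, pj))
    assert all(len(W[k]) == d for k in range(d))    # each W_k^i has d members
    assert (0, 0) in W[0] and (qi, pi) in W[0]      # W_0^i contains E_0 and E_i
print("each W_k^i contains exactly", d, "operators for every nontrivial E_i")
```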
If we express the spectral decomposition of \\(S_{ij}=E_{i}^{A}E_{j}^{B}\\) as \\(S_{ij}=\\sum_{k}\\omega^{k}P_{k}\\), the projection operator corresponding to the outcome \\(\\omega^{k}\\) can be written as \\(P_{k}=\\frac{1}{d}\\sum_{l=0}^{d-1}\\omega^{-lk}(E_{i}^{A}E_{j}^{B})^{l}\\). The post-measurement state of the system, up to a normalization factor, will be: \\[P_{k}\\mathcal{E}(\\rho)P_{k}=\\frac{1}{d^{2}}\\sum_{m,n=0}^{d^{2}-1}\\chi_{mn}\\sum_{l=0}^{d-1}\\sum_{l^{\\prime}=0}^{d-1}\\omega^{-lk}\\omega^{l^{\\prime}k}[(E_{i}^{A}E_{j}^{B})^{l}E_{m}\\rho E_{n}^{\\dagger}(E_{i}^{A\\dagger}E_{j}^{B\\dagger})^{l^{\\prime}}].\\] Using the relations \\(E_{i}E_{m}=\\omega^{i_{m}}E_{m}E_{i}\\), \\(E_{n}^{\\dagger}E_{i}^{\\dagger}=\\omega^{-i_{n}}E_{i}^{\\dagger}E_{n}^{\\dagger}\\) and \\((E_{i}^{A}E_{j}^{B})^{l}\\rho(E_{i}^{A\\dagger}E_{j}^{B\\dagger})^{l^{\\prime}}=\\rho\\) we have: \\[P_{k}\\mathcal{E}(\\rho)P_{k}=\\frac{1}{d^{2}}\\sum_{m,n=0}^{d^{2}-1}\\chi_{mn}\\sum_{l=0}^{d-1}\\omega^{(i_{m}-k)l}\\sum_{l^{\\prime}=0}^{d-1}\\omega^{(k-i_{n})l^{\\prime}}\\;E_{m}\\rho E_{n}^{\\dagger}.\\] Now, using the discrete Fourier transform properties \\(\\sum_{l=0}^{d-1}\\omega^{(i_{m}-k)l}=d\\delta_{i_{m},k}\\) and \\(\\sum_{l^{\\prime}=0}^{d-1}\\omega^{(k-i_{n})l^{\\prime}}=d\\delta_{i_{n},k}\\), we obtain: \\[P_{k}\\mathcal{E}(\\rho)P_{k}=\\sum_{m}\\chi_{mm}\\;E_{m}^{A}\\rho E_{m}^{A\\dagger}+\\sum_{m<n}(\\chi_{mn}\\;E_{m}^{A}\\rho E_{n}^{A\\dagger}+\\chi_{mn}^{*}\\;E_{n}^{A}\\rho E_{m}^{A\\dagger}). \\tag{5}\\] Here, the summation runs over all \\(E_{m}^{A}\\) and \\(E_{n}^{A}\\) that belong to the same \\(W_{k}^{i}\\); see Lemma 2. I.e., the summation is over all unitary operator basis elements \\(E_{m}^{A}\\) and \\(E_{n}^{A}\\) that anticommute with \\(E_{i}^{A}\\) with a particular eigenvalue \\(\\omega^{k}\\). Since the number of elements in each \\(W_{k}^{i}\\) is \\(d\\), the state of the two-qudit system after the projective measurement comprises \\(d+2[d(d-1)/2]=d^{2}\\) terms. The probability of getting the outcome \\(\\omega^{k}\\) is: \\[\\mathrm{Tr}[P_{k}\\mathcal{E}(\\rho)]=\\sum_{m}\\chi_{mm}+2\\sum_{m<n}\\mathrm{Re}[\\chi_{mn}\\;\\mathrm{Tr}(E_{n}^{A\\dagger}E_{m}^{A}\\rho)]. \\tag{6}\\] Therefore, the normalized post-measurement states are \\(\\rho_{k}=P_{k}\\mathcal{E}(\\rho)P_{k}/\\mathrm{Tr}[P_{k}\\mathcal{E}(\\rho)]\\). These \\(d\\) equations provide us with information about off-diagonal elements of the superoperator iff \\(\\mathrm{Tr}[(E_{n}^{A})^{\\dagger}E_{m}^{A}\\rho]\\neq 0\\). Later we will derive some general properties of the state \\(\\rho\\) such that this condition can be satisfied. Next we measure the expectation value of any other unitary operator basis element \\(T_{rs}=E_{r}^{A}E_{s}^{B}\\) on the output state, such that \\(E_{r}^{A}\\neq I\\), \\(E_{s}^{B}\\neq I\\), \\(T_{rs}\\in N(S)\\) and \\(T_{rs}\\neq(S_{ij})^{a}\\), where \\(0\\leq a<d\\). Let us write the spectral decomposition of \\(T_{rs}\\) as \\(T_{rs}=\\sum\\limits_{k^{\\prime}}\\omega^{k^{\\prime}}P_{k^{\\prime}}\\). The joint probability distribution of the commuting Hermitian operators \\(P_{k}\\) and \\(P_{k^{\\prime}}\\) on the output state \\(\\mathcal{E}(\\rho)\\) is \\(\\mathrm{Tr}[P_{k^{\\prime}}P_{k}\\mathcal{E}(\\rho)]\\).
The average of these joint probability distributions of \\(P_{k}\\) and \\(P_{k^{\\prime}}\\) over different values of \\(k^{\\prime}\\) becomes: \\(\\sum_{k^{\\prime}}\\omega^{k^{\\prime}}\\mathrm{Tr}[P_{k^{\\prime}}P_{k}\\mathcal{E}(\\rho)]=\\mathrm{Tr}[T_{rs}P_{k}\\mathcal{E}(\\rho)]=\\mathrm{Tr}(T_{rs}\\rho_{k})\\), which can be explicitly written as: \\[\\mathrm{Tr}(T_{rs}\\rho_{k}) = \\sum_{m}\\chi_{mm}\\;\\mathrm{Tr}(E_{m}^{A\\dagger}E_{r}^{A}E_{s}^{B}E_{m}^{A}\\rho)+\\sum_{m<n}[\\chi_{mn}\\;\\mathrm{Tr}(E_{n}^{A\\dagger}E_{r}^{A}E_{s}^{B}E_{m}^{A}\\rho)+\\chi_{mn}^{*}\\;\\mathrm{Tr}(E_{m}^{A\\dagger}E_{r}^{A}E_{s}^{B}E_{n}^{A}\\rho)].\\] Using \\(E_{r}^{A}E_{m}^{A}=\\omega^{r_{m}}E_{m}^{A}E_{r}^{A}\\) and \\(E_{r}^{A}E_{n}^{A}=\\omega^{r_{n}}E_{n}^{A}E_{r}^{A}\\) this becomes: \\[\\mathrm{Tr}(T_{rs}\\rho_{k}) = \\frac{1}{\\mathrm{Tr}[P_{k}\\mathcal{E}(\\rho)]}\\left(\\sum_{m}\\omega^{r_{m}}\\chi_{mm}\\;\\mathrm{Tr}(T_{rs}\\rho)+\\sum_{m<n}\\left[\\omega^{r_{m}}\\chi_{mn}\\;\\mathrm{Tr}(E_{n}^{A\\dagger}E_{m}^{A}T_{rs}\\rho)+\\omega^{r_{n}}\\chi_{mn}^{*}\\;\\mathrm{Tr}(E_{m}^{A\\dagger}E_{n}^{A}T_{rs}\\rho)\\right]\\right) \\tag{7}\\] Therefore, we have an additional set of \\(d\\) equations to identify the off-diagonal elements of the superoperator, provided that \\(\\mathrm{Tr}(E_{n}^{A\\dagger}E_{m}^{A}T_{rs}\\rho)\\neq 0\\). Figure 3: A diagram of the error-detection measurement for estimating quantum dynamical population. The arrows represent the projection operators \\(P_{k}P_{k^{\\prime}}\\) corresponding to different eigenvalues of the two stabilizer generators \\(S\\) and \\(S^{\\prime}\\). These projective measurements result in a projection of the wavefunction of the two-qudit systems, after experiencing the dynamical map, into one of the orthogonal subspaces, each of which is associated with a specific error operator basis element. By calculating the joint probability distribution of all possible outcomes, \\(P_{k}P_{k^{\\prime}}\\), for \\(k,k^{\\prime}=0,\\ldots,d-1\\), we obtain all \\(d^{2}\\) diagonal elements of the superoperator in a single ensemble measurement. Suppose we now measure another unitary operator \\(T_{r^{\\prime}s^{\\prime}}=E^{A}_{r^{\\prime}}E^{B}_{s^{\\prime}}\\) that commutes with \\(S_{ij}\\), i.e., \\(T_{r^{\\prime}s^{\\prime}}\\in N(S)\\), and also commutes with \\(T_{rs}\\), and satisfies the relations \\(T_{r^{\\prime}s^{\\prime}}\\neq T^{b}_{rs}{S_{ij}}^{a}\\) (where \\(0\\leq a\\),\\(b<d\\)), \\(E^{A}_{r^{\\prime}}\\neq I\\) and \\(E^{B}_{s^{\\prime}}\\neq I\\). Such a measurement results in \\(d\\) equations for \\(\\mathrm{Tr}(T_{r^{\\prime}s^{\\prime}}\\rho_{k})\\), similar to those for \\(\\mathrm{Tr}(T_{rs}\\rho_{k})\\). However, for these equations to be useful for characterization of the dynamics, one needs to show that they are all linearly independent. In the next section, we find the maximum number of independent and commuting unitary operators \\(T_{rs}\\) such that their expectation values on the output state, \\(\\mathrm{Tr}(T_{rs}\\rho_{k})\\), result in linearly independent equations to be \\(d-1\\); see Fig. 4. I.e., we find an optimal Abelian set of unitary operators such that the joint probability distribution functions of their eigenvalues and stabilizer eigenvalues at the output state are linearly independent.
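As a numerical consistency check of the construction above, the following NumPy sketch (our own, with \\(d=3\\), a randomly generated CPTP map, and an arbitrary non-uniform choice of \\(\\alpha_{l}\\)) prepares a stabilizer eigenstate of \\(Z^{A}(Z^{B})^{d-1}\\), computes the process matrix \\(\\mathbf{\\chi}\\) of the map in the \\(X^{q}Z^{p}\\) basis, and verifies that the stabilizer-outcome probabilities \\(\\mathrm{Tr}[P_{k}\\mathcal{E}(\\rho)]\\) decompose exactly as in Eq. (6), with \\(m\\), \\(n\\) restricted to the subset \\(W_{k}^{i}\\):

```python
# A NumPy sketch (our own construction) of the Sec. V measurement for d = 3:
# a random CPTP map on qudit A is drawn, its process matrix chi in the X^q Z^p
# basis is computed, and Tr[P_k E(rho)] is checked against Eq. (6) with m, n
# restricted to W_k^i for E_i^A = Z.
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(omega ** np.arange(d))
E = lambda q, p: np.linalg.matrix_power(X, q) @ np.linalg.matrix_power(Z, p)
basis = [E(q, p) for q in range(d) for p in range(d)]       # E_m, m = q*d + p

# Non-maximally entangled stabilizer state of S = Z^A (Z^B)^2 with eigenvalue +1.
alpha = np.array([0.8, 0.5, 0.3]); alpha = alpha / np.linalg.norm(alpha)
phi = sum(a * np.kron(np.eye(d)[:, l], np.eye(d)[:, l]) for l, a in enumerate(alpha))
rho_in = np.outer(phi, phi.conj())

# Random CPTP map on A via normalized random Kraus operators.
rng = np.random.default_rng(7)
G = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(3)]
M = sum(g.conj().T @ g for g in G)
w, V = np.linalg.eigh(M)
kraus = [g @ (V @ np.diag(w ** -0.5) @ V.conj().T) for g in G]   # sum A^dag A = I

# Process matrix chi_mn = sum_a c_am c_an^*, with A_a = sum_m c_am E_m.
c = np.array([[np.trace(Em.conj().T @ A) / d for Em in basis] for A in kraus])
chi = c.T @ c.conj()

rho_out = sum(np.kron(A, np.eye(d)) @ rho_in @ np.kron(A, np.eye(d)).conj().T
              for A in kraus)

S = np.kron(Z, np.linalg.matrix_power(Z, d - 1))
P = [sum(omega ** (-l * k) * np.linalg.matrix_power(S, l) for l in range(d)) / d
     for k in range(d)]

# W_k^i for E_i = Z: the commutation phase of Z with X^q Z^p is omega^q, so k = q.
for k in range(d):
    Wk = [q * d + p for q in range(d) for p in range(d) if q == k]
    lhs = np.real(np.trace(P[k] @ rho_out))
    rhs = sum(np.real(chi[m, m]) for m in Wk)
    for a, m in enumerate(Wk):
        for n in Wk[a + 1:]:
            overlap = np.trace(np.kron(basis[n].conj().T @ basis[m], np.eye(d)) @ rho_in)
            rhs += 2 * np.real(chi[m, n] * overlap)
    assert np.isclose(lhs, rhs)
print("Eq. (6) verified for each stabilizer outcome k")
```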
## VI Linear independence and optimality of measurements Before presenting the proof of linear independence of the functions \\(\\mathrm{Tr}(T_{rs}\\rho_{k})\\) and of the optimality of the DCQD algorithm, we need to introduce the following lemmas and definitions. **Lemma 3**: If a stabilizer group, \\(S\\), has a single generator, the order of its normalizer group, \\(N(S)\\), is \\(d^{3}\\). **Proof:** Let us consider the sole stabilizer generator \\(S_{12}=E^{A}_{1}E^{B}_{2}\\), and a typical normalizer element \\(T_{1^{\\prime}2^{\\prime}}=E^{A}_{1^{\\prime}}E^{B}_{2^{\\prime}}\\), where \\(E^{A}_{1}=X^{q_{1}}Z^{p_{1}}\\), \\(E^{B}_{2}=X^{q_{2}}Z^{p_{2}}\\), \\(E^{A}_{1^{\\prime}}=X^{q^{\\prime}_{1}}Z^{p^{\\prime}_{1}}\\) and \\(E^{B}_{2^{\\prime}}=X^{q^{\\prime}_{2}}Z^{p^{\\prime}_{2}}\\). Since \\(S_{12}\\) and \\(T_{1^{\\prime}2^{\\prime}}\\) commute, we have \\(S_{12}T_{1^{\\prime}2^{\\prime}}=\\omega^{\\sum_{i=1}^{2}(p_{i}q^{\\prime}_{i}-q_{i}p^{\\prime}_{i})}T_{1^{\\prime}2^{\\prime}}S_{12}\\), where \\(\\sum_{i=1}^{2}(p_{i}q^{\\prime}_{i}-q_{i}p^{\\prime}_{i})\\equiv 0\\;(\\mathrm{mod}\\ d)\\). We note that for any particular code with a single stabilizer generator, all \\(q_{1}\\), \\(p_{1}\\), \\(q_{2}\\) and \\(p_{2}\\) are fixed. Now, by Lemma 1, for given values of \\(q^{\\prime}_{1}\\), \\(p^{\\prime}_{1}\\) and \\(q^{\\prime}_{2}\\) there is only one value for \\(p^{\\prime}_{2}\\) that satisfies the above equation. However, each of \\(q^{\\prime}_{1}\\), \\(p^{\\prime}_{1}\\) and \\(q^{\\prime}_{2}\\) can have \\(d\\) different values. Therefore, there are \\(d^{3}\\) different normalizer elements \\(T_{1^{\\prime}2^{\\prime}}\\). **Lemma 4**: Each Abelian subgroup of a normalizer, which includes the stabilizer group \\(\\{S^{a}_{ij}\\}\\) as a proper subgroup, has order \\(d^{2}\\). **Proof:** Suppose \\(T_{rs}\\) is an element of \\(N(S)\\), i.e., it commutes with \\(S_{ij}\\). Moreover, all unitary operators of the form \\(T^{b}_{rs}{S_{ij}}^{a}\\), where \\(0\\leq a\\),\\(b<d\\), also commute. Therefore, any Abelian subgroup of the normalizer, \\(A\\subset N(S)\\), which includes \\(\\{S^{a}_{ij}\\}\\) as a proper subgroup, is at least of order \\(d^{2}\\). Now let \\(T_{r^{\\prime}s^{\\prime}}\\) be any other normalizer element, i.e., \\(T_{r^{\\prime}s^{\\prime}}\\neq T^{b}_{rs}S^{a}_{ij}\\) with \\(0\\leq a\\),\\(b<d\\), which belongs to the same Abelian subgroup \\(A\\). In this case, any operator of the form \\(T^{b^{\\prime}}_{r^{\\prime}s^{\\prime}}T^{b}_{rs}S^{a}_{ij}\\) would also belong to \\(A\\). Then all elements of the normalizer would commute, i.e., \\(A=N(S)\\), which is unacceptable. Thus, either \\(T_{r^{\\prime}s^{\\prime}}=T^{b}_{rs}S^{a}_{ij}\\) or \\(T_{r^{\\prime}s^{\\prime}}\\notin A\\), i.e., the order of the Abelian subgroup \\(A\\) is at most \\(d^{2}\\). **Lemma 5**: There are \\(d+1\\) Abelian subgroups, \\(A\\), in the normalizer \\(N(S)\\). **Proof:** Suppose that the number of Abelian subgroups which include the stabilizer group as a proper subgroup is \\(n\\). Using Lemmas 3 and 4, we have: \\(d^{3}=nd^{2}-(n-1)d\\), where the term \\((n-1)d\\) has been subtracted from the total number of elements of the normalizer due to the fact that the elements of the stabilizer group are common to all Abelian subgroups. Solving this equation for \\(n\\), we find that \\(n=\\frac{d^{2}-1}{d-1}=d+1\\). **Lemma 6**: The bases of eigenvectors defined by the \\(d+1\\) Abelian subgroups of \\(N(S)\\) are mutually unbiased.
**Proof:** It has been shown [30] that if a set of \\(d^{2}-1\\) traceless and mutually orthogonal \\(d\\times d\\) unitary matrices can be partitioned into \\(d+1\\) subsets of equal size, such that the \\(d-1\\) unitary operators in each subset commute, then the bases of eigenvectors corresponding to these subsets are mutually unbiased. We note that, based on Lemmas 3, 4 and 5, and in the code space (i.e., up to multiplication by the stabilizer elements \\(\\{S^{a}_{ij}\\}\\)), the normalizer \\(N(S)\\) has \\(d^{2}-1\\) nontrivial elements, and each Abelian subgroup \\(A\\) has \\(d-1\\) nontrivial commuting operators. Thus, the bases of eigenvectors defined by the \\(d+1\\) Abelian subgroups of \\(N(S)\\) are mutually unbiased. **Lemma 7**: Let \\(C\\) be a cyclic subgroup of \\(A\\), i.e., \\(C\\subset A\\subset N(S)\\). Then, for any fixed \\(T\\in A\\), the number of distinct left (right) cosets, \\(TC\\) (\\(CT\\)), in each \\(A\\) is \\(d\\). **Proof:** We note that the order of any cyclic subgroup \\(C\\subset A\\), such as \\(T^{b}_{rs}\\) with \\(0\\leq b<d\\), is \\(d\\). Therefore, by Lemma 4, the number of distinct cosets in each \\(A\\) is \\(\\frac{d^{2}}{d}=d\\). **Definition 2**: We denote the cosets of an (invariant) cyclic subgroup, \\(C_{a}\\), of an Abelian subgroup of the normalizer, \\(A_{v}\\), by \\(A_{v}/C_{a}\\), where \\(v=1,2,\\ldots,d+1\\). We also represent generic members of \\(A_{v}/C_{a}\\) as \\(T^{b}_{rs}S^{a}_{ij}\\), where \\(0\\leq a\\),\\(b<d\\). The members of a specific coset \\(A_{v}/C_{a_{0}}\\) are denoted as \\(T^{b}_{rs}S^{a_{0}}_{ij}\\), where \\(a_{0}\\) represents a fixed power of the stabilizer generator \\(S_{ij}\\), that labels a particular coset \\(A_{v}/C_{a_{0}}\\), and \\(b\\)\\((0\\leq b<d)\\) labels different members of that particular coset. **Lemma 8**: The elements of a coset, \\(T^{b}_{rs}S^{a_{0}}_{ij}\\) (where \\(T_{rs}=E^{A}_{r}E^{B}_{s}\\), \\(S_{ij}=E^{A}_{i}E^{B}_{j}\\) and \\(0\\leq b<d\\)) anticommute with \\(E^{A}_{i}\\) with different eigenvalues \\(\\omega^{k}\\). I.e., there are no two different members of a coset, \\(A_{v}/C_{a_{0}}\\), that anticommute with \\(E^{A}_{i}\\) with the same eigenvalue. **Proof:** First we note that for each \\(T^{b}_{rs}=(E^{A}_{r})^{b}(E^{B}_{s})^{b}\\), the unitary operators acting only on the principal subsystem, \\((E^{A}_{r})^{b}\\), must satisfy either (a) \\((E^{A}_{r})^{b}=E^{A}_{i}\\) or (b) \\((E^{A}_{r})^{b}\\neq E^{A}_{i}\\). In the case (a), and due to \\([T_{rs},S_{ij}]=0\\), we should also have \\((E^{B}_{s})^{b}=E^{B}_{j}\\), which results in \\(T^{b}_{rs}=S_{ij}\\); i.e., \\(T^{b}_{rs}\\) is a stabilizer and not a normalizer. This is unacceptable. In the case (b), in particular for \\(b=1\\), we have \\(E^{A}_{r}E^{A}_{i}=\\omega^{r_{i}}E^{A}_{i}E^{A}_{r}\\). Therefore, for arbitrary \\(b\\) we have \\((E^{A}_{r})^{b}E^{A}_{i}=\\omega^{br_{i}}E^{A}_{i}(E^{A}_{r})^{b}\\). Since \\(0\\leq b<d\\), we conclude that \\(\\omega^{br_{i}}\\neq\\omega^{b^{\\prime}r_{i}}\\) for any two different values of \\(b\\) and \\(b^{\\prime}\\). As a consequence of this lemma, different \\((E^{A}_{r})^{b}\\), for \\(0\\leq b<d\\), belong to different subsets \\(W^{i}_{k}\\).
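The counting arguments of Lemmas 3-5 can also be confirmed by brute force in the symplectic representation of two-qudit Pauli operators (phases ignored); the following short Python sketch (our own, with \\(d=3\\) and the generator \\(X^{A}X^{B}\\) chosen arbitrarily) verifies that \\(|N(S)|=d^{3}\\), that each maximal Abelian subgroup containing \\(\\{S^{a}\\}\\) has \\(d^{2}\\) elements, and that there are \\(d+1\\) such subgroups:

```python
# A brute-force check (our own, phases ignored) of Lemmas 3-5 for d = 3, using
# the exponent vectors (q1, p1, q2, p2) of two-qudit Pauli operators: two such
# operators commute iff sum_i (p_i q_i' - q_i p_i') = 0 (mod d), cf. Eq. (2).
from itertools import product

d = 3
S = (1, 0, 1, 0)              # e.g. the generator X^A X^B; any nontrivial S works

def commute(u, v):
    (q1, p1, q2, p2), (r1, s1, r2, s2) = u, v
    return (p1 * r1 - q1 * s1 + p2 * r2 - q2 * s2) % d == 0

paulis = list(product(range(d), repeat=4))
normalizer = [t for t in paulis if commute(S, t)]
assert len(normalizer) == d ** 3                      # Lemma 3: |N(S)| = d^3

def span(u, v):
    """The subgroup generated by u and v (as exponent vectors mod d)."""
    return frozenset(tuple((a * x + b * y) % d for x, y in zip(u, v))
                     for a in range(d) for b in range(d))

stab = span(S, (0, 0, 0, 0))                          # the stabilizer group {S^a}
abelian = {span(S, t) for t in normalizer
           if t not in stab and all(commute(t, u) for u in span(S, t))}
assert all(len(A) == d * d for A in abelian)          # Lemma 4: order d^2
assert len(abelian) == d + 1                          # Lemma 5: d + 1 subgroups
print("|N(S)| = d^3 and", len(abelian), "maximal Abelian subgroups containing <S>")
```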
**Lemma 9**: For any fixed unitary operator \\(E^{A}_{r}\\in W^{i}_{k}\\), where \\(k\\neq 0\\), and any other two independent operators \\(E^{A}_{m}\\) and \\(E^{A}_{n}\\) that belong to the same \\(W^{i}_{k}\\), we always have \\(\\omega^{r_{m}}\\neq\\omega^{r_{n}}\\), where \\(E^{A}_{r}E^{A}_{m}=\\omega^{r_{m}}E^{A}_{m}E^{A}_{r}\\) and \\(E^{A}_{r}E^{A}_{n}=\\omega^{r_{n}}E^{A}_{n}E^{A}_{r}\\). **Proof:** We need to prove, for operators \\(E^{A}_{r}\\), \\(E^{A}_{m}\\), \\(E^{A}_{n}\\in W^{i}_{k}\\) (where \\(k\\neq 0\\)), that we always have: \\(E^{A}_{m}\\neq E^{A}_{n}\\Longrightarrow\\omega^{r_{m}}\\neq\\omega^{r_{n}}\\). Let us prove the contrapositive: \\(\\omega^{r_{m}}=\\omega^{r_{n}}\\Longrightarrow E^{A}_{m}=E^{A}_{n}\\). We define \\(E^{A}_{i}=X^{q_{i}}Z^{p_{i}}\\), \\(E^{A}_{r}=X^{q_{r}}Z^{p_{r}}\\), \\(E^{A}_{m}=X^{q_{m}}Z^{p_{m}}\\) and \\(E^{A}_{n}=X^{q_{n}}Z^{p_{n}}\\). Based on the definition of the subsets \\(W^{i}_{k}\\) with \\(k\\neq 0\\), we have: \\(p_{i}q_{m}-q_{i}p_{m}\\equiv p_{i}q_{n}-q_{i}p_{n}=k\\ (\\mathrm{mod}\\ d)=k+td\\) (I), where \\(t\\) is an integer. We need to show that if \\(p_{r}q_{m}-q_{r}p_{m}\\equiv p_{r}q_{n}-q_{r}p_{n}=k^{\\prime}\\ (\\mathrm{mod}\\ d)=k^{\\prime}+t^{\\prime}d\\) (II), then \\(E^{A}_{m}=E^{A}_{n}\\). We divide the equations (I) by \\(q_{i}q_{m}\\) or \\(q_{i}q_{n}\\) to get: \\(\\frac{p_{i}}{q_{i}}=\\frac{k+td}{q_{i}q_{m}}+\\frac{p_{m}}{q_{m}}=\\frac{k+td}{q_{i}q_{n}}+\\frac{p_{n}}{q_{n}}\\) (I'). We also divide the equations (II) by \\(q_{r}q_{m}\\) or \\(q_{r}q_{n}\\) to get: \\(\\frac{p_{r}}{q_{r}}=\\frac{k^{\\prime}+t^{\\prime}d}{q_{r}q_{m}}+\\frac{p_{m}}{q_{m}}=\\frac{k^{\\prime}+t^{\\prime}d}{q_{r}q_{n}}+\\frac{p_{n}}{q_{n}}\\) (II'). By subtracting the equation (II') from (I') we get: \\(q_{n}(\\frac{k+td}{q_{i}}-\\frac{k^{\\prime}+t^{\\prime}d}{q_{r}})=q_{m}(\\frac{k+td}{q_{i}}-\\frac{k^{\\prime}+t^{\\prime}d}{q_{r}})\\) (1). Similarly, we can obtain the equation \\(p_{n}(\\frac{k+td}{p_{i}}-\\frac{k^{\\prime}+t^{\\prime}d}{p_{r}})=p_{m}(\\frac{k+td}{p_{i}}-\\frac{k^{\\prime}+t^{\\prime}d}{p_{r}})\\) (2). Note that the expressions within the brackets in equations (1) and (2) cannot be simultaneously zero, because that would result in \\(p_{i}q_{r}-q_{i}p_{r}=0\\), which is unacceptable for \\(k\\neq 0\\). Therefore, the expression within the brackets in at least one of the equations (1) or (2) is non-zero. This results in \\(q_{n}=q_{m}\\) and/or \\(p_{n}=p_{m}\\). Consequently, considering the equation (I), we have \\(E^{A}_{m}=E^{A}_{n}\\). ### Linear independence of the joint distribution functions **Theorem 1**: _The expectation values of normalizer elements on a post-measurement state, \\(\\rho_{k}\\), are linearly independent if these elements are the \\(d-1\\) nontrivial members of a coset \\(A_{v}/C_{a_{0}}\\). I.e., for two independent operators \\(T_{rs}\\), \\(T_{r^{\\prime}s^{\\prime}}\\in A_{v}/C_{a_{0}}\\), we have \\(\\mathrm{Tr}(T_{rs}\\rho_{k})\\neq c\\;\\mathrm{Tr}(T_{r^{\\prime}s^{\\prime}}\\rho_{k})\\), where \\(c\\) is an arbitrary complex number._ **Proof:** We know that the elements of a coset can be written as \\(T^{b}_{rs}S^{a_{0}}_{ij}=(E^{A}_{r}E^{B}_{s})^{b}S^{a_{0}}_{ij}\\), where \\(b=1,2,\\ldots,d-1\\). We also proved that \\((E^{A}_{r})^{b}\\) belongs to a different \\(W^{i}_{k}\\) (\\(k\\neq 0\\)) for different values of \\(b\\) (see Lemma 8).
Therefore, according to Lemma 9 and regardless of the outcome \\(k\\) (after measuring the stabilizer \\(S_{ij}\\)), each member of the coset \\(A_{v}/C_{a_{0}}\\) anticommutes with the (independent) members \\(E^{A}_{m}\\in W^{i}_{k}\\) with distinct eigenvalues \\(\\omega^{r_{m}}\\). The expectation value of \\(T^{b}_{rs}S^{a_{0}}_{ij}\\) is: \\[\\mathrm{Tr}(T^{b}_{rs}S^{a_{0}}_{ij}\\rho_{k}) = \\sum_{m}\\chi_{mm}\\;\\mathrm{Tr}(E^{A\\,\\dagger}_{m}T^{b}_{rs}S^{a_{0}}_{ij}E^{A}_{m}\\rho)+\\sum_{m<n}[\\chi_{mn}\\;\\mathrm{Tr}(E^{A\\,\\dagger}_{n}T^{b}_{rs}S^{a_{0}}_{ij}E^{A}_{m}\\rho)+\\chi^{*}_{mn}\\;\\mathrm{Tr}(E^{A\\,\\dagger}_{m}T^{b}_{rs}S^{a_{0}}_{ij}E^{A}_{n}\\rho)], \\tag{8}\\] \\[\\mathrm{Tr}(T^{b}_{rs}\\rho_{k}) = \\sum_{m}\\omega^{br_{m}}\\chi_{mm}\\;\\mathrm{Tr}(T^{b}_{rs}\\rho)+\\sum_{m<n}[\\omega^{br_{m}}\\chi_{mn}\\;\\mathrm{Tr}(E^{A\\,\\dagger}_{n}E^{A}_{m}T^{b}_{rs}\\rho)+\\omega^{br_{n}}\\chi^{*}_{mn}\\;\\mathrm{Tr}(E^{A\\,\\dagger}_{m}E^{A}_{n}T^{b}_{rs}\\rho)], \\tag{9}\\] where \\(\\omega^{r_{m}}\\neq\\omega^{r_{n}}\\neq\\ldots\\) for all elements \\(E^{A}_{m},E^{A}_{n},\\ldots\\) that belong to a specific \\(W^{i}_{k}\\). Therefore, for two independent members of a coset denoted by \\(b\\) and \\(b^{\\prime}\\) (i.e., \\(b\\neq b^{\\prime}\\)), we have \\((\\omega^{b^{\\prime}r_{m}},\\omega^{b^{\\prime}r_{n}},\\ldots)\\neq c\\;(\\omega^{br_{m}},\\omega^{br_{n}},\\ldots)\\) for all values of \\(0\\leq b\\),\\(b^{\\prime}<d\\), and any complex number \\(c\\). We also note that we have \\(\\mathrm{Tr}(E^{A\\,\\dagger}_{n}E^{A}_{m}T^{b}_{rs}\\rho)\\neq c\\;\\mathrm{Tr}(E^{A\\,\\dagger}_{n}E^{A}_{m}T^{b^{\\prime}}_{rs}\\rho)\\), since \\(T^{b^{\\prime}-b}_{rs}\\) is a normalizer, not a stabilizer element, and its action on the state cannot be expressed as a global phase. Thus, for any two independent members of a coset \\(A_{v}/C_{a_{0}}\\), we always have \\(\\mathrm{Tr}(T^{b^{\\prime}}_{rs}\\rho_{k})\\neq c\\;\\mathrm{Tr}(T^{b}_{rs}\\rho_{k})\\). In summary, after the action of the unknown dynamical process, we measure the eigenvalues of the stabilizer generator, \\(E^{A}_{i}E^{B}_{j}\\), which has \\(d\\) eigenvalues \\(\\omega^{k}\\), \\(k=0,1,\\ldots,d-1\\), and provides \\(d\\) linearly independent equations for the real and imaginary parts of \\(\\chi_{mn}\\). This is due to the fact that the outcomes corresponding to different eigenvalues of a unitary operator are independent. Figure 4: A diagram of the error-detection measurement for estimating quantum dynamical coherence: we measure the sole stabilizer generator at the output state, by applying projection operators \\(P_{k}\\) corresponding to its different eigenvalues. We also measure \\(d-1\\) commuting operators that belong to the normalizer group. Finally, we calculate the probability of each stabilizer outcome, and joint probability distributions of the normalizers and the stabilizer outcomes. Optimally, we can obtain \\(d^{2}\\) linearly independent equations by appropriate selection of the normalizer operators, as shown in the next section. We also measure expectation values of all the \\(d-1\\) independent and commuting normalizer operators \\(T_{rs}^{b}S_{ij}^{a_{0}}\\in A_{v}/C_{a_{0}}\\) on the post-measurement state \\(\\rho_{k}\\), which provides \\(d-1\\) linearly independent equations for each outcome \\(k\\) of the stabilizer measurements.
Overall, we obtain \\(d+d(d-1)=d^{2}\\) linearly independent equations for characterization of the real and imaginary parts of \\(\\chi_{mn}\\) by a single ensemble measurement. In the following, we show that the above algorithm is optimal. I.e., there does not exist any other possible strategy that can provide more than \\(\\log_{2}d^{2}\\) bits of information by a single measurement on the output state \\(\\mathcal{E}(\\rho)\\). ### Optimality **Theorem 2**: _The maximum number of commuting normalizer elements that can be measured simultaneously to provide linearly independent equations for the joint distribution functions \\(\\operatorname{Tr}(T_{rs}^{b}S_{ij}^{a}\\rho_{k})\\) is \\(d-1\\)._ **Proof:** Any Abelian subgroup of the normalizer has order \\(d^{2}\\) (see Lemma 4). Therefore, the desired normalizer operators should all belong to a particular \\(A_{v}\\) and are limited to \\(d^{2}\\) members. We already showed that the outcomes of measurements for \\(d-1\\) elements of a coset \\(A_{v}/C_{a}\\), represented by \\(T_{rs}^{b}S_{ij}^{a}\\) (with \\(b\\neq 0\\)), are independent (see Theorem 1). Now we show that measuring any other operator, \\(T_{rs}^{b}S_{ij}^{a^{\\prime}}\\), from any other coset \\(A_{v}/C_{a^{\\prime}}\\), results in linearly dependent equations for the functions \\(w=\\operatorname{Tr}(T_{rs}^{b}S_{ij}^{a}\\rho_{k})\\) and \\(w^{\\prime}=\\operatorname{Tr}(T_{rs}^{b}S_{ij}^{a^{\\prime}}\\rho_{k})\\), as follows: \\[w=\\operatorname{Tr}(T_{rs}^{b}S_{ij}^{a}\\rho_{k})=\\sum_{m}\\chi_{mm}\\operatorname{Tr}(E_{m}^{A\\dagger}T_{rs}^{b}S_{ij}^{a}E_{m}^{A}\\rho)+\\sum_{m<n}[\\chi_{mn}\\operatorname{Tr}(E_{n}^{A\\dagger}T_{rs}^{b}S_{ij}^{a}E_{m}^{A}\\rho)+\\chi_{mn}^{*}\\operatorname{Tr}(E_{m}^{A\\dagger}T_{rs}^{b}S_{ij}^{a}E_{n}^{A}\\rho)]\\] \\[w^{\\prime}=\\operatorname{Tr}(T_{rs}^{b}S_{ij}^{a^{\\prime}}\\rho_{k})=\\sum_{m}\\chi_{mm}\\operatorname{Tr}(E_{m}^{A\\dagger}T_{rs}^{b}S_{ij}^{a^{\\prime}}E_{m}^{A}\\rho)+\\sum_{m<n}[\\chi_{mn}\\operatorname{Tr}(E_{n}^{A\\dagger}T_{rs}^{b}S_{ij}^{a^{\\prime}}E_{m}^{A}\\rho)+\\chi_{mn}^{*}\\operatorname{Tr}(E_{m}^{A\\dagger}T_{rs}^{b}S_{ij}^{a^{\\prime}}E_{n}^{A}\\rho)].\\] Using the commutation relations \\(T_{rs}^{b}S_{ij}^{a}E_{m}^{A}=\\omega^{br_{m}+ai_{m}}E_{m}^{A}T_{rs}^{b}S_{ij}^{a}\\), we obtain: \\[w=\\sum_{m}\\omega^{br_{m}+ai_{m}}\\chi_{mm}\\operatorname{Tr}(T_{rs}^{b}\\rho)+\\sum_{m<n}[\\omega^{br_{m}+ai_{m}}\\chi_{mn}\\operatorname{Tr}(E_{n}^{A\\dagger}E_{m}^{A}T_{rs}^{b}\\rho)+\\omega^{br_{n}+ai_{n}}\\chi_{mn}^{*}\\operatorname{Tr}(E_{m}^{A\\dagger}E_{n}^{A}T_{rs}^{b}\\rho)]\\] \\[w^{\\prime}=\\sum_{m}\\omega^{br_{m}+a^{\\prime}i_{m}}\\chi_{mm}\\operatorname{Tr}(T_{rs}^{b}\\rho)+\\sum_{m<n}[\\omega^{br_{m}+a^{\\prime}i_{m}}\\chi_{mn}\\operatorname{Tr}(E_{n}^{A\\dagger}E_{m}^{A}T_{rs}^{b}\\rho)+\\omega^{br_{n}+a^{\\prime}i_{n}}\\chi_{mn}^{*}\\operatorname{Tr}(E_{m}^{A\\dagger}E_{n}^{A}T_{rs}^{b}\\rho)],\\] where we also used the fact that both \\(S_{ij}^{a}\\) and \\(S_{ij}^{a^{\\prime}}\\) are stabilizer elements.
Since all of the operators \\(E_{m}^{A}\\) belong to the same \\(W_{k}^{i}\\), we have \\(i_{m}=i_{n}=k\\), and obtain: \\[w=\\omega^{ak}\\left(\\sum_{m}\\omega^{br_{m}}\\chi_{mm}\\operatorname{Tr}(T_{rs}^{b}\\rho)+\\sum_{m<n}[\\omega^{br_{m}}\\chi_{mn}\\operatorname{Tr}(E_{n}^{A\\dagger}E_{m}^{A}T_{rs}^{b}\\rho)+\\omega^{br_{n}}\\chi_{mn}^{*}\\operatorname{Tr}(E_{m}^{A\\dagger}E_{n}^{A}T_{rs}^{b}\\rho)]\\right)\\] \\[w^{\\prime}=\\omega^{a^{\\prime}k}\\left(\\sum_{m}\\omega^{br_{m}}\\chi_{mm}\\operatorname{Tr}(T_{rs}^{b}\\rho)+\\sum_{m<n}[\\omega^{br_{m}}\\chi_{mn}\\operatorname{Tr}(E_{n}^{A\\dagger}E_{m}^{A}T_{rs}^{b}\\rho)+\\omega^{br_{n}}\\chi_{mn}^{*}\\operatorname{Tr}(E_{m}^{A\\dagger}E_{n}^{A}T_{rs}^{b}\\rho)]\\right).\\] Thus, we have \\(w^{\\prime}=\\omega^{(a^{\\prime}-a)k}w\\), and consequently the measurements of operators from other cosets \\(A_{v}/C_{a^{\\prime}}\\) do not provide any new information about \\(\\chi_{mn}\\) beyond the corresponding measurements from the coset \\(A_{v}/C_{a}\\). For another proof of the optimality, based on the fundamental limitation on transferring information between two parties given by the Holevo bound, see Ref. [26]. In principle, one can construct a set of _non-Abelian_ normalizer measurements, from different \\(A_{v}\\), where \\(v=1,2,\\ldots,d+1\\), to obtain information about the off-diagonal elements \\(\\chi_{mn}\\). However, determining the eigenvalues of a set of noncommuting operators cannot be done via a single measurement. Moreover, as mentioned above, by measuring the stabilizer and \\(d-1\\) Abelian normalizers, one can obtain \\(\\log_{2}d^{2}\\) bits of classical information, which is the maximum allowed by the Holevo bound [31]. Therefore, other strategies involving non-Abelian, or a mixture of Abelian and non-Abelian, normalizer measurements cannot improve the above scheme. It should be noted that there are several possible alternative sets of Abelian normalizers that are equivalent for this task. We address this issue in the next lemma. **Lemma 10**: The number of alternative sets of Abelian normalizer measurements that can provide optimal information about quantum dynamics, in one ensemble measurement, is \\(d^{2}\\). **Proof:** We have \\(d+1\\) Abelian normalizers \\(A_{v}\\) (see Lemma 5). However, there are \\(d\\) of them that contain unitary operators that act nontrivially on both qudit systems \\(A\\) and \\(B\\), i.e., \\(T_{rs}^{b}=(E_{r}^{A}E_{s}^{B})^{b}\\), where \\(E_{r}^{A}\\neq I\\), \\(E_{s}^{B}\\neq I\\). Moreover, in each \\(A_{v}\\) we have \\(d\\) cosets (see Lemma 7) that can be used for optimal characterization of \\(\\chi_{mn}\\). Overall, we have \\(d^{2}\\) possible sets of Abelian normalizers that are equivalent for our purpose. In the next section, we develop the algorithm further to obtain complete information about the off-diagonal elements of the superoperator by repeating the above scheme for different input states. ## VII Repeating the algorithm for other stabilizer states We have shown that by performing one ensemble measurement one can obtain \\(d^{2}\\) linearly independent equations for \\(\\chi_{mn}\\). However, a complete characterization of quantum dynamics requires obtaining \\(d^{4}-d^{2}\\) independent real parameters of the superoperator (or \\(d^{4}\\) for non-trace-preserving maps). We next show how one can obtain complete information by appropriately rotating the input state and repeating the above algorithm for a complete set of rotations.
**Lemma 11**: The number of independent eigenkets for the error operator basis \\(\\{E_{j}\\}\\), where \\(j=1,2,\\ldots,d^{2}-1\\), is \\(d+1\\). These eigenkets are mutually unbiased. **Proof:** We have \\(d^{2}-1\\) unitary operators \\(E_{i}\\). We note that the operators \\(E_{i}^{a}\\) for all values of \\(1\\leq a\\leq d-1\\) commute and have a common eigenket. Therefore, overall we have \\((d^{2}-1)/(d-1)=d+1\\) independent eigenkets. Moreover, it has been shown [30] that if a set of \\(d^{2}-1\\) traceless and mutually orthogonal \\(d\\times d\\) unitary matrices can be partitioned into \\(d+1\\) subsets of equal size, such that the \\(d-1\\) unitary operators in each subset commute, then the bases of eigenvectors defined by these subsets are mutually unbiased. Let us construct a set of \\(d+1\\) stabilizer operators \\(E_{i}^{A}E_{j}^{B}\\), such that the following conditions hold: (a) \\(E_{i}^{A},E_{j}^{B}\\neq I\\), (b) \\((E_{i}^{A})^{a}\\neq E_{i^{\\prime}}^{A}\\) for \\(i\\neq i^{\\prime}\\) and \\(1\\leq a\\leq d-1\\). Then, by preparing the eigenstates of these \\(d+1\\) independent stabilizer operators, one at a time, and measuring the eigenvalues of \\(S_{ij}\\) and its corresponding \\(d-1\\) normalizer operators \\(T_{rs}^{b}S_{ij}^{a}\\in A_{v}/C_{a}\\), one can obtain \\((d+1)d^{2}\\) _linearly independent_ equations to characterize the superoperator's off-diagonal elements. The linear independence of these equations can be understood by noting that the eigenstates of all operators \\(E_{i}^{A}\\) of these \\(d+1\\) stabilizer operators \\(S_{ij}\\) are mutually unbiased (i.e., the measurements in these mutually unbiased bases are maximally non-commuting). For example, the bases \\(\\{|0\\rangle,|1\\rangle\\}\\), \\(\\{|+\\rangle_{X},|-\\rangle_{X}\\}\\) and \\(\\{|+\\rangle_{Y},|-\\rangle_{Y}\\}\\) (the eigenstates of the Pauli operators \\(Z\\), \\(X\\), and \\(Y\\)) are _mutually unbiased_, i.e., the inner products of each pair of elements in these bases have the same magnitude. Then measurements in these bases are maximally non-commuting [32]. To obtain complete information about the quantum dynamical coherence, we again prepare the eigenkets of the above \\(d+1\\) stabilizer operators \\(E_{i}^{A}E_{j}^{B}\\), but after the stabilizer measurement we calculate the expectation values of the operators \\(T_{r^{\\prime}s^{\\prime}}^{b}S_{ij}^{a}\\) belonging to other Abelian subgroups \\(A_{v^{\\prime}}/C_{a}\\) of the normalizer, i.e., \\(A_{v^{\\prime}}\\neq A_{v}\\). According to Lemma 6 the bases of different Abelian subgroups of the normalizer are mutually unbiased; therefore, the expectation values of \\(T_{r^{\\prime}s^{\\prime}}^{b}S_{ij}^{a}\\) and \\(T_{rs}^{b}S_{ij}^{a}\\) from different Abelian subgroups \\(A_{v^{\\prime}}\\) and \\(A_{v}\\) are independent. In order to make the stabilizer measurements also independent, we choose a different superposition of the logical basis in the preparation of the \\(d+1\\) possible stabilizer states in each run. Therefore in each of these measurements we can obtain at most \\(d^{2}\\) linearly independent equations. By repeating these measurements for \\(d-1\\) different \\(A_{v}\\) over all \\(d+1\\) possible input stabilizer states, we obtain \\((d+1)(d-1)d^{2}=d^{4}-d^{2}\\) linearly independent equations, which suffice to fully characterize all independent parameters of the superoperator's off-diagonal elements.
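Lemma 11 (and Lemma 6) can be illustrated numerically; the following NumPy sketch (our own, for \\(d=3\\)) builds the \\(d+1\\) eigenbases of \\(Z\\), \\(X\\), \\(XZ\\) and \\(XZ^{2}\\) and checks that every pair of vectors drawn from two different bases has overlap \\(|\\langle a|b\\rangle|^{2}=1/d\\):

```python
# A NumPy sketch (our illustration) of Lemma 11 for d = 3: the eigenbases of
# Z, X, XZ and XZ^2 give d + 1 bases, and any two vectors taken from two
# different bases have overlap |<a|b>|^2 = 1/d, i.e. the bases are mutually unbiased.
import numpy as np
from itertools import combinations

d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(omega ** np.arange(d))

def eigenbasis(U):
    """Orthonormal eigenbasis of a unitary with non-degenerate spectrum."""
    _, vecs = np.linalg.eig(U)
    return [vecs[:, i] / np.linalg.norm(vecs[:, i]) for i in range(d)]

bases = [eigenbasis(Z), eigenbasis(X),
         eigenbasis(X @ Z), eigenbasis(X @ np.linalg.matrix_power(Z, 2))]

for B1, B2 in combinations(bases, 2):
    for a in B1:
        for b in B2:
            assert np.isclose(abs(np.vdot(a, b)) ** 2, 1.0 / d)
print(len(bases), "mutually unbiased bases verified for d =", d)
```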
In the next section, we address the general properties of these \\(d+1\\) stabilizer states. ## VIII General constraints on the stabilizer states The restrictions on the stabilizer states \\(\\rho\\) can be expressed as follows: **Condition 1**: The state \\(\\rho=\\left|\\phi_{ij}\\right\\rangle\\left\\langle\\phi_{ij}\\right|\\) is a non-separable pure state in the Hilbert space of the two-qudit system \\(\\mathcal{H}\\). I.e., \\(\\left|\\phi_{ij}\\right\\rangle_{AB}\\neq\\left|\\phi\\right\\rangle_{A}\\otimes\\left|\\varphi\\right\\rangle_{B}\\). **Condition 2**: The state \\(\\left|\\phi_{ij}\\right\\rangle\\) is a stabilizer state with a sole stabilizer generator \\(S_{ij}=E_{i}^{A}E_{j}^{B}\\). I.e., it satisfies \\(S_{ij}^{a}\\left|\\phi_{ij}\\right\\rangle=\\omega^{ak}\\left|\\phi_{ij}\\right\\rangle\\), where \\(k\\in\\{0,1,\\ldots,d-1\\}\\) labels a fixed eigenvalue \\(\\omega^{k}\\) of \\(S_{ij}\\), and \\(a=1,\\ldots,d-1\\) labels the \\(d-1\\) nontrivial members of the stabilizer group. The second condition specifies the stabilizer subspace, \\(V_{S}\\), that the state \\(\\rho\\) lives in, which is the subspace fixed by all the elements of the stabilizer group with the fixed eigenvalues \\(\\omega^{ak}\\). More specifically, an arbitrary state in the entire Hilbert space \\(\\mathcal{H}\\) can be written as \\(\\left|\\phi\\right\\rangle=\\sum\\limits_{u,u^{\\prime}=0}^{d-1}\\alpha_{uu^{\\prime}}\\left|u\\right\\rangle_{A}\\left|u^{\\prime}\\right\\rangle_{B}\\), where \\(\\{\\left|u\\right\\rangle\\}\\) and \\(\\{\\left|u^{\\prime}\\right\\rangle\\}\\) are bases for the Hilbert spaces of the qudits \\(A\\) and \\(B\\), such that \\(X^{q}\\left|u\\right\\rangle=\\left|u+q\\right\\rangle\\) and \\(Z^{p}\\left|u\\right\\rangle=\\omega^{pu}\\left|u\\right\\rangle\\). However, we can expand \\(\\left|\\phi\\right\\rangle\\) in another basis as \\(\\left|\\phi\\right\\rangle=\\sum\\limits_{v,v^{\\prime}=0}^{d-1}\\beta_{vv^{\\prime}}\\left|v\\right\\rangle_{A}\\left|v^{\\prime}\\right\\rangle_{B}\\), such that \\(X^{q}\\left|v\\right\\rangle=\\omega^{qv}\\left|v\\right\\rangle\\) and \\(Z^{p}\\left|v\\right\\rangle=\\left|v+p\\right\\rangle\\). Let us consider a stabilizer state fixed under the action of a unitary operator \\(E_{i}^{A}E_{j}^{B}=(X^{A})^{q}(X^{B})^{q^{\\prime}}(Z^{A})^{p}(Z^{B})^{p^{\\prime}}\\) with eigenvalue \\(\\omega^{k}\\). Regardless of the basis chosen to expand \\(\\left|\\phi_{ij}\\right\\rangle\\), we should always have \\(S_{ij}\\left|\\phi_{ij}\\right\\rangle=\\omega^{k}\\left|\\phi_{ij}\\right\\rangle\\). Consequently, we have the constraints \\(pu\\oplus p^{\\prime}u^{\\prime}=k\\), for the stabilizer subspace \\(V_{S}\\) spanned by the \\(\\{\\left|u\\right\\rangle\\otimes\\left|u^{\\prime}\\right\\rangle\\}\\) basis, and \\(q(v\\oplus p)\\oplus q^{\\prime}(v^{\\prime}\\oplus p^{\\prime})=k\\), if \\(V_{S}\\) is spanned by the \\(\\{\\left|v\\right\\rangle\\otimes\\left|v^{\\prime}\\right\\rangle\\}\\) basis, where \\(\\oplus\\) is addition \\(\\mathrm{mod}(d)\\). From these relations, and also using the fact that the bases \\(\\{\\left|v\\right\\rangle\\}\\) and \\(\\{\\left|u\\right\\rangle\\}\\) are related by a unitary transformation, one can find the general properties of \\(V_{S}\\) for a given stabilizer generator \\(E_{i}^{A}E_{j}^{B}\\) and a given \\(k\\).
We have already shown that the stabilizer states \\(\\rho\\) should also satisfy the set of conditions \\(\\text{Tr}[E_{n}^{A\\dagger}E_{m}^{A}\\rho]\\neq 0\\) and \\(\\text{Tr}(E_{n}^{A\\dagger}E_{m}^{A}T_{rs}^{b}\\rho)\\neq 0\\) for all operators \\(E_{m}^{A}\\) belonging to the same \\(W_{k}^{i}\\), where \\(T_{rs}^{b}\\) (\\(0<b\\leq d-1\\)) are the members of a particular coset \\(A_{v}/C_{a}\\) of an Abelian subgroup, \\(A_{v}\\), of the normalizer \\(N(S)\\). These relations can be expressed more compactly as: **Condition 3**: For the stabilizer state \\(\\rho=\\left|\\phi_{ij}\\right\\rangle\\left\\langle\\phi_{ij}\\right|\\equiv\\left|\\phi_{c}\\right\\rangle\\left\\langle\\phi_{c}\\right|\\) and for all \\(E_{m}^{A}\\in W_{k}^{i}\\) we have: \\[\\left\\langle\\phi_{c}\\right|E_{n}^{A\\dagger}E_{m}^{A}T_{rs}^{b}\\left|\\phi_{c}\\right\\rangle\\neq 0, \\tag{10}\\] where \\(0\\leq b\\leq d-1\\). Before developing the implications of the above formula for the stabilizer states we give the following definition and lemma. **Definition 3**: Let \\(\\{\\left|l\\right\\rangle_{L}\\}\\) be the logical basis of the code space that is fixed by the stabilizer generator \\(E_{i}^{A}E_{j}^{B}\\). The stabilizer state in that basis can be written as \\(\\left|\\phi_{c}\\right\\rangle=\\sum\\limits_{l=0}^{d-1}\\alpha_{l}\\left|l\\right\\rangle_{L}\\), and all the normalizer operators, \\(T_{rs}\\), can be generated from tensor products of the logical operations \\(\\overline{X}\\) and \\(\\overline{Z}\\) defined as \\(\\overline{Z}\\left|l\\right\\rangle_{L}=\\omega^{l}\\left|l\\right\\rangle_{L}\\) and \\(\\overline{X}\\left|l\\right\\rangle_{L}=\\left|l+1\\right\\rangle_{L}\\). For example: \\(\\left|l\\right\\rangle_{L}=\\left|l\\right\\rangle\\left|l\\right\\rangle\\), \\(\\overline{Z}=Z\\otimes I\\) and \\(\\overline{X}=X\\otimes X\\), where \\(X\\left|k\\right\\rangle=\\left|k+1\\right\\rangle\\) and \\(Z\\left|k\\right\\rangle=\\omega^{k}\\left|k\\right\\rangle\\). **Lemma 12**: For a stabilizer generator \\(E_{i}^{A}E_{j}^{B}\\) and all unitary operators \\(E_{m}^{A}\\in W_{k}^{i}\\), we always have \\(E_{n}^{A\\dagger}E_{m}^{A}=\\omega^{c}\\overline{Z}^{a}\\), where \\(\\overline{Z}\\) is the logical \\(Z\\) operation acting on the code space and \\(a\\) and \\(c\\) are integers. **Proof:** Let us consider \\(E_{i}^{A}=X^{q_{i}}Z^{p_{i}}\\), and two generic operators \\(E_{n}^{A}\\) and \\(E_{m}^{A}\\) that belong to \\(W_{k}^{i}\\): \\(E_{m}^{A}=X^{q_{m}}Z^{p_{m}}\\) and \\(E_{n}^{A}=X^{q_{n}}Z^{p_{n}}\\). From the definition of \\(W_{k}^{i}\\) (see Definition 1) we have \\(p_{i}q_{m}-q_{i}p_{m}=p_{i}q_{n}-q_{i}p_{n}=k\\ (\\mathrm{mod}\\ d)=k+td\\). We can combine these two equations to get: \\(q_{m}-q_{n}=q_{i}(p_{m}q_{n}-q_{m}p_{n})/(k+td)\\) and \\(p_{m}-p_{n}=p_{i}(p_{m}q_{n}-q_{m}p_{n})/(k+td)\\). We also define \\(p_{m}q_{n}-q_{m}p_{n}=k^{\\prime}+t^{\\prime}d\\). Therefore, we obtain \\(q_{m}-q_{n}=q_{i}a\\) and \\(p_{m}-p_{n}=p_{i}a\\), where we have introduced \\[a=(k^{\\prime}+t^{\\prime}d)/(k+td). \\tag{11}\\] Moreover, we have \\(E_{n}^{A\\dagger}=X^{(t^{\\prime\\prime}d-q_{n})}Z^{(t^{\\prime\\prime}d-p_{n})}\\) for some other integer \\(t^{\\prime\\prime}\\). Then we get: \\[E_{n}^{A\\dagger}E_{m}^{A} = \\omega^{c}X^{(t^{\\prime\\prime}d+q_{m}-q_{n})}Z^{(t^{\\prime\\prime}d+p_{m}-p_{n})}\\] \\[= \\omega^{c}X^{(q_{m}-q_{n})}Z^{(p_{m}-p_{n})}\\] \\[= \\omega^{c}(X^{q_{i}}Z^{p_{i}})^{a},\\] where \\(c=(t^{\\prime\\prime}d-p_{n})(t^{\\prime\\prime}d+q_{m}-q_{n})\\).
However, \\(X^{q_{i}}Z^{p_{i}}\\otimes I\\) acts as logical \\(\\overline{Z}\\) on the code subspace, which is the eigenstate of \\(E_{i}^{A}E_{j}^{B}\\). Thus, we obtain \\(E_{n}^{A\\dagger}E_{m}^{A}=\\omega^{c}\\overline{Z}^{a}\\). Based on the above lemma, for the case of \\(b=0\\) we obtain \\[\\left\\langle\\phi_{c}\\right|E_{n}^{A\\dagger}E_{m}^{A}\\left|\\phi_{c}\\right\\rangle =\\omega^{c}\\left\\langle\\phi_{c}\\right|\\overline{Z}^{a}\\left|\\phi_{c}\\right\\rangle =\\omega^{c}\\sum\\limits_{l=0}^{d-1}\\omega^{al}\\left|\\alpha_{l}\\right|^{2}.\\] Therefore, our constraint in this case becomes \\(\\sum\ olimits_{k=0}^{d-1}\\omega^{al}\\left|\\alpha_{l}\\right|^{2}\ eq 0\\), which is not satisfied if the stabilizer state is maximally entangled. For \\(b\ eq 0\\), we note that \\(T_{rs}^{b}\\) are in fact the normalizers. By considering the general form of the normalizer elements as \\(T_{rs}^{b}=(\\overline{X}^{q}\\overline{Z}^{p})^{b}\\), where \\(q\\), \\(p\\in\\{0,1,\\ldots,d-1\\}\\), we obtain: \\[\\left\\langle\\phi_{c}\\right|E_{n}^{A\\dagger}E_{m}^{A}T_{rs}^{b} \\left|\\phi_{c}\\right\\rangle = \\omega^{c}\\left\\langle\\phi_{c}\\right|\\overline{Z}^{a}(\\overline{X} ^{q}\\overline{Z}^{p})^{b}\\left|\\phi_{c}\\right\\rangle\\] \\[= \\omega^{c}\\sum\\limits_{k=0}^{d-1}\\omega^{a(l+bq)}\\omega^{bpl} \\alpha_{l}^{s}\\alpha_{l+bq}\\] \\[= \\omega^{(c+abq)}\\sum\\limits_{l=0}^{d-1}\\omega^{(a+bp)l}\\alpha_{l}^ {s}\\alpha_{l+bq}.\\] Overall, the constraints on the stabilizer state, due to condition (iii), can be summarized as: \\[\\sum\\limits_{l=0}^{d-1}\\omega^{(a+bp)l}\\alpha_{l}^{s}\\alpha_{l+bq}\ eq 0 \\tag{12}\\] This inequality should hold for all \\(b\\in\\{0,1,\\ldots,d-1\\}\\), and all \\(a\\) defined by Eq. (11), however, for a particular coset \\(A_{v}/C_{a}\\) the values of \\(q\\) and \\(p\\) are fixed. One important property of the stabilizer code, implied by the above formula with \\(b=0\\), is that it should always be a _nonmaximally entangled state_. In the next section, by utilizing the quantum Hamming bound, we show that the minimum number of physical qudits, \\(n\\), needed for encoding the required stabilizer state is in fact _two_. ## IX Minimum number of required physical qudits In order to characterize off-diagonal elements of a superoperator we have to use degenerate stabilizer codes, in order to preserve the coherence between operator basis elements. Degenerate stabilizer codes do not have a classical analog [1]. Due to this fact, the classical techniques used to prove bounds for non-degenerate error-correcting codes cannot be applied to degenerate codes. In general, it is yet unknown if there are degenerate codes that exceed the quantum Hamming bound [1]. However, due to the simplicity of the stabilizer codes used in the DCQD algorithm and their symmetry, it is possible to generalize the quantum Hamming bound for them. Let us consider a stabilizer code that is used for encoding \\(k\\) logical qudits into \\(n\\) physical qudits such that we can correct any subset of \\(t\\) or fewer errors on any \\(n_{e}\\leqslant n\\) of the physical qudits. Suppose that \\(0\\leqslant j\\leqslant t\\) errors occur. Therefore, there are \\(\\binom{n_{e}}{j}\\) possible locations, and in each location there are \\((d^{2}-1)\\)different operator basis elements that can act as errors. 
The total possible number of errors is \\(\\sum_{j=0}^{t}\\binom{n_{e}}{j}(d^{2}-1)^{j}\\). If the stabilizer code is non-degenerate, each of these errors should correspond to an orthogonal \\(d^{k}\\)-dimensional subspace; but if the code is uniformly \\(g\\)-fold degenerate (i.e., with respect to all possible errors), then each set of \\(g\\) errors can be fit into an orthogonal \\(d^{k}\\)-dimensional subspace. All these subspaces must be fit into the entire \\(d^{n}\\)-dimensional Hilbert space. This leads to the following inequality: \\[\\sum_{j=0}^{t}\\left(\\begin{array}{c}n_{e}\\\\ j\\end{array}\\right)\\frac{(d^{2}-1)^{j}d^{k}}{g}\\leq d^{n}. \\tag{13}\\] We are always interested in finding the errors on one physical qudit. Therefore, we have \\(n_{e}=1\\), \\(j\\in\\{0,1\\}\\) and \\(\\binom{n_{e}}{j}=1\\), and Eq. (13) becomes \\(\\sum_{j=0}^{1}\\frac{(d^{2}-1)^{j}d^{k}}{g}\\leq d^{n}\\). In order to characterize diagonal elements, we use a nondegenerate stabilizer code with \\(n=2\\), \\(k=0\\) and \\(g=1\\), and we have \\(\\sum_{j=0}^{1}(d^{2}-1)^{j}=d^{2}\\). For off-diagonal elements, we use a degenerate stabilizer code with \\(n=2\\), \\(k=1\\) and \\(g=d\\), and we have \\(\\sum_{j=0}^{1}\\frac{(d^{2}-1)^{j}d}{d}=d^{2}\\). Therefore, in both cases the upper bound of the quantum Hamming bound is satisfied by our codes. Note that if instead we use \\(n=k\\), i.e., if we encode \\(n\\) logical qudits into \\(n\\) separable physical qudits, we get \\(\\sum_{j=0}^{1}\\frac{(d^{2}-1)^{j}}{g}\\leq 1\\). This can only be satisfied if \\(g=d^{2}\\), in which case we cannot obtain any information about the errors. The above argument justifies Condition 1, that the stabilizer state be nonseparable. Specifically, it explains why alternative encodings such as \\(n=k=2\\) and \\(n=k=1\\) are excluded from our discussions. However, if we encode zero logical qudits into one physical qudit, i.e., \\(n=1\\), \\(k=0\\), then, by using a \\(d\\)-fold degenerate code, we can obtain \\(\\sum_{j=0}^{1}\\frac{(d^{2}-1)^{j}}{d}=d\\), which satisfies the quantum Hamming bound and could be useful for characterizing off-diagonal elements. For this to be true, the code \\(\\left|\\phi_{c}\\right\\rangle\\) should also satisfy the set of conditions \\(\\left\\langle\\phi_{c}\\right|E_{n}^{A\\dagger}E_{m}^{A}\\left|\\phi_{c}\\right\\rangle\\neq 0\\) and \\(\\left\\langle\\phi_{c}\\right|E_{n}^{A\\dagger}E_{m}^{A}T_{rs}^{b}\\left|\\phi_{c}\\right\\rangle\\neq 0\\). Due to the \\(d\\)-fold degeneracy of the code, the condition \\(\\left\\langle\\phi_{c}\\right|E_{n}^{A\\dagger}E_{m}^{A}\\left|\\phi_{c}\\right\\rangle\\neq 0\\) is automatically satisfied. However, the condition \\(\\left\\langle\\phi_{c}\\right|E_{n}^{A\\dagger}E_{m}^{A}T_{rs}^{b}\\left|\\phi_{c}\\right\\rangle\\neq 0\\) can never be satisfied, since the code space is one-dimensional, i.e., \\(d^{k}=1\\), and the normalizer operators cannot be defined. I.e., there does not exist any nontrivial unitary operator \\(T_{rs}^{b}\\) that can perform logical operations on the one-dimensional code space. We have demonstrated how one can characterize quantum dynamics using the most general form of the relevant stabilizer states and generators. In the next section, we choose a standard form of stabilizers, in order to simplify the algorithm and to derive a standard form of the normalizer.
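The two constraints discussed in Secs. VIII and IX can be checked with a few lines of code; the sketch below (our own, with \\(d=3\\) and arbitrary illustrative values of \\(\\alpha_{l}\\)) evaluates the \\(b=0\\) instance of Condition 3, \\(\\sum_{l}\\omega^{al}|\\alpha_{l}|^{2}\\neq 0\\), for a non-uniform and for a maximally entangled input, and the quantum Hamming bound of Eq. (13) with \\(n_{e}=1\\), \\(t=1\\) for the two codes used by DCQD:

```python
# A NumPy sketch (our own) of the two constraints just discussed. Part 1: the
# b = 0 instance of Condition 3 holds for a non-uniform alpha_l but fails for
# the maximally entangled (uniform) input, which is why procedure (b) uses a
# non-maximally entangled stabilizer state. Part 2: the quantum Hamming bound,
# Eq. (13), with n_e = 1 and t = 1, is saturated by both DCQD codes.
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)

def condition_b0(alpha, a):
    alpha = np.asarray(alpha, dtype=float)
    alpha = alpha / np.linalg.norm(alpha)
    return abs(sum(omega ** (a * l) * alpha[l] ** 2 for l in range(d)))

alpha_nonuniform = [0.8, 0.5, 0.3]              # arbitrary illustrative weights
alpha_uniform = [1.0] * d                       # maximally entangled input
assert all(condition_b0(alpha_nonuniform, a) > 1e-6 for a in range(1, d))
assert all(condition_b0(alpha_uniform, a) < 1e-12 for a in range(1, d))

def hamming_lhs(dim, k, g):
    """sum_{j=0}^{1} (dim^2 - 1)^j dim^k / g, to be compared with dim^n for n = 2."""
    return sum((dim ** 2 - 1) ** j * dim ** k / g for j in (0, 1))

for dim in (2, 3, 5, 7):
    assert hamming_lhs(dim, k=0, g=1) == dim ** 2     # procedure (a): nondegenerate code
    assert hamming_lhs(dim, k=1, g=dim) == dim ** 2   # procedure (b): d-fold degenerate code
print("Condition 3 (b = 0) and the quantum Hamming bound check out")
```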
## X Standard form of stabilizer and normalizer operators Let us choose the set \\(\\left\\{\\left|0\\right\\rangle,\\left|1\\right\\rangle,\\ldots,\\left|d-1\\right\\rangle\\right\\}\\) as a standard basis, such that \\(Z\\left|k\\right\\rangle=\\omega^{k}\\left|k\\right\\rangle\\) and \\(X\\left|k\\right\\rangle=\\left|k+1\\right\\rangle\\). In order to characterize the quantum dynamical population, we choose the standard stabilizer generators to be \\((X^{A}X^{B})^{q}\\) and \\([Z^{A}(Z^{B})^{d-1}]^{p}\\). Therefore, the maximally entangled input states can be written as \\(\\left|\\varphi_{c}\\right\\rangle=\\frac{1}{\\sqrt{d}}\\sum\\limits_{k=0}^{d-1}\\left|k\\right\\rangle_{A}\\left|k\\right\\rangle_{B}\\). In order to characterize the quantum dynamical coherence we choose the sole stabilizer operator as \\([E_{i}^{A}(E_{i}^{B})^{d-1}]^{a},\\) which has an eigenket of the form \\(\\left|\\varphi_{c}\\right\\rangle=\\sum\\limits_{i=0}^{d-1}\\alpha_{i}\\left|i\\right\\rangle_{A}\\left|i\\right\\rangle_{B},\\) where \\(E_{i}\\left|i\\right\\rangle=\\omega^{i}\\left|i\\right\\rangle\\) and \\(\\left|i\\right\\rangle\\) belongs to one of the \\(d+1\\) mutually unbiased bases in the Hilbert space of one qudit. The normalizer elements can be written as \\(T_{qp}^{b}=(\\overline{X^{q}Z^{p}})^{b}\\in A_{v_{0}}/C_{a_{0}},\\) for all \\(0<b\\leq d-1,\\) where \\(\\overline{X}=\\overline{E_{i}}\\otimes\\overline{E_{i}}\\), \\(\\overline{Z}=E_{i}\\otimes I\\), \\(\\overline{E_{i}}\\left|i\\right\\rangle=\\left|i+1\\right\\rangle\\) and \\(E_{i}\\left|i\\right\\rangle=\\omega^{i}\\left|i\\right\\rangle\\); and \\(A_{v_{0}}/C_{a_{0}}\\) represents a fixed coset of a particular Abelian subgroup, \\(A_{v_{0}},\\) of the normalizer \\(N(S)\\). For example, for a stabilizer generator of the form \\([E_{i}^{A}(E_{i}^{B})^{d-1}]^{a}=[Z^{A}(Z^{B})^{d-1}]^{p}\\) we prepare its eigenket \\(\\left|\\varphi_{c}\\right\\rangle=\\sum\\limits_{k=0}^{d-1}\\alpha_{k}\\left|k\\right\\rangle_{A}\\left|k\\right\\rangle_{B},\\) and the normalizers become \\(T_{qp}^{b}=(\\overline{X^{q}Z^{p}})^{b},\\) where \\(\\overline{X}=X\\otimes X\\) and \\(\\overline{Z}=Z\\otimes I.\\) Using this notation for the stabilizer and normalizer operators, we provide an overall outline of the DCQD algorithm in the next section. ## XI Algorithm: Direct characterization of quantum dynamics The DCQD algorithm for the case of a qudit system is summarized as follows (see also Figs. 5 and 6): _Inputs:_ (1) An ensemble of two-qudit systems, \\(A\\) and \\(B\\), prepared in the state \\(\\left|0\\right\\rangle_{A}\\otimes\\left|0\\right\\rangle_{B}\\). (2) An arbitrary unknown CP quantum dynamical map \\(\\mathcal{E},\\) whose action can be expressed by \\(\\mathcal{E}(\\rho)=\\sum_{m,n=0}^{d^{2}-1}\\chi_{mn}E_{m}^{A}\\rho E_{n}^{A\\dagger},\\) where \\(\\rho\\) denotes the joint state of the primary system and the ancilla. _Output:_ \\(\\mathcal{E},\\) given by a set of measurement outcomes in the procedures (a) and (b) below: _Procedure(a):_ Characterization of quantum dynamical population (diagonal elements \\(\\chi_{mm}\\) of \\(\\chi\\)), see Fig. 5. 1. Prepare \\(\\left|\\varphi_{0}\\right\\rangle=\\left|0\\right\\rangle_{A}\\otimes\\left|0\\right\\rangle_{B}\\), a pure initial state. 2. Transform it to \\(\\left|\\varphi_{c}\\right\\rangle=\\frac{1}{\\sqrt{d}}\\sum\\limits_{k=0}^{d-1}\\left|k\\right\\rangle_{A}\\left|k\\right\\rangle_{B}\\), a maximally entangled state of the two qudits.
This state has the stabilizer operators \\(E_{i}^{A}E_{j}^{B}=(X^{A}X^{B})^{q}\\) and \\(E_{i^{\\prime}}^{A}E_{j^{\\prime}}^{B}=[Z^{A}(Z^{B})^{d-1}]^{p}\\) for \\(0<p,q\\leq d-1.\\) 3. Apply the unknown quantum dynamical map to the qudit \\(A\\): \\(\\mathcal{E}(\\rho)=\\sum_{m,n=0}^{d^{2}-1}\\chi_{mn}E_{m}^{A}\\rho E_{n}^{A\\dagger},\\) where \\(\\rho=\\left|\\varphi_{c}\\right\\rangle\\left\\langle\\varphi_{c}\\right|\\). 4. Perform a projective measurement \\(P_{k}P_{k^{\\prime}}:\\mathcal{E}(\\rho)\\mapsto P_{k}P_{k^{\\prime}}\\mathcal{E}(\\rho)P_{k}P_{k^{\\prime}}\\), where \\(P_{k}=\\frac{1}{d}\\sum_{l=0}^{d-1}\\omega^{-lk}(E_{i}^{A}E_{j}^{B})^{l}\\), and \\(P_{k^{\\prime}}=\\frac{1}{d}\\sum_{l^{\\prime}=0}^{d-1}\\omega^{-l^{\\prime}k^{\\prime}}(E_{i^{\\prime}}^{A}E_{j^{\\prime}}^{B})^{l^{\\prime}}\\), and calculate the joint probability distributions of the outcomes \\(k\\) and \\(k^{\\prime}\\): \\[\\mathrm{Tr}[P_{k}P_{k^{\\prime}}\\mathcal{E}(\\rho)]=\\chi_{mm}.\\] _Number of ensemble measurements for Procedure (a)_: \\(1\\). _Procedure (b)_: Characterization of quantum dynamical coherence (off-diagonal elements \\(\\chi_{mn}\\) of \\(\\chi\\)), see Fig. 6. 1. Prepare \\(\\left|\\varphi_{0}\\right\\rangle=\\left|0\\right\\rangle_{A}\\otimes\\left|0\\right\\rangle_{B}\\), a pure initial state. 2. Transform it to \\(\\left|\\varphi_{c}\\right\\rangle=\\sum\\limits_{i=0}^{d-1}\\alpha_{i}\\left|i\\right\\rangle_{A}\\left|i\\right\\rangle_{B}\\), a non-maximally entangled state of the two qudits. This state has the stabilizer operators \\([E_{i}^{A}(E_{i}^{B})^{d-1}]^{a}\\). 3. Apply the unknown quantum dynamical map to the qudit \\(A\\): \\(\\mathcal{E}(\\rho)=\\sum_{m,n=0}^{d^{2}-1}\\chi_{mn}E_{m}^{A}\\rho E_{n}^{A\\dagger}\\), where \\(\\rho=\\left|\\varphi_{c}\\right\\rangle\\left\\langle\\varphi_{c}\\right|\\). 4. Perform a projective measurement \\[P_{k}:\\mathcal{E}(\\rho)\\mapsto\\rho_{k}=P_{k}\\mathcal{E}(\\rho)P_{k}=\\sum\\limits_{m}\\chi_{mm}E_{m}^{A}\\rho E_{m}^{A\\dagger}+\\sum\\limits_{m<n}(\\chi_{mn}E_{m}^{A}\\rho E_{n}^{A\\dagger}+\\chi_{mn}^{*}E_{n}^{A}\\rho E_{m}^{A\\dagger}),\\] where \\(P_{k}=\\frac{1}{d}\\sum_{l=0}^{d-1}\\omega^{-lk}(E_{i}^{A}E_{j}^{B})^{l}\\) and \\(E_{m}^{A}=X^{q_{m}}Z^{p_{m}}\\in W_{k}^{i}\\), and calculate the probability of outcome \\(k\\): \\[\\mathrm{Tr}[P_{k}\\mathcal{E}(\\rho)] =\\sum\\limits_{m}\\chi_{mm}+2\\sum\\limits_{m<n}\\mathrm{Re}[\\chi_{mn}\\;\\mathrm{Tr}(E_{n}^{A\\dagger}E_{m}^{A}\\rho)]. \\tag{14}\\] 5. Measure the expectation values of the normalizer operators \\(T_{qp}^{b}=(\\overline{X^{q}Z^{p}})^{b}\\in A_{v_{0}}/C_{a_{0}}\\), for all \\(0<b\\leq d-1\\), where \\(\\overline{X}=\\widetilde{E_{i}}\\otimes\\widetilde{E_{i}}\\), \\(\\overline{Z}=E_{i}\\otimes I\\), \\(E_{i}\\left|i\\right\\rangle=\\omega^{i}\\left|i\\right\\rangle\\), \\(\\widetilde{E_{i}}\\left|i\\right\\rangle=\\left|i+1\\right\\rangle\\), and \\(A_{v_{0}}/C_{a_{0}}\\) represents a fixed coset of a particular Abelian subgroup, \\(A_{v_{0}}\\), of the normalizer \\(N(S)\\). \\[\\mathrm{Tr}(T_{qp}^{b}\\rho_{k}) = \\sum\\limits_{m}\\omega^{pq_{m}-qp_{m}}\\chi_{mm}\\;\\mathrm{Tr}(T_{qp}^{b}\\rho)+\\sum\\limits_{m<n}[\\omega^{pq_{m}-qp_{m}}\\chi_{mn}\\;\\mathrm{Tr}(E_{n}^{A\\dagger}E_{m}^{A}T_{qp}^{b}\\rho)+\\omega^{pq_{m}-qp_{m}}\\chi_{mn}^{*}\\;\\mathrm{Tr}(E_{m}^{A\\dagger}E_{n}^{A}T_{qp}^{b}\\rho)].\\] 6.
Repeat steps (1)-(5) \\(d+1\\) times, by preparing the eigenkets of the other stabilizer operators \\([E_{i}^{A}(E_{i}^{B})^{d-1}]^{a}\\) for all \\(i\\in\\{1,2,\\ldots,d+1\\}\\), such that the states \\(\\left|i\\right\\rangle_{A}\\left|i\\right\\rangle_{B}\\) in step (2) belong to mutually unbiased bases. 7. Repeat step (6) up to \\(d-1\\) times, each time choosing normalizer elements \\(T_{qp}^{b}\\) from a different Abelian subgroup \\(A_{v}/C_{a}\\), such that these measurements become maximally non-commuting. _Number of ensemble measurements for Procedure (b)_: \\((d+1)(d-1)\\). _Overall number of ensemble measurements_: \\(d^{2}\\). Note that at the end of each measurement in Figs. 5 and 6, the output state, a maximally entangled state \\(\\left|\\varphi_{E}\\right\\rangle=\\frac{1}{\\sqrt{d}}\\sum\\limits_{i=0}^{d-1}\\left|i\\right\\rangle_{A}\\left|i\\right\\rangle_{B}\\), is the common eigenket of the stabilizer generator and its commuting normalizer operators. For procedure (a), this state can be directly used for other measurements. This is indicated by the dashed lines in Fig. 5. For procedure (b), the state \\(\\left|\\varphi_{E}\\right\\rangle\\) can be unitarily transformed to another member of the same input stabilizer code, \\(\\left|\\varphi_{C}\\right\\rangle=\\sum\\limits_{i=0}^{d-1}\\alpha_{i}\\left|i\\right\\rangle_{A}\\left|i\\right\\rangle_{B}\\), before another measurement. Therefore, all the required ensemble measurements, for measuring the expectation values of the stabilizer and normalizer operators, can always be performed in a temporal sequence on the same pair of qudits. In the previous sections, we have explicitly shown how the DCQD algorithm can be developed for qudit systems when \\(d\\) is prime. In Appendix A, we demonstrate that the DCQD algorithm can be generalized to other \\(N\\)-dimensional quantum systems with \\(N\\) being a power of a prime. ## XII Summary For convenience, we provide a summary of the DCQD algorithm. The DCQD algorithm for a qudit, with \\(d\\) being a prime, was developed by utilizing the concept of an error operator basis. An arbitrary operator acting on a qudit can be expanded over an orthonormal and unitary operator basis \\(\\{E_{0},E_{1},\\ldots,E_{d^{2}-1}\\}\\), where \\(E_{0}=I\\) and \\(\\text{tr}(E_{i}^{\\dagger}E_{j})=d\\delta_{ij}\\). Any element \\(E_{i}\\) can be generated from products of \\(X\\) and \\(Z\\), where \\(X\\left|k\\right\\rangle=\\left|k+1\\right\\rangle\\) and \\(Z\\left|k\\right\\rangle=\\omega^{k}\\left|k\\right\\rangle\\), such that the relation \\(XZ=\\omega^{-1}ZX\\) is satisfied [28]. Here \\(\\omega\\) is a \\(d\\)th root of unity and \\(X\\) and \\(Z\\) are the generalizations of the Pauli operators to higher dimension. _Characterization of Dynamical Population.-_ A measurement scheme for determining the quantum dynamical population, \\(\\chi_{mm}\\), in a single experimental configuration is as follows. Let us prepare a maximally entangled state of the two qudits \\(\\left|\\varphi_{C}\\right\\rangle=\\frac{1}{\\sqrt{d}}\\sum\\limits_{k=0}^{d-1}\\left|k\\right\\rangle_{A}\\left|k\\right\\rangle_{B}\\). This state is stabilized under the action of the stabilizer operators \\(S=X^{A}X^{B}\\) and \\(S^{\\prime}=Z^{A}(Z^{B})^{d-1}\\), and it is referred to as a _stabilizer state_ [1; 28].
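As a concrete illustration of this construction and of the population measurement of procedure (a) above, the following self-contained numpy sketch (our own, with \\(d=3\\) chosen arbitrarily) builds \\(X\\), \\(Z\\), the stabilizer state and the projectors \\(P_{k}=\\frac{1}{d}\\sum_{l}\\omega^{-lk}S^{l}\\), applies one definite error \\(E_{m}=X^{q}Z^{p}\\) to qudit \\(A\\), and shows that the joint stabilizer outcome pinpoints that error:

```python
import numpy as np

d = 3                                        # qudit dimension (any prime works)
w = np.exp(2j * np.pi / d)                   # omega = exp(i 2 pi / d)
X = np.roll(np.eye(d), 1, axis=0)            # X|k> = |k+1 mod d>
Z = np.diag(w ** np.arange(d))               # Z|k> = w^k |k>
I = np.eye(d)

# |phi_C> = (1/sqrt(d)) sum_k |k>_A |k>_B
phi = np.zeros(d * d, dtype=complex)
phi[np.arange(d) * d + np.arange(d)] = 1 / np.sqrt(d)

S  = np.kron(X, X)                                      # S  = X^A X^B
Sp = np.kron(Z, np.linalg.matrix_power(Z, d - 1))       # S' = Z^A (Z^B)^(d-1)
print(np.allclose(X @ Z, w**(-1) * Z @ X))              # XZ = w^(-1) ZX  -> True
print(np.allclose(S @ phi, phi), np.allclose(Sp @ phi, phi))   # stabilizer state -> True True

def P(stab, k):
    """Projector onto the w^k eigenspace: P_k = (1/d) sum_l w^(-lk) stab^l."""
    return sum(w**(-l * k) * np.linalg.matrix_power(stab, l) for l in range(d)) / d

# apply one definite error E_m = X^q Z^p to qudit A only, then measure S and S' jointly
q, p = 2, 1
E = np.kron(np.linalg.matrix_power(X, q) @ np.linalg.matrix_power(Z, p), I)
rho = np.outer(E @ phi, (E @ phi).conj())
probs = [[np.real(np.trace(P(S, k) @ P(Sp, kp) @ rho)) for kp in range(d)] for k in range(d)]
print(np.round(probs, 3))   # a single unit entry; its position labels (q, p) of E_m
```

With our sign conventions the unit entry appears at \\((k,k^{\\prime})=(-p\\ \\mathrm{mod}\\ d,\\ q)\\); the essential point is that distinct errors land in distinct outcome cells, which is what makes the single-configuration determination of all \\(\\chi_{mm}\\) possible.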
After applying the quantum map to the qudit \\(A\\), \\(\\mathcal{E}(\\rho)\\), where \\(\\rho=\\left|\\varphi_{C}\\right\\rangle\\left\\langle\\varphi_{C}\\right|\\), we can perform a projective measurement \\(P_{k}P_{k^{\\prime}}\\mathcal{E}(\\rho)P_{k}P_{k^{\\prime}}\\), where \\(P_{k}=\\frac{1}{d}\\sum_{l=0}^{d-1}\\omega^{-lk}S^{l}\\), \\(P_{k^{\\prime}}=\\frac{1}{d}\\sum_{l^{\\prime}=0}^{d-1}\\omega^{-l^{\\prime}k^{\\prime}}S^{\\prime l^{\\prime}}\\), and \\(\\omega=e^{i2\\pi/d}\\). Then, we have the joint probability distributions of the outcomes \\(k\\) and \\(k^{\\prime}\\): \\(\\mathrm{Tr}[P_{k}P_{k^{\\prime}}\\mathcal{E}(\\rho)]=\\chi_{mm}\\), where the elements \\(\\chi_{mm}\\) represent the population of error operators that anticommute with the stabilizer generators \\(S\\) and \\(S^{\\prime}\\) with eigenvalues \\(\\omega^{k}\\) and \\(\\omega^{k^{\\prime}}\\), respectively. Therefore, with a _single_ experimental configuration we can identify all diagonal elements of the superoperator. _Characterization of Dynamical Coherence.-_ For measuring the quantum dynamical coherence, we create a _nonmaximally_ entangled state of the two qudits \\(\\left|\\varphi_{C}\\right\\rangle=\\sum\\limits_{i=0}^{d-1}\\alpha_{i}\\left|i\\right\\rangle_{A}\\left|i\\right\\rangle_{B}\\). This state has the sole stabilizer operator \\(S=E_{i}^{A}(E_{i}^{B})^{d-1}\\) (for detailed restrictions on the coefficients \\(\\alpha_{i}\\) see Sec. VIII). After applying the dynamical map to the qudit \\(A\\), \\(\\mathcal{E}(\\rho)\\), we perform a projective measurement \\(\\rho_{k}=P_{k}\\mathcal{E}(\\rho)P_{k}\\), and calculate the probability of the outcome \\(k\\): \\(\\mathrm{Tr}[P_{k}\\mathcal{E}(\\rho)]=\\sum_{m}\\chi_{mm}+2\\sum_{m<n}\\mathrm{Re}[\\chi_{mn}\\;\\mathrm{Tr}(E_{n}^{A\\dagger}E_{m}^{A}\\rho)]\\), where \\(E_{m}^{A}\\) are all the operators in the operator basis, \\(\\{E_{j}^{A}\\}\\), that anticommute with the operator \\(E_{i}^{A}\\) with the same eigenvalue \\(\\omega^{k}\\). We also measure the expectation values of all independent operators \\(T_{rs}=E_{r}^{A}E_{s}^{B}\\) of the Pauli group (where \\(E_{r}^{A}\\neq I\\), \\(E_{s}^{B}\\neq I\\)) that simultaneously commute with the stabilizer generator \\(S\\): \\(\\mathrm{Tr}(T_{rs}\\rho_{k})\\). There are only \\(d-1\\) such operators \\(T_{rs}\\) that are independent of each other, up to multiplication by a stabilizer generator, and they belong to an Abelian subgroup of the normalizer group. The normalizer group is the group of unitary operators that preserve the stabilizer group by conjugation, i.e., \\(TST^{\\dagger}=S\\). We repeat this procedure \\(d+1\\) times, by preparing the eigenkets of the other stabilizer operators \\(E_{i}^{A}(E_{i}^{B})^{d-1}\\) for all \\(i\\in\\{1,2,\\ldots,d+1\\}\\), such that the states \\(\\left|i\\right\\rangle_{A}\\) of the input states belong to mutually unbiased bases [32]. Also, we can change the measurement basis \\(d-1\\) times, each time choosing normalizer elements \\(T_{rs}\\) from a different Abelian subgroup of the normalizer, such that their eigenstates form a mutually unbiased basis in the code space. Therefore, we can completely characterize quantum dynamical coherence by \\((d+1)(d-1)\\) different measurements, and the overall number of experimental configurations for a qudit becomes \\(d^{2}\\). For \\(N\\)-dimensional quantum systems, with \\(N\\) a power of a prime, the required measurements are simply the tensor product of the corresponding measurements on individual qudits; see Appendix A.
For quantum system whose dimension is not a power of a prime, the task can be accomplished by embedding the system in a larger Hilbert space whose dimension is a prime. ## XIII Outlook An important and promising advantage of DCQD is for use in _partial_ characterization of quantum dynamics, where in general, one cannot afford or does not need to carry out a full characterization of the quantum system under study, or when one has some _a priori_ knowledge about the dynamics. Using indirect methods of QPT in those situations is inefficient, because one has to apply the whole machinery of the scheme to obtain the relevant information about the system. On the other hand, the DCQD scheme has built-in applicability to the task of partial characterization of quantum dynamics. In general, one can substantially reduce the overall number of measurements, when estimating the coherence elements of the superoperator for only specific subsets of the operator basis and/or subsystems of interest. This fact has been demonstrated in Ref. [26] in a generic fashion, and several examples of partial characterization have also been presented. Specifically, it was shown that DCQD can be efficiently applied to (single- and two-qubit) Hamiltonian identification tasks. Moreover, it is demonstrated that the DCQD algorithm enables the simultaneous determination of coarse-grained (semiclassical) physical quantities, such as the longitudinal relaxation time \\(T_{1}\\) and the transversal relaxation (or dephasing) time \\(T_{2}\\) for a single qubit undergoing a general CP quantum map. The DCQD scheme can also be used for performing generalized quantum dense coding tasks. Other implications and applications of DCQD for partial QPT remain to be investigated and explored. An alternative representation of the DCQD scheme for higher-dimensional quantum systems, based on generalized Bell-state measurements will be presented in Ref. [33]. The connection of Bell-state measurements to stabilizer and normalizer measurements in DCQD for two-level systems, can be easily observed from Table II of Ref. [3]. Our presentation of the DCQD algorithm assumes ideal (i.e., error-free) quantum state preparation, measurement, and ancilla channels. However, these assumptions can all be relaxed in certain situations, in particular when the imperfections are already known. A discussion of these issues is beyond the scope of this work and will be the subject of a future publication [33]. There are a number of other directions in which the results presented here can be extended. One can combine the DCQD algorithm with the method of maximum likelihood estimation [35], in order to minimize the statistical errors in each experimental configuration invoked in this scheme. Moreover, a new scheme for _continuous_ characterization of quantum dynamics can be introduced, by utilizing weak measurements for the required quantum error detections in DCQD [36; 37]. Finally, the general techniques developed for direct characterization of quantum dynamics could be further utilized for control of open quantum systems [38]. ###### Acknowledgements. We thank J. Emerson, D. F. V. James, K. Khodjasteh, A. T. Rezakhani, A. Shabani, A. M. Steinberg, and M. Ziman for helpful discussions. This work was supported by NSERC (to M.M.), and NSF Grant No. CCF-0523675, ARO Grant W911NF-05-1-0440, and the Sloan Foundation (to D.A.L.). 
## Appendix A Generalization to arbitrary open quantum systems Here, we first demonstrate that the overall measurements for a full characterization of the dynamics of an \\(r\\)-qudit system (with \\(d\\) being a prime) become the tensor product of the required measurements on the individual qudits. One of the important examples of such systems is a QIP unit with \\(r\\) qubits, thus having a \\(2^{r}\\)-dimensional Hilbert space. Let us consider a quantum system consisting of \\(r\\) qudits, \\(\\rho=\\rho_{1}\\otimes\\rho_{2}\\otimes\\cdots\\otimes\\rho_{r}\\), with a Hilbert space of dimension \\(N=d^{r}\\). The output state of such a system after a dynamical map becomes \\(\\mathcal{E}(\\rho)=\\sum_{m,n=0}^{N^{2}-1}\\chi_{mn}E_{m}\\rho E_{n}^{\\dagger}\\), where \\(\\{E_{m}\\}\\) are the unitary operator basis elements of an \\(N\\)-dimensional Hilbert space. These unitary operator basis elements can be written as \\(E_{m}=X^{q_{m_{1}}}Z^{p_{m_{1}}}\\otimes X^{q_{m_{2}}}Z^{p_{m_{2}}}\\otimes\\cdots\\otimes X^{q_{m_{r}}}Z^{p_{m_{r}}}\\) [34]. Therefore, we have: \\[\\mathcal{E}(\\rho) = \\sum_{m,n=0}^{N^{2}-1}\\chi_{mn}(X^{q_{m_{1}}}Z^{p_{m_{1}}}\\otimes\\ldots\\otimes X^{q_{m_{r}}}Z^{p_{m_{r}}})\\rho_{1}\\otimes\\ldots\\otimes\\rho_{r}(X^{q_{n_{1}}}Z^{p_{n_{1}}}\\otimes\\ldots\\otimes X^{q_{n_{r}}}Z^{p_{n_{r}}})^{\\dagger}\\] \\[= \\sum_{m_{1},\\ldots,m_{r},n_{1},\\ldots,n_{r}=0}^{d^{2}-1}\\chi_{(m_{1}\\ldots m_{r})(n_{1}\\ldots n_{r})}(E_{m_{1}}\\rho_{1}E_{n_{1}}^{\\dagger})\\otimes\\ldots\\otimes(E_{m_{s}}\\rho_{s}E_{n_{s}}^{\\dagger})\\otimes\\ldots\\otimes(E_{m_{r}}\\rho_{r}E_{n_{r}}^{\\dagger})\\] \\[= \\sum_{m_{1}\\ldots m_{r},n_{1}\\ldots n_{r}=0}^{d^{2}-1}\\chi_{(m_{1}\\ldots m_{r})(n_{1}\\ldots n_{r})}(E_{m}\\rho E_{n}^{\\dagger})_{s}^{\\otimes^{r}},\\] where we have introduced \\(E_{m_{s}}=X^{q_{m_{s}}}Z^{p_{m_{s}}}\\) and \\(\\chi_{mn}=\\chi_{(m_{1},\\ldots,m_{r})(n_{1},\\ldots,n_{r})}\\). That is, \\(m=(m_{1},\\ldots,m_{s},\\ldots,m_{r})\\) and \\(n=(n_{1},\\ldots,n_{s},\\ldots,n_{r})\\), and the index \\(s\\) represents a generic qudit. Let us first investigate the tensor product structure of the DCQD algorithm for characterization of the diagonal elements of the superoperator. We prepare the eigenstate of the stabilizer operators \\((E_{i}^{A}E_{j}^{B})_{s}^{\\otimes^{r}}\\) and \\((E_{i^{\\prime}}^{A}E_{j^{\\prime}}^{B})_{s}^{\\otimes^{r}}\\). For each qudit, the projection operators corresponding to outcomes \\(\\omega^{k}\\) and \\(\\omega^{k^{\\prime}}\\) (where \\(k,k^{\\prime}=0,1,\\ldots,d-1\\)) have the form \\(P_{k}=\\frac{1}{d}\\sum_{l=0}^{d-1}\\omega^{-lk}(E_{i}^{A}E_{j}^{B})^{l}\\) and \\(P_{k^{\\prime}}=\\frac{1}{d}\\sum_{l^{\\prime}=0}^{d-1}\\omega^{-l^{\\prime}k^{\\prime}}(E_{i^{\\prime}}^{A}E_{j^{\\prime}}^{B})^{l^{\\prime}}\\).
The joint probability distribution of the commuting Hermitian operators \\(P_{k_{1}},P_{k^{\\prime}_{1}},P_{k_{2}},P_{k^{\\prime}_{2}},\\ldots,P_{k_{r}},P_{k^{\\prime}_{r}}\\) on the output state \\(\\mathcal{E}(\\rho)\\) is: \\[\\mathrm{Tr}[(P_{k}P_{k^{\\prime}})_{s}^{\\otimes^{r}}\\mathcal{E}(\\rho)] = \\frac{1}{(d^{2})^{r}}\\sum_{m_{1},\\ldots,m_{r},n_{1},\\ldots,n_{r}=0}^{d^{2}-1}\\chi_{(m_{1},\\ldots,m_{r})(n_{1},\\ldots,n_{r})}\\times\\] \\[\\left(\\sum_{l=0}^{d-1}\\sum_{l^{\\prime}=0}^{d-1}\\omega^{-lk}\\omega^{-l^{\\prime}k^{\\prime}}\\mathrm{Tr}[\\;E_{n}^{\\dagger}(E_{i}^{A})^{l}(E_{i^{\\prime}}^{A})^{l^{\\prime}}E_{m}(E_{j}^{B})^{l}(E_{j^{\\prime}}^{B})^{l^{\\prime}}\\rho]\\right)_{s}^{\\otimes^{r}}\\] By introducing \\(E_{i}E_{m}=\\omega^{i_{m}}E_{m}E_{i}\\) for each qudit and using the relation \\([(E_{i}^{A}E_{j}^{B})^{l}(E_{i^{\\prime}}^{A}E_{j^{\\prime}}^{B})^{l^{\\prime}}\\rho]_{s}=\\rho_{s}\\) we obtain: \\[\\mathrm{Tr}[(P_{k}P_{k^{\\prime}})_{s}^{\\otimes^{r}}\\mathcal{E}(\\rho)] = \\frac{1}{(d^{2})^{r}}\\sum_{m_{1},\\ldots,m_{r},n_{1},\\ldots,n_{r}=0}^{d^{2}-1}\\chi_{(m_{1},\\ldots,m_{r})(n_{1},\\ldots,n_{r})}\\times\\left(\\sum_{l=0}^{d-1}\\sum_{l^{\\prime}=0}^{d-1}\\omega^{(i_{m}-k)l}\\omega^{(i_{m}^{\\prime}-k^{\\prime})l^{\\prime}}\\mathrm{Tr}[\\;E_{n}^{\\dagger}E_{m}\\rho]\\right)_{s}^{\\otimes^{r}}\\] Using the QEC condition for nondegenerate codes, \\(\\mathrm{Tr}[E_{n}^{\\dagger}E_{m}\\rho]_{s}=(\\delta_{mn})_{s},\\) and also using the discrete Fourier transform identities \\(\\sum_{l=0}^{d-1}\\omega^{(i_{m}-k)l}=d\\delta_{i_{m},k}\\) and \\(\\sum_{l^{\\prime}=0}^{d-1}\\omega^{(i_{m}^{\\prime}-k^{\\prime})l^{\\prime}}=d\\delta_{i_{m}^{\\prime},k^{\\prime}}\\) for each qudit, we get: \\[\\mathrm{Tr}[(P_{k}P_{k^{\\prime}})_{s}^{\\otimes^{r}}\\mathcal{E}(\\rho)] = \\sum_{m_{1},\\ldots,m_{r},n_{1},\\ldots,n_{r}=0}^{d^{2}-1}\\chi_{(m_{1},\\ldots,m_{r})(n_{1},\\ldots,n_{r})}(\\delta_{i_{m},k}\\delta_{i_{m}^{\\prime},k^{\\prime}}\\delta_{mn})_{s}^{\\otimes^{r}}\\] \\[= \\chi_{(m_{01},\\ldots,m_{0r})(m_{01},\\ldots,m_{0r})},\\] where for each qudit, the index \\(m_{0}\\) is defined through the relations \\(i_{m_{0}}=k\\) and \\(i_{m_{0}}^{\\prime}=k^{\\prime}\\), etc. That is, \\(E_{m_{0}}\\) is the unique error operator that anticommutes with the stabilizer operators of each qudit with a fixed pair of eigenvalues \\(\\omega^{k}\\) and \\(\\omega^{k^{\\prime}}\\) corresponding to experimental outcomes \\(k\\) and \\(k^{\\prime}\\). \\begin{table} \\begin{tabular}{l l c c c c} Scheme & \\(\\text{dim}(\\mathcal{H})\\) & \\(N_{\\text{input}}\\) & \\(N_{\\text{output}}\\) & measurements & required interactions \\\\ \\hline SQPT & \\(d^{n}\\) & \\(d^{2n}\\) & \\(d^{4n}\\) & 1-body & single-body \\\\ AAPT & \\(d^{2n}\\) & 1 & \\(d^{4n}\\) & joint 1-body & single-body \\\\ AAPT (MUB) & \\(d^{2n}\\) & 1 & \\(d^{2n}+1\\) & MUB & many-body \\\\ AAPT (POVM) & \\(d^{4n}\\) & 1 & 1 & POVM & many-body \\\\ DCQD & \\(d^{2n}\\) & \\([(d+1)+1]^{n}\\) & \\(d^{2n}\\) & Stabilizer/Normalizer & single- and two-body \\\\ \\end{tabular} \\end{table} Table 1: Required physical resources for the QPT schemes: Standard Quantum Process Tomography (SQPT), Ancilla-Assisted Process Tomography using separable joint measurements (AAPT), using mutual unbiased bases measurements (MUB), using generalized measurements (POVM), see Ref. [3], and Direct Characterization of Quantum Dynamics (DCQD). The overall number of measurements is reduced quadratically in the DCQD algorithm with respect to the separable methods of QPT.
This comes at the expense of requiring entangled input states and two-qudit measurements of the output states. The non-separable AAPT schemes require many-body interactions that are not available experimentally [3]. Since the \\(P_{k}\\) and \\(P_{k^{\\prime}}\\) operators each have \\(d\\) eigenvalues, we have \\(d^{2}\\) possible outcomes for each qudit, which overall yields \\((d^{2})^{r}\\) equations that can be used to characterize all the diagonal elements of the superoperator with a single ensemble measurement and \\((2d)^{r}\\) detectors. Note that in the above ensemble measurement we can obtain \\(\\log_{2}d^{4r}\\) bits of classical information, which is optimal according to the Holevo bound for a \\(2r\\)-qudit system of dimension \\(d^{2}\\). Similarly, the off-diagonal elements of the superoperator can be identified by a tensor product of the operations in the DCQD algorithm for each individual qudit, see Ref. [26]. A comparison of the required physical resources for \\(n\\) qudits is given in Table 1. For a \\(d\\)-dimensional quantum system where \\(d\\) is neither a prime nor a power of a prime, we can always imagine another \\(d^{\\prime}\\)-dimensional quantum system such that \\(d^{\\prime}\\) is prime, and embed the principal qudit as a subspace into that system. For example, the energy levels of a six-level quantum system can always be regarded as the first six energy levels of a virtual seven-level quantum system, such that the matrix elements for coupling to the seventh level are practically zero. Then, by considering the algorithm for characterization of the virtual seven-level system, we can perform only the measurements required to characterize superoperator elements associated with the first six energy levels. ## References * (1) M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information_ (Cambridge University Press, Cambridge, UK, 2000). * (2) G. M. D'Ariano, M. G. A. Paris, and M. F. Sacchi, Advances in Imaging and Electron Physics Vol. **128**, 205 (2003). * (3) M. Mohseni, A. T. Rezakhani, and D. A. Lidar, quant-ph/0702131. * (4) I. L. Chuang and M. A. Nielsen, J. Mod. Opt. **44**, 2455 (1997). * (5) J. J. Poyatos, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. **78**, 390 (1997). * (6) A. M. Childs, I. L. Chuang, and D. W. Leung, Phys. Rev. A **64**, 012314 (2001). * (7) N. Boulant, T. F. Havel, M. A. Pravia, and D. G. Cory, Phys. Rev. A **67**, 042322 (2003). * (8) Y. S. Weinstein, T. F. Havel, J. Emerson, N. Boulant, M. Saraceno, S. Lloyd, and D. G. Cory, J. Chem. Phys. **121**, 6117 (2004). * (9) M. W. Mitchell, C. W. Ellenor, S. Schneider, and A. M. Steinberg, Phys. Rev. Lett. **91**, 120402 (2003). * (10) J. L. O'Brien, G. J. Pryde, A. Gilchrist, D. F. V. James, N. K. Langford, T. C. Ralph, and A. G. White, Phys. Rev. Lett. **93**, 080502 (2004). * (11) S. H. Myrskog, J. K. Fox, M. W. Mitchell, and A. M. Steinberg, Phys. Rev. A **72**, 013615 (2005). * (12) M. Howard, J. Twamley, C. Wittmann, T. Gaebel, F. Jelezko, and J. Wrachtrup, New J. Phys. **8**, 33 (2006). * (13) G. M. D'Ariano and P. Lo Presti, Phys. Rev. Lett. **86**, 4195 (2001). * (14) D. W. Leung, PhD Thesis (Stanford University, 2000); J. Math. Phys. **44**, 528 (2003). * (15) J. B. Altepeter, D. Branning, E. Jeffrey, T. C. Wei, P. G. Kwiat, R. T. Thew, J. L. O'Brien, M. A. Nielsen, and A. G. White, Phys. Rev. Lett. **90**, 193601 (2003). * (16) G. M. D'Ariano and P. Lo Presti, Phys. Rev. Lett. **91**, 047902 (2003). * (17) F.
De Martini, A. Mazzei, M. Ricci, and G. M. D'Ariano, Phys. Rev. A **67**, 062307 (2003). * (18) A. K. Ekert, C. M. Alves, D. K. L. Oi, M. Horodecki, P. Horodecki, and L. C. Kwek, Phys. Rev. Lett. **88**, 217901 (2002). * (19) P. Horodecki and A. Ekert, Phys. Rev. Lett. **89**, 127902 (2002). * (20) F. A. Bovino, G. Castagnoli, A. Ekert, P. Horodecki, C. M. Alves, and A. V. Sergienko, Phys. Rev. Lett. **95**, 240407 (2005). * (21) V. Buzek, G. Drobny, R. Derka, G. Adam, and H. Wiedemann, quant-ph/9805020; M. Ziman, M. Plesch, and V. Buzek, Eur. Phys. J. D **32**, 215 (2005). * (22) J. Emerson, Y. S. Weinstein, M. Saraceno, S. Lloyd, and D. G. Cory, Science **302**, 2098 (2003); J. Emerson, R. Alicki, and K. Zyczkowski, J. Opt. B: Quantum Semiclass. Opt. **7** S347 (2005). * (23) H. F. Hofmann, Phys. Rev. Lett. **94**, 160504 (2005). * (24) C. H. Bennett, A. W. Harrow, and S. Lloyd, Phys. Rev. A **73**, 032336 (2006). * (25) M. Mohseni and D. A. Lidar, Phys. Rev. Lett. **97**, 170501 (2006). * (26) M. Mohseni, PhD Thesis (University of Toronto, 2007). * (27) A. E. Ashikhmin and E. Knill, IEEE Trans. Inf. Theo. **47** 3065 (2001); E. Knill, quant-ph/9608048. * (28) D. Gottesman, Chaos, Solitons, and Fractals **10**, 1749 (1999). * (29) D. Gottesman, PhD Thesis (California Institute of Technology, 1997), quant-ph/9705052. * (30) S. Bandyopadhyay, P. O. Boykin, V. Roychowdhury, and F. Vatan, Algorithmica **34**, 512 (2002). * (31) A. S. Holevo, Probl. Inform. Transm. **9**, 110 (1973). * (32) W. K. Wootters and B. D. Fields, Ann. Phys. **191**, 363 (1989). * (33) M. Mohseni and A. T. Rezakhani, in preparation (2007). * (34) K. S. Gibbons, M. J. Hoffman, and W. K. Wootters, Phys. Rev. A **70**, 062101 (2004). * (35) R. Kosut, I. A. Walmsley, and H. Rabitz, quant-ph/0411093. * (36) C. Ahn, A. C. Doherty, A. J. Landahl, Phys. Rev. A **65**, 042301 (2002). * (37) O. Oreshkov and T. A. Brun, Phys. Rev. Lett. **95**, 110409 (2005). * (38) M. Mohseni, A. T. Rezakhani, and A. Aspuru-Guzik, in preparation (2007).
The characterization of the dynamics of quantum systems is a task of both fundamental and practical importance. A general class of methods which have been developed in quantum information theory to accomplish this task is known as quantum process tomography (QPT). In an earlier paper [M. Mohseni and D. A. Lidar, Phys. Rev. Lett. **97**, 170501 (2006)] we presented a new algorithm for Direct Characterization of Quantum Dynamics (DCQD) of two-level quantum systems. Here we provide a generalization by developing a theory for direct and complete characterization of the dynamics of arbitrary quantum systems. In contrast to other QPT schemes, DCQD relies on quantum error-detection techniques and does not require any quantum state tomography. We demonstrate that for the full characterization of the dynamics of \\(n\\) \\(d\\)-level quantum systems (with \\(d\\) a power of a prime), the minimal number of required experimental configurations is reduced quadratically from \\(d^{4n}\\) in separable QPT schemes to \\(d^{2n}\\) in DCQD. pacs: 03.65.Wj,03.67.-a,03.67.Pp
# Casimir effect for curved geometries: PFA validity limits Holger Gies Institut fur Theoretische Physik, Philosophenweg 16, 69120 Heidelberg, Germany Klaus Klingmuller Institut fur Theoretische Physik, Philosophenweg 16, 69120 Heidelberg, Germany ###### pacs: 42.50.Lc,03.70.+k,11.10.-z Measurements of the Casimir force [1] have reached a precision level of 1% [2; 3; 4; 5; 6; 7; 8]. Further improvements are currently aimed at with intense efforts, owing to the increasing relevance of these quantum forces for nano- and micro-scale mechanical systems; also, Casimir precision measurements play a major role in the search for new sub-millimeter forces, resulting in important constraints for new physics [9; 10; 11; 12; 13]. On this level of precision, corrections owing to material properties, thermal fluctuations and geometry dependencies have to be accounted for [14; 15; 16; 17]. In order to reduce material corrections such as surface roughness and finite conductivity which are difficult to control with high precision, force measurements at larger surface separations up to the micron range are intended. Though this implies stronger geometry dependence, this latter effect is, in principle, under clean theoretical control, since it follows directly from quantum field theory [18]. Straightforward computations of geometry dependencies are conceptually complicated, since the relevant information is subtly encoded in the fluctuation spectrum. Analytic solutions can usually be found only for highly symmetric geometries. This problem is particularly prominent, since current and future precision measurements predominantly rely on configurations involving curved surfaces, such as a sphere above a plate. As a general recipe, the proximity force approximation (PFA) [19] has been the standard tool for estimating curvature effects for non-planar geometries in all experiments so far. The fact that the PFA is uncontrolled with unknown validity limits makes this approach highly problematic. Therefore, a technique is needed that facilitates Casimir computations from field-theoretic first principles. For this purpose, _worldline numerics_ has been developed [20], combining the string-inspired approach to quantum field theory [21] with Monte Carlo methods. As a main advantage, the worldline algorithm can be formulated for arbitrary geometries, resulting in a numerical estimate of the exact answer [22]. For the sphere-plate and cylinder-plate configurations, also new analytic methods are currently developed and latest results including exact solutions are given in [23; 24]. In either case, quantitatively accurate results for the experimentally relevant parameter ranges are missing so far. In this Letter, we use worldline numerics [20; 22] to examine the Casimir effect in a sphere-plate and cylinder-plate geometry for a fluctuating scalar field, obeying Dirichlet boundary conditions (\"Dirichlet scalar\"). We compute the Casimir interaction energies that give rise to forces between the rigid surfaces. Thereby, we quantitatively determine validity bounds for the PFA. Apart from numerical discretization, for which a careful error management on the 0.1% level is performed, no quantum-field-theoretic approximation is needed. We emphasize that the Casimir energies for the Dirichlet scalar should not be taken as an estimate for those for the electromagnetic (EM) field, leaving especially the sphere-plate case as a pressing open problem. 
Nevertheless, the validity constraints that we derive for the PFA hold independently of that, since the PFA approach makes no reference to the nature of the fluctuating field. If an experiment is performed outside the PFA validity ranges determined below, any comparison of the data with theory using the PFA has no firm basis. _Casimir curvature effects. -_ An intriguing property of the Casimir effect has always been its geometry dependence. As long as the typical curvature radii \\(R_{i}\\) of the surfaces are large compared to the surface separation \\(a\\), the PFA is assumed to provide for a good approximation. In this approach, the curved surfaces are viewed as a superposition of infinitesimal parallel plates [17; 19]. The Casimir interaction energy is obtained by an integration of the parallel-plate energy applied to the infinitesimal elements. Part of the curvature effect is introduced by the choice of a suitable integration measure which is generally ambiguous, as discussed, e.g., in [25]. For the case of a sphere with radius \\(R\\) at a (minimal) distance \\(a\\) from a plate, the PFA result at next-to-leading order reads \\[E_{\\rm PFA}(a,R) = E_{\\rm PFA}^{(0)}(a,R)\\left(1-\\genfrac{\\{}{\\}}{0.0pt}{}{1}{3} \\frac{a}{R}+{\\cal O}((\\frac{a}{R})^{2})\\right)\\!, \\tag{1}\\] \\[E_{\\rm PFA}^{(0)}(a,R)=-c_{\\rm PP}\\frac{\\pi^{3}}{1440}\\frac{R}{ a^{2}}, \\tag{2}\\] where the upper (lower) coefficient in braces holds for the so-called plate-based (sphere-based) PFA. They represent two limiting cases of the PFA and have often been assumed to span the error bars for the true result. Furthermore, \\(c_{\\rm PP}=2\\) for an EM field or a complex scalar, and \\(c_{\\rm PP}=1\\) for real scalar field fluctuation. Heuristically, the PFA is in contradiction with Heisenberg's uncertainty principle, since the quantum fluctuations are assumed to probe the surfaces only locally at each infinitesimal element. However, fluctuations are not localizable, but at least probe the surface in a whole neighborhood. In this manner, the curvature information enters the fluctuation spectrum. This quantum mechanism is immediately visible in the worldline formulation of the Casimir problem. Therein, the sum over fluctuations is mapped onto a Feynman path integral. Each path (worldline) can be viewed as a random spacetime trajectory of a quantum fluctuation. Owing to a generic spatial extent of the worldlines, the path integral directly samples the curvature properties of the surfaces [22]. For the Dirichlet scalar, the worldline representation of the Casimir interaction energy boils down to [22; 26] \\[E_{\\rm Casimir}=-\\frac{1}{2}\\frac{1}{(4\\pi)^{2}}\\int_{0}^{\\infty}\\frac{dT}{T ^{3}}\\,e^{-m^{2}T}\\,\\left<\\Theta_{\\Sigma}[x]\\right>_{x}. \\tag{3}\\] The expectation value in (3) has to be taken with respect to an ensemble of closed worldlines, \\[\\langle\\dots\\rangle_{x}:=\\int_{x(T)=x(0)}{\\cal D}x\\,\\dots e^{-\\frac{1}{4}\\int _{0}^{T}d\\tau\\dot{x}^{2}}, \\tag{4}\\] with implicit normalization \\(\\langle 1\\rangle_{x}=1\\). In Eq. (3), \\(\\Theta_{\\Sigma}[x]=1\\) if a worldline \\(x\\) intersects both surfaces \\(\\Sigma=\\Sigma_{1}+\\Sigma_{2}\\), and \\(\\Theta_{\\Sigma}[x]=0\\) otherwise. A worldline with \\(\\Theta_{\\Sigma}[x]=1\\) represents a boundary-condition violating fluctuation. Its removal from the set of admissible fluctuations contributes to the negative Casimir interaction energy. 
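The geometry enters Eq. (3) only through \\(\\Theta_{\\Sigma}[x]\\). As a rough, purely illustrative sketch of such an intersection test (ours, not the authors' v-loop implementation; the loop generation below is a crude Brownian-bridge stand-in, and the propertime and center-of-mass integrations with their proper measures are omitted), one can estimate \\(\\left<\\Theta_{\\Sigma}[x]\\right>\\) for loops of a given extent placed between a sphere and a plate:

```python
import numpy as np

rng = np.random.default_rng(1)

def unit_loop(N):
    """Approximately closed 3D Brownian-bridge loop with N points (a crude stand-in for a v loop)."""
    steps = rng.normal(size=(N, 3)) / np.sqrt(N)
    walk = np.cumsum(steps, axis=0)
    t = (np.arange(N) + 1.0)[:, None] / N
    loop = walk - t * walk[-1]          # force the endpoint back to the start
    return loop - loop.mean(axis=0)     # center of mass at the origin

def theta_sigma(loop, cm, scale, R, a):
    """1 if the scaled loop placed at cm intersects both the plate z=0 and
       the sphere of radius R centered at (0, 0, a + R), else 0."""
    pts = cm + scale * loop
    hits_plate  = pts[:, 2].min() <= 0.0
    hits_sphere = (np.linalg.norm(pts - np.array([0.0, 0.0, a + R]), axis=1) <= R).any()
    return float(hits_plate and hits_sphere)

R, a, scale = 1.0, 0.2, 0.6             # illustrative geometry and loop extent
loops = [unit_loop(1000) for _ in range(200)]
cm = np.array([0.0, 0.0, 0.5 * a])      # one sample center of mass between the surfaces
print(np.mean([theta_sigma(L, cm, scale, R, a) for L in loops]))
```

The fraction printed is the Monte Carlo estimate of how many fluctuations at this point violate both Dirichlet boundaries; in the full calculation this quantity is weighted and integrated over loop size and center-of-mass position to yield the interaction energy.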
We evaluate the worldline integral with Monte Carlo techniques, generating an ensemble of \\(n_{\\rm L}\\) worldlines with the v loop algorithm [22]. Each worldline is characterized by \\(N\\) points after discretizing its propertime. In this work, we have used ensembles with up to \\(n_{\\rm L}=2.5\\cdot 10^{5}\\) and \\(N=4\\cdot 10^{6}\\). Details of the algorithmic improvements used for this work will be given elsewhere [27]. Further advanced field-theoretic methods have been developed for Casimir calculations during the past years. Significant improvements compared to the PFA have been achieved by the semiclassical approximation [28], a functional-integral approach using boundary auxiliary fields [29], and the optical approximation [25]. These methods are especially useful for analyzing particular geometries by purely or partly analytical means; in the general case, approximations are often necessary but difficult to control. Hence, our results can also shed light on the quality of such approximations. _Sphere above plate._ - We consider a sphere with radius \\(R\\) above an infinite plate at a (minimal) distance \\(a\\). A contour plot of the energy density along a radial plane obtained by a pointwise evaluation of Eq. (3) is shown in Fig. 1. This density is related to the density of worldlines with a given center-of-mass that intersect both surfaces. Figure 2 presents a global view on the Casimir interaction energy for a wide range of the curvature parameter \\(a/R\\); the energy is normalized to the zeroth order of the PFA formula (2), \\(E_{\\rm PFA}^{(0)}\\). For small \\(a/R\\) ("large spheres"), our worldline result (crosses with error bars) and the full sphere- and plate-based PFA estimates (dashed-dotted lines) show reasonable agreement, settling at the zeroth-order PFA \\(E_{\\rm PFA}^{(0)}\\). The first field-theoretic confirmation of this result has been obtained within the semi-classical approximation in [28]. The full PFA departs on the percent level from \\(E_{\\rm PFA}^{(0)}\\) for \\(a/R\\gtrsim 0.01\\), exhibiting a relative energy decrease. By contrast, our worldline result first stays close to \\(E_{\\rm PFA}^{(0)}\\) and then increases towards larger energy values relative to \\(E_{\\rm PFA}^{(0)}\\). This observation confirms earlier worldline studies [22] and agrees with the optical approximation [25] in this curvature regime. Figure 1: Contour plot of the negative Casimir interaction energy density for a sphere of radius \\(R\\) above an infinite plate; the sphere-plate separation \\(a\\) has been chosen as \\(a=R\\) here. The plot results from a pointwise evaluation of Eq. (3) using worldlines with a common center of mass. Figure 2: Casimir interaction energy of a sphere with radius \\(R\\) and an infinite plate vs. the curvature parameter \\(a/R\\). The energy is normalized to the zeroth-order PFA formula (2), \\(E_{\\rm PFA}^{(0)}\\). For larger curvature parameter, the PFA estimate (dot-dashed line) differs qualitatively from the worldline result (crosses with error bars). Here, we observe good agreement of our result with the exact solution of [23] which is available for \\(a/R\\gtrsim 0.1\\) (dashed line). For larger curvature \\(a/R\\gtrsim 0.1\\) ("smaller spheres"),
The latter work also provides for an exact asymptotic limit for \\(a/R\\to\\infty\\), resulting in \\(180/\\pi^{4}\\) for our normalization. Our worldline data confirms this limit in Fig. 2. Two important lessons can be learned from this plot: first, the PFA already fails to predict the correct sign of the curvature effects beyond zeroth order, see also [30]. Second, the relation between the Casimir effect for Dirichlet scalars and that for the EM field is strongly geometry dependent. For the parallel-plate case, Casimir forces only differ by the number of degrees of freedom, cf. the coefficient \\(c_{\\rm PP}\\) in Eq. (1). For large curvature, the Casimir energy for the Dirichlet scalar scales with \\(a^{-2}\\), whereas that for the EM field obeys the Casimir-Polder law \\(\\sim a^{-4}\\)[32; 33]. We emphasize that this difference does not affect our conclusions about the validity limits of the PFA, because the PFA makes no reference to the fluctuating field other than the coefficient \\(c_{\\rm PP}\\). For a quantitative determination of the PFA validity limits, Fig. 3 displays the zeroth-order normalized energy for small curvature parameter \\(a/R\\). Here, our result has an accuracy of \\(0.1\\%\\) (jack-knife analysis). The error is dominated by the Monte Carlo sampling and the ordinary-integration accuracy; the error from the worldline discretization is found negligible in this regime, implying a sufficient proximity to the continuum limit. In addition to our numerical error band, we consider the region between the sphere- and the plate-based PFA as the PFA error band. We identify the \\(0.1\\%\\) accuracy limit of the PFA with the curvature parameter \\(a/R|_{0.1\\%}\\) where the two bands do no longer overlap. We obtain \\[\\frac{a}{R}\\big{|}_{0.1\\%}^{\\rm PFA}\\leq 0.00073 \\tag{5}\\] as the corresponding validity range for the curvature parameter. For instance, for a typical sphere with \\(R=200\\mu\\)m and an experimental accuracy goal of \\(0.1\\%\\), the PFA should not be used for \\(a\\gtrsim 150\\)nm. We conclude that the PFA should be dropped from the analysis of future experiments. For the \\(1\\%\\) accuracy limit of the PFA, we increase the band of our worldline estimate by this size and again determine the curvature parameter for which there is no intersection with the PFA band anymore. We obtain \\[\\frac{a}{R}\\big{|}_{1\\%}^{\\rm PFA}\\leq 0.00755. \\tag{6}\\] For a sphere with \\(R=200\\mu\\)m and an experimental accuracy goal of \\(1\\%\\), the PFA holds for \\(a<1.5\\mu\\)m. This result confirms the use of the PFA for the data analysis of the corresponding experiments performed so far. In order to study the asymptotic expansion of the normalized energy, we fit our data to a second-order polynomial for \\(a/R<0.1\\) and include the exactly known result for \\(a/R\\to 0\\). We obtain \\(p(x)\\simeq 1+0.35x-1.92x^{2}\\pm 0.19x\\sqrt{1-137.2x+5125x^{2}}\\), where \\(x=a/R\\). The fit result is plotted in Fig. 3 (dashed lines), which illustrates that \\(E\\simeq E_{\\rm PFA}^{(0)}\\,p(a/R)\\) is a satisfactory approximation to the Casimir energy for \\(a/R<0.1\\), replacing the PFA (1). The inlay in this figure displays the same curves with a linear \\(a/R\\) axis, illustrating that the lowest-order curvature effect is linear in \\(a/R\\). Given the results of the PFA (1), the semiclassical approximation [28], \\(p_{sc}(x)\\simeq 1-0.17x\\), cf. 
[23], and the optical approximation [25], \\(p_{opt}(x)\\simeq 1+0.05x\\), the latter appears to estimate curvature effects more appropriately. _Cylinder above plate._ - The cylinder-plate configuration is a promising tool for high-precision experiments [31], since the force signal increases linearly with the cylinder length. Figure 4 shows the corresponding Casimir interaction energy versus the curvature parameter. The energy axis is again normalized to the zeroth-order PFA result, \\(E_{\\rm PFA}^{(0)}(a,R)=-c_{\\rm PP}\\frac{3\\pi}{4\\sqrt{2}}\\,\\frac{R^{1/2}}{a^{3/2}}\\). The qualitative conclusions for the validity of the PFA are similar to that for the sphere above a plate: beyond leading order, the PFA even predicts the wrong sign of the curvature effects. Quantitatively, the PFA validity limits are a factor \\(\\sim 3\\) larger than Eqs. (5),(6), owing to the absence of curvature along the cylinder axis. The most important difference to the sphere-plate case arises for large \\(a/R\\). Here, the data is compatible with a log-like increase relative to \\(E_{\\rm PFA}^{(0)}\\), implying a surprisingly weak decrease of the Casimir force for large curvature \\(a/R\\to\\infty\\). Our result agrees nicely with the very recent exact result [24] which is available for \\(a/R\\gtrsim 0.1\\). The data thus confirms the observation of [24] that the resulting Casimir force has the weakest possible decay, \\(F\\sim 1/[a^{3}\\ln(a/R)]\\), for asymptotically large curvature parameter \\(a/R\\to\\infty\\). Figure 3: Magnified view of Fig. 2 for small \\(a/R\\). The \\(0.1\\%\\) validity range of the PFA is characterized by curvature parameters, where the error band of our worldline results and the PFA band (blue-shaded/in between the dot-dashed lines) overlap, see Eq. (5). The dashed lines depict a constrained polynomial fit of the worldline result, \\(p(a/R)=1+0.35(a/R)-1.92(a/R)^{2}\\), and its standard deviation. The inlay displays the same curves with a linear \\(a/R\\) axis. In summary, we have computed Casimir interaction energies for the sphere-plate and cylinder-plate configuration with Dirichlet boundary conditions from first principles for a wide range of curvature parameters \\(a/R\\). In general, we observe that curvature effects and geometry dependencies are intriguingly rich, implying that naive estimates can easily be misguiding. In particular, predictions based on the PFA are only reliable in the asymptotic no-curvature limit. Its quantitative validity bounds given above and thus genuine Casimir curvature effects are in reach of currently planned experiments. Beyond the Dirichlet scalar investigated here, it is well possible, e.g., for the EM field, that some cancellation of curvature effects occurs between modes obeying different boundary conditions. In fact, such a partial cancellation between TE and TM modes of the separable cylinder-plate geometry can be observed in the recent exact result for the EM field for small curvature [24]. Casimir calculations for the EM field in non-separable geometries, such as the important sphere-plate case, therefore remain a prominent open problem. The authors are grateful to T. Emig, R.L. Jaffe, A. Scardicchio, and A. Wirzba for useful discussions. The authors acknowledge support by the DFG under contract Gi 328/1-3 (Emmy-Noether program) and Gi 328/3-2. ## References * (1) H.B.G. Casimir, Kon. Ned. Akad. Wetensch. Proc. **51**, 793 (1948). * (2) S. K. Lamoreaux, Phys. Rev. Lett. **78**, 5 (1997). * (3) U. Mohideen and A. Roy, Phys. Rev. Lett.
**81**, 4549 (1998); * (4) A. Roy, C. Y. Lin and U. Mohideen, Phys. Rev. D **60**, 111101 (1999). * (5) T. Ederth, Phys. Rev. A **62**, 062104 (2000) * (6) H.B. Chan, V.A. Aksyuk, R.N. Kleiman, D.J. Bishop and F. Capasso, Science 291, 1941 (2001). * (7) G. Bressi, G. Carugno, R. Onofrio and G. Ruoso, Phys. Rev. Lett. **88**, 041804 (2002). * (8) F. Chen, U. Mohideen, G.L. Klimchitskaya and V.M. Mostepanenko, Phys. Rev. Lett. **88**, 101801 (2002). * (9) M. Bordag, B. Geyer, G. L. Klimchitskaya and V. M. Mostepanenko, Phys. Rev. D **58**, 075003 (1998), _ibid._**60**, 055004 (1999), _ibid._**62**, 011701 (2000). * (10) J. C. Long, H. W. Chan and J. C. Price, Nucl. Phys. B **539**, 23 (1999). * (11) V. M. Mostepanenko and M. Novello, Phys. Rev. D **63**, 115003 (2001). * (12) K. A. Milton, R. Kantowski, C. Kao and Y. Wang, Mod. Phys. Lett. A **16**, 2281 (2001). * (13) R.S. Decca, E. Fischbach, G.L. Klimchitskaya, D.E. Krause, D.L. Lopez and V.M. Mostepanenko, Phys. Rev. D **68**, 116003 (2003), R.S. Decca, D. Lopez, H.B. Chan, E. Fischbach, D.E. Krause and C.R. Jamell, Phys. Rev. Lett. **94**, 240401 (2005). * (14) G.L. Klimchitskaya, A. Roy, U. Mohideen, and V.M. Mostepanenko, Phys. Rev. A **60**, 3487 (1999). * (15) A. Lambrecht and S. Reynaud, Eur. Phys. J. D **8**, 309 (2000). * (16) V.B. Bezerra, G.L. Klimchitskaya, and V.M. Mostepanenko, Phys. Rev. A 62, 014102 (2000). * (17) M. Bordag, U. Mohideen and V. M. Mostepanenko, Phys. Rept. **353**, 1 (2001). * (18) N. Graham, R. L. Jaffe, V. Khemani, M. Quandt, M. Scandurra and H. Weigel, Nucl. Phys. B **645**, 49 (2002). * (19) B.V. Derjaguin, I.I. Abrikosova, E.M. Lifshitz, Q.Rev. **10**, 295 (1956); J. Blocki, J. Randrup, W.J. Swiatecki, C.F. Tsang, Ann. Phys. (N.Y.) **105**, 427 (1977). * (20) H. Gies and K. Langfeld, Nucl. Phys. B **613**, 353 (2001); Int. J. Mod. Phys. A **17**, 966 (2002). * (21) see, e.g., C. Schubert, Phys. Rept. **355**, 73 (2001). * (22) H. Gies, K. Langfeld and L. Moyaerts, JHEP **0306**, 018 (2003); arXiv:hep-th/0311168. * (23) A. Bulgac, P. Magierski and A. Wirzba, arXiv:hep-th/0511056; A. Wirzba, A. Bulgac and P. Magierski, arXiv:quant-ph/0511057. * (24) T. Emig, R. L. Jaffe, M. Kardar, A. Scardicchio, cond-mat/0601055. * (25) A. Scardicchio and R. L. Jaffe, Nucl. Phys. B **704**, 552 (2005); Phys. Rev. Lett. **92**, 070402 (2004). * (26) H. Gies and K. Klingmuller, arXiv:hep-th/0511092. * (27) H. Gies and K. Klingmuller, in preparation. * (28) M. Schaden and L. Spruch, Phys. Rev. A **58**, 935 (1998); Phys. Rev. Lett. **84** 459 (2000) * (29) R. Golestanian and M. Kardar, Phys. Rev. A **58**, 1713 (1998); T. Emig, A. Hanke and M. Kardar, Phys. Rev. Lett. **87** (2001) 260402; T. Emig and R. Buscher, Nucl. Phys. B **696**, 468 (2004). * (30) I. Brevik, E.K. Dahl and G.O. Myhr, J. Phys. A **38**, L49 (2005). * (31) M. Brown-Hayes, D.A.R. Dalvit, F.D. Mazzitelli, W.J. Kim and R. Onofrio, Phys. Rev. A **72**, 052102 (2005). * (32) H.B.G. Casimir and D. Polder, Phys. Rev. **73**, 360 (1948). * (33) V. Druzhinina and M. DeKieviet, Phys. Rev. Lett. **91**, 193202 (2003). Figure 4: Casimir interaction energy (normalized to \\(E_{\\rm{PPA}}^{(0)}\\)) of an infinitely long cylinder with radius \\(R\\) at a distance \\(a\\) above an infinite plate vs. the curvature parameter \\(a/R\\). The inlay shows a magnified view for small values of \\(a/R\\).
We compute Casimir interaction energies for the sphere-plate and cylinder-plate configuration induced by scalar-field fluctuations with Dirichlet boundary conditions. Based on a high-precision calculation using worldline numerics, we quantitatively determine the validity bounds of the proximity force approximation (PFA) on which the comparison between all corresponding experiments and theory is based. We observe the quantitative failure of the PFA on the 1% level for a curvature parameter \\(a/R>0.00755\\). Even qualitatively, the PFA fails to predict reliably the correct sign of genuine Casimir curvature effects. We conclude that data analysis of future experiments aiming at a precision of 0.1% must no longer be based on the PFA.
# Equation of state for isospin asymmetric nuclear matter using Lane potential D.N. Basu\\({}^{1}\\), P. Roy Chowdhury\\({}^{2}\\) and C. Samanta\\({}^{2,3}\\) \\({}^{1}\\)Variable Energy Cyclotron Centre, 1/AF Bidhan Nagar, Kolkata 700 064, India \\({}^{2}\\)Saha Institute of Nuclear Physics, 1/AF Bidhan Nagar, Kolkata 700 064, India \\({}^{3}\\)Physics Department, Virginia Commonwealth University, Richmond, VA 23284-2000, U.S.A. E-mail: [email protected] ## 1 Introduction The equation of state (EOS) of dense isospin asymmetric nuclear matter determines most of the gross properties of neutron stars and hence it is of considerable interest in astrophysics. Nuclear matter is an idealized system of nucleons interacting strongly through nuclear forces but without Coulomb forces and is translationally invariant with a fixed ratio of neutrons to protons. The nuclear EOS, which is the energy per nucleon E/A = \\(\\epsilon\\) of nuclear matter as a function of nucleonic density \\(\\rho\\), can be used to obtain the bulk properties of nuclear matter such as the nuclear incompressibility [1],[2], the energy density and the pressure needed for neutron star calculations, and the velocity of sound in the nuclear medium for predictions of shock wave generation and propagation. The EOS is also of fundamental importance in the theories of nucleus-nucleus collisions at energies where the nuclear incompressibility \\(K\\) comes into play as well as in the theories of supernova explosions [3]. In the present work we obtain an EOS for nuclear matter using the M3Y-Reid-Elliott effective interaction supplemented by a zero range pseudo-potential along with the density dependence. The density dependence parameters of the interaction are obtained by reproducing the saturation energy per nucleon and the saturation density of cold infinite spin and isospin symmetric nuclear matter (SNM). One of the density dependence parameters, which can be interpreted as the isospin averaged nucleon-nucleon interaction cross section in the ground state symmetric nuclear medium, is also used to provide an estimate of the nuclear mean free path. The EOS for isospin asymmetric nuclear matter is then calculated by adding to the isoscalar part the Lane [4], or isovector, component [5] of the M3Y interaction, which does not contribute to the EOS of SNM. These EOS are then used to calculate the pressure, the energy density and the velocity of sound in symmetric as well as isospin asymmetric nuclear matter and pure neutron matter (PNM). The M3Y interaction was derived by fitting its matrix elements in an oscillator basis to those elements of the G-matrix obtained with the Reid-Elliott soft-core NN interaction. The ranges of the M3Y forces were chosen to ensure a long-range tail of the one-pion exchange potential as well as a short range repulsive part simulating the exchange of heavier mesons [6]. The real part of the nuclear interaction potential obtained by folding in the density distribution functions of two interacting nuclei with the density dependent M3Y effective interaction supplemented by a zero-range pseudo-potential (DDM3Y) was shown to provide good descriptions for medium and high energy \\(\\alpha\\) and heavy ion elastic scatterings [7],[8],[9]. The zero-range pseudo-potential represented the single-nucleon exchange term while the density dependence accounted for the higher order exchange effects and the Pauli blocking effects.
The real part of the proton-nucleus interaction potential obtained by folding in the density distribution function of the interacting nucleus with the DDM3Y effective interaction is found to provide good descriptions of elastic and inelastic scatterings of high energy protons [10] and proton radioactivity [11]. Since the density dependence of the effective projectile-nucleon interaction was found to be fairly independent of the projectile [12], as long as the projectile-nucleus interaction was amenable to a single-folding prescription, the density dependent effects on the nucleon-nucleon interaction were factorized into a target term times a projectile term and used successfully in case of \\(\\alpha\\) radioactivity of nuclei [13] including superheavies [14] and the cluster radioactivity [13]. ## 2 The density dependent effective nucleon-nucleon interaction: isoscalar and isovector components The central part of the effective interaction between two nucleons 1 and 2 can be written as [7] \\[v_{12}(s)=v_{00}(s)+v_{01}(s)\\tau_{1}.\\tau_{2}+v_{10}(s)\\sigma_{1}.\\sigma_{2}+v_{11}(s)\\sigma_{1}.\\sigma_{2}\\ \\tau_{1}.\\tau_{2} \\tag{1}\\] where \\(\\tau_{1},\\tau_{2}\\) are the isospins and \\(\\sigma_{1},\\sigma_{2}\\) are the spins of nucleons 1,2. In the case of SNM only the first term, the isoscalar term, contributes, whereas for isospin asymmetric, spin symmetric nuclear matter only the first two terms, the isoscalar and the isovector (Lane) terms, contribute, and for spin-isospin asymmetric nuclear matter all four terms of Eq.(1) contribute. Considering only the isospin asymmetric-spin symmetric nuclear matter, the neutron-neutron, proton-proton, neutron-proton and proton-neutron interactions, _viz._ \\(v_{nn},v_{pp},v_{np}\\) and \\(v_{pn}\\), respectively, can be given by the following: \\[v_{nn}=v_{pp}=v_{00}+v_{01},\\ \\ \\ \\ v_{np}=v_{pn}=v_{00}-v_{01} \\tag{2}\\] The general expression for the density dependent effective NN interaction potential is written as [11] \\[v_{00}(s,\\rho,\\epsilon)=t_{00}^{M3Y}(s,\\epsilon)g(\\rho,\\epsilon),\\ \\ \\ \\ v_{01}(s,\\rho,\\epsilon)=t_{01}^{M3Y}(s,\\epsilon)g(\\rho,\\epsilon) \\tag{3}\\] where the isoscalar \\(t_{00}^{M3Y}\\) and the isovector \\(t_{01}^{M3Y}\\) components of the M3Y interaction potentials [7] supplemented by zero range potentials are given by the following: \\[t_{00}^{M3Y}(s,\\epsilon)=7999\\frac{\\exp(-4s)}{4s}-2134\\frac{\\exp(-2.5s)}{2.5s}-276(1-\\alpha\\epsilon)\\delta(s) \\tag{4}\\] and \\[t_{01}^{M3Y}(s,\\epsilon)=-4886\\frac{\\exp(-4s)}{4s}+1176\\frac{\\exp(-2.5s)}{2.5s}+228(1-\\alpha\\epsilon)\\delta(s) \\tag{5}\\] where \\(s\\) is the distance between two interacting nucleons and the energy dependence parameter \\(\\alpha=0.005\\ {\\rm MeV}^{-1}\\). The zero-range potentials of Eqs.(4,5) represent the single-nucleon exchange term. The density dependent part appearing in Eqs.(3) [15] has been taken to be of a general form \\[g(\\rho,\\epsilon)=C(1-\\beta(\\epsilon)\\rho^{n}) \\tag{6}\\] which takes care of the higher order exchange effects and the Pauli blocking effects. This density dependence changes sign at high densities, which is of crucial importance in fulfilling the saturation condition as well as giving different \\(K_{0}\\) values with different values of \\(n\\) for the nuclear EOS [15]. The value of the parameter \\(n=2/3\\) was originally taken by Myers in the single folding calculation [16].
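To give a feel for the numbers entering Eqs. (2), (4) and (5), the short sketch below (our illustration) evaluates the finite-range Yukawa parts of \\(t_{00}^{M3Y}\\) and \\(t_{01}^{M3Y}\\) at a few internucleon distances and combines them into the nn and np channels; the zero-range \\(\\delta(s)\\) terms and the density dependence \\(g(\\rho,\\epsilon)\\) are left out here, since they enter only through volume integrals and the overall factor of Eq. (3):

```python
import numpy as np

def t00_finite(s):
    # finite-range (Yukawa) part of the isoscalar M3Y interaction, Eq. (4), in MeV (s in fm)
    return 7999.0 * np.exp(-4.0 * s) / (4.0 * s) - 2134.0 * np.exp(-2.5 * s) / (2.5 * s)

def t01_finite(s):
    # finite-range (Yukawa) part of the isovector (Lane) M3Y interaction, Eq. (5), in MeV
    return -4886.0 * np.exp(-4.0 * s) / (4.0 * s) + 1176.0 * np.exp(-2.5 * s) / (2.5 * s)

for s in (0.5, 1.0, 1.5, 2.0):
    v00, v01 = t00_finite(s), t01_finite(s)
    # Eq. (2): v_nn = v_pp = v00 + v01,   v_np = v_pn = v00 - v01
    print(f"s = {s:3.1f} fm   v_nn = v_pp: {v00 + v01:9.2f} MeV   v_np = v_pn: {v00 - v01:9.2f} MeV")
```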
In fact \\(n=2/3\\) also has a physical meaning because then \\(\\beta\\) can be interpreted as an 'in medium' effective nucleon-nucleon interaction cross-section \\(\\sigma_{0}\\) while the density dependent term represents interaction probability. This value of \\(\\beta\\) along with nucleonic density of infinite nuclear matter \\(\\rho_{0}\\) can also provide the nuclear mean free path \\(\\lambda=1/(\\rho_{0}\\sigma_{0})\\). Moreover, it also worked well in the single folding calculations for inelastic and elastic scatterings of high energy protons [10], proton radioactivity [11] and in the double folding calculations with the factorized density dependence for \\(\\alpha\\) radioactivity of nuclei [13] including superheavies [14] and the cluster radioactivity [13]. ## 3 Symmetric and isospin asymmetric nuclear matter calculations The isospin asymmetry \\(X\\) can be conveniently defined as \\[X=\\frac{\\rho_{n}-\\rho_{p}}{\\rho_{n}+\\rho_{p}},\\ \\ \\ \\ \\rho=\\rho_{n}+\\rho_{p}, \\tag{7}\\] where \\(\\rho_{n}\\), \\(\\rho_{p}\\) and \\(\\rho\\) are the neutron, proton and nucleonic densities respectively. The asymmetry parameter \\(X\\) can have values between -1 to +1, corresponding to pure proton matter and pure neutron matter respectively, while for SNM it becomes zero. For a single neutron interacting with rest of nuclear matter with isospin asymmetry \\(X\\), the interaction energy per unit volume at \\(s\\) is given by the following: \\[\\rho_{n}v_{nn}(s)+\\rho_{p}v_{np}(s)= \\rho_{n}[v_{00}(s)+v_{01}(s)]+\\rho_{p}[v_{00}(s)-v_{01}(s)] \\tag{8}\\] \\[= [v_{00}(s)+v_{01}(s)X]\\rho,\\]while in case of a single proton interacting with rest of nuclear matter with isospin asymmetry \\(X\\), the interaction energy per unit volume at \\(s\\) is given by the following: \\[\\rho_{n}v_{pn}(s)+\\rho_{p}v_{pp}(s)= \\rho_{n}[v_{00}(s)-v_{01}(s)]+\\rho_{p}[v_{00}(s)+v_{01}(s)] \\tag{9}\\] \\[= [v_{00}(s)-v_{01}(s)X]\\rho,\\] Summing the contributions for protons and neutrons and integrating over the entire volume of the infinite nuclear matter and multiplying by the factor \\(\\frac{1}{2}\\) to ignore the double counting in the process, the potential energy per nucleon \\(\\epsilon_{pot}\\) can be obtained by dividing the total potential energy by the total number of nucleons, \\[\\epsilon_{pot}=\\frac{g(\\rho,\\epsilon)\\rho J_{v}}{2}, \\tag{10}\\] where \\[J_{v}=J_{v00}+X^{2}J_{v01}=\\int\\int\\int[t_{00}^{M3Y}(s,\\epsilon)+t_{01}^{M3Y} (s,\\epsilon)X^{2}]d^{3}s. \\tag{11}\\] Assuming interacting Fermi gas of neutrons and protons, the kinetic energy per nucleon \\(\\epsilon_{kin}\\) turns out to be \\[\\epsilon_{kin}=[\\frac{3\\hbar^{2}k_{F}^{2}}{10m}]F(X),\\ \\ \\ \\ F(X)=[\\frac{(1+X)^{5/3}+(1-X)^{5/3}}{2}], \\tag{12}\\] where \\(m\\) is the nucleonic mass equal to 938.91897 \\(MeV/c^{2}\\) and \\(k_{F}\\), which becomes equal to Fermi momentum in case of the SNM, is given by the following: \\[k_{F}^{3}=1.5\\pi^{2}\\rho, \\tag{13}\\] The two parameters of Eq.(6), \\(C\\) and \\(\\beta\\), are determined by reproducing the saturation conditions. It is worthwhile to mention here that due to attractive character of the M3Y forces the saturation condition for cold nuclear matter is not fulfilled. However, the realistic description of nuclear matter properties can be obtained with this density dependent M3Y effective interaction. 
Therefore, the density dependence parameters have been obtained by reproducing the saturation energy per nucleon and the saturation nucleonic density of the cold SNM. The energy per nucleon \\(\\epsilon=\\epsilon_{kin}+\\epsilon_{pot}\\) obtained for the cold SNM for which \\(X=0\\) is given by the following:\\[\\epsilon=[\\frac{3\\hbar^{2}k_{F}^{2}}{10m}]+\\frac{g(\\rho,\\epsilon)\\rho J_{v00}}{2} \\tag{14}\\] where \\(J_{v00}\\) represents the volume integral of the isoscalar part of the M3Y interaction supplemented by the zero-range potential having the form \\[J_{v00}(\\epsilon)=\\int\\int\\int t_{00}^{M3Y}(s,\\epsilon)d^{3}s=7999\\frac{4\\pi}{ 4^{3}}-2134\\frac{4\\pi}{2.5^{3}}-276(1-\\alpha\\epsilon) \\tag{15}\\] The Eq.(14) can be rewritten with the help of Eq.(6) as \\[\\epsilon=[\\frac{3\\hbar^{2}k_{F}^{2}}{10m}]+[\\frac{\\rho J_{v00}C(1-\\beta\\rho^{n} )}{2}] \\tag{16}\\] and differentiated with respect to \\(\\rho\\) to yield equation \\[\\frac{\\partial\\epsilon}{\\partial\\rho}=[\\frac{\\hbar^{2}k_{F}^{2}}{5m\\rho}]+ \\frac{J_{v00}C}{2}[1-(n+1)\\beta\\rho^{n}] \\tag{17}\\] The equilibrium density of the cold SNM is determined from the saturation condition \\(\\frac{\\partial\\epsilon}{\\partial\\rho}=0\\). Then Eq.(16) and Eq.(17) with the saturation condition can be solved simultaneously for fixed values of the saturation energy per nucleon \\(\\epsilon_{0}\\) and the saturation density \\(\\rho_{0}\\) of the cold SNM to obtain the values of the density dependence parameters \\(\\beta\\) and C. Density dependence parameters \\(\\beta\\) and C, thus obtained, can be given by the following: \\[\\beta=\\frac{[(1-p)\\rho_{0}^{-n}]}{[(3n+1)-(n+1)p]}, \\tag{18}\\] where \\[p=\\frac{[10m\\epsilon_{0}]}{[\\hbar^{2}k_{F_{0}}^{2}]}, \\tag{19}\\] and \\[k_{F_{0}}=[1.5\\pi^{2}\\rho_{0}]^{1/3}, \\tag{20}\\] \\[C=-\\frac{[2\\hbar^{2}k_{F_{0}}^{2}]}{5mJ_{v00}\\rho_{0}[1-(n+1)\\beta\\rho_{0}^{n} ]}, \\tag{21}\\] respectively. It is quite obvious that the density dependence parameter \\(\\beta\\) obtained by this method depends only on the saturation energy per nucleon \\(\\epsilon_{0}\\), the saturation density \\(\\rho_{0}\\) and the index \\(n\\) of the density dependent part but not on the parameters of the M3Y interaction while the other density dependence parameter \\(C\\) depends on the parameters of the M3Y interaction also through the volume integral \\(J_{v00}\\). The incompressibility \\(K_{0}\\) of the cold SNM which is defined as \\[K_{0}=k_{F}^{2}\\frac{\\partial^{2}\\epsilon}{\\partial k_{F}^{2}}\\mid_{k_{F}=k_{F_{ 0}}}=9\\rho^{2}\\frac{\\partial^{2}\\epsilon}{\\partial\\rho^{2}}\\mid_{\\rho=\\rho_{0}} \\tag{22}\\] can be theoretically obtained using Eq.(13), Eq.(17) and Eq.(22) as \\[K_{0}=[-(\\frac{3\\hbar^{2}k_{F_{0}}^{2}}{5m})]-[\\frac{9J_{v00}Cn(n+1)\\beta\\rho_ {0}^{n+1}}{2}] \\tag{23}\\] Since the product \\(J_{v00}C\\) appears in the above equation, a cursory glance reveals that the incompressibility \\(K_{0}\\) depends only upon the saturation energy per nucleon \\(\\epsilon_{0}\\), the saturation density \\(\\rho_{0}\\) and the index \\(n\\) of the density dependent part of the interaction but not on the parameters of the M3Y interaction. 
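Since Eqs.(18)-(21) and (23) are closed-form expressions, the determination of the density dependence parameters and of the incompressibility is easy to check numerically. The following Python sketch is purely illustrative and is not part of the original work; the function and variable names are ours, the inputs \(\rho_{0}=0.1533\ fm^{-3}\), \(\epsilon_{0}=-15.26\) MeV and \(\alpha=0.005\ MeV^{-1}\) are the values quoted in the text, and the constants \(\hbar c=197.327\) MeV fm and \(mc^{2}=938.91897\ MeV\) are assumed standard values.

```python
import math

HBARC = 197.327      # hbar*c in MeV fm (assumed standard value)
MC2   = 938.91897    # nucleon rest energy in MeV, as quoted in the text
ALPHA = 0.005        # energy dependence parameter in MeV^-1

def J_v00(eps):
    """Volume integral of the isoscalar M3Y part plus the zero-range term, Eq.(15)."""
    return 7999*4*math.pi/4**3 - 2134*4*math.pi/2.5**3 - 276*(1 - ALPHA*eps)

def saturation_parameters(rho0=0.1533, eps0=-15.26, n=2.0/3.0):
    """Density dependence parameters and incompressibility from Eqs.(18)-(21) and (23)."""
    kF0sq = (1.5*math.pi**2*rho0)**(2.0/3.0)                  # Eq.(20), squared
    p = 10*MC2*eps0/(HBARC**2*kF0sq)                          # Eq.(19)
    beta = (1 - p)*rho0**(-n)/((3*n + 1) - (n + 1)*p)         # Eq.(18)
    Jv00 = J_v00(eps0)                                        # evaluated at saturation
    C = -2*HBARC**2*kF0sq/(5*MC2*Jv00*rho0*(1 - (n + 1)*beta*rho0**n))        # Eq.(21)
    K0 = -3*HBARC**2*kF0sq/(5*MC2) - 4.5*Jv00*C*n*(n + 1)*beta*rho0**(n + 1)  # Eq.(23)
    return beta, C, K0

beta, C, K0 = saturation_parameters()
print(f"beta = {beta:.3f} fm^2, C = {C:.3f}, K0 = {K0:.1f} MeV")
print(f"nuclear mean free path = {1.0/(0.1533*beta):.2f} fm")   # lambda = 1/(rho0*beta)
# For n = 2/3 this gives beta close to 1.67 fm^2 and K0 close to 293 MeV,
# consistent with the values quoted in the text.
```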
The energy per nucleon for nuclear matter with isospin asymmetry \(X\) can be rewritten as \[\epsilon= [\frac{3\hbar^{2}k_{F}^{2}}{10m}]F(X)+(\frac{\rho J_{v}C}{2})(1-\beta\rho^{n}) \tag{24}\] \[= [\frac{3\hbar^{2}k_{F}^{2}}{10m}]F(X)-[\frac{\rho}{\rho_{0}}][\frac{J_{v}}{J_{v00}}][\frac{\hbar^{2}k_{F_{0}}^{2}(1-\beta\rho^{n})}{5m[1-(n+1)\beta\rho_{0}^{n}]}]\] where \(J_{v}=J_{v00}+X^{2}J_{v01}\) and \(J_{v01}\) represents the volume integral of the isovector part of the M3Y interaction supplemented by the zero-range potential having the form \[J_{v01}(\epsilon)=\int\int\int t_{01}^{M3Y}(s,\epsilon)d^{3}s=-4886\frac{4\pi}{4^{3}}+1176\frac{4\pi}{2.5^{3}}+228(1-\alpha\epsilon) \tag{25}\] The pressure \(P\) and the energy density \(\varepsilon\) of nuclear matter with isospin asymmetry \(X\) can be given by the following: \[P=\rho^{2}\frac{\partial\epsilon}{\partial\rho}=[\frac{\rho\hbar^{2}k_{F}^{2}}{5m}]F(X)+[\frac{\rho^{2}J_{v}C}{2}][1-(n+1)\beta\rho^{n}], \tag{26}\] \[\varepsilon=\rho(\epsilon+mc^{2})=\rho[(\frac{3\hbar^{2}k_{F}^{2}}{10m})F(X)+(\frac{\rho J_{v}C}{2})(1-\beta\rho^{n})+mc^{2}], \tag{27}\] respectively, and thus the velocity of sound \(v_{s}\) in nuclear matter with isospin asymmetry \(X\) is given by the following: \[\frac{v_{s}}{c}=\sqrt{\frac{\partial P}{\partial\varepsilon}}=\sqrt{\frac{[2\rho\frac{\partial\epsilon}{\partial\rho}-\frac{\hbar^{2}k_{F}^{2}}{15m}F(X)-\frac{J_{v}Cn(n+1)\beta\rho^{n+1}}{2}]}{[\epsilon+mc^{2}+\rho\frac{\partial\epsilon}{\partial\rho}]}} \tag{28}\] The incompressibilities for isospin asymmetric nuclear matter are evaluated at the saturation densities \(\rho_{s}\) with the condition \(\frac{\partial\epsilon}{\partial\rho}=0\) which corresponds to vanishing pressure. The incompressibility \(K_{0}\) for isospin asymmetric nuclear matter is therefore expressed as the following: \[K_{0}=[-(\frac{3\hbar^{2}k_{F}^{2}}{5m})]F(X)-[\frac{9J_{v}Cn(n+1)\beta\rho_{s}^{n+1}}{2}] \tag{29}\] where \(k_{F}\) is now evaluated at the saturation density \(\rho_{s}\) using Eq.(13) and \(J_{v}=J_{v00}+X^{2}J_{v01}\). ## 4 Calculations of energy per nucleon, pressure, energy density and velocity of sound for symmetric nuclear matter and neutron matter The calculations have been performed using the values of the saturation density \(\rho_{0}=0.1533fm^{-3}\)[17] and the saturation energy per nucleon \(\epsilon_{0}=-15.26MeV\)[18] for the SNM obtained from the coefficient of the volume term of the Bethe-Weizsacker mass formula which is evaluated by fitting the recent experimental and estimated atomic mass excesses from the Audi-Wapstra-Thibault atomic mass table [19] by minimizing the mean square deviation. For a fixed value of \(\beta\), the parameters \(\alpha\) and \(C\) can have any possible simultaneous values as determined from SNM. Using the usual value of \(\alpha=0.005MeV^{-1}\) for the parameter of energy dependence of the zero range potential, the values obtained for the density dependence parameters \(C\) and \(\beta\) are presented in Table-1 for different values of the parameter \(n\) along with the corresponding values of the incompressibility \(K_{0}\). Smaller \(n\) values predict a softer EOS while higher values predict a stiffer EOS. 
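Because \(J_{v}\) depends on \(\epsilon\) through the factor \((1-\alpha\epsilon)\), Eq.(24) is an implicit equation for the energy per nucleon and has to be solved self-consistently; Eqs.(26)-(28) then follow by direct evaluation. The sketch below is again only an illustration, not code from the paper: the function names are ours, the same assumed values of \(\hbar c\) and \(mc^{2}\) are used as in the previous snippet (repeated here so that the block is self-contained), and a simple fixed-point iteration handles the self-consistency.

```python
import math

HBARC = 197.327      # hbar*c in MeV fm (assumed standard value)
MC2   = 938.91897    # nucleon rest energy in MeV
ALPHA = 0.005        # energy dependence parameter in MeV^-1

def J_v00(eps):      # Eq.(15)
    return 7999*4*math.pi/4**3 - 2134*4*math.pi/2.5**3 - 276*(1 - ALPHA*eps)

def J_v01(eps):      # Eq.(25)
    return -4886*4*math.pi/4**3 + 1176*4*math.pi/2.5**3 + 228*(1 - ALPHA*eps)

def density_dependence(rho0=0.1533, eps0=-15.26, n=2.0/3.0):
    """beta and C from the saturation conditions, Eqs.(18)-(21)."""
    kF0sq = (1.5*math.pi**2*rho0)**(2.0/3.0)
    p = 10*MC2*eps0/(HBARC**2*kF0sq)
    beta = (1 - p)*rho0**(-n)/((3*n + 1) - (n + 1)*p)
    C = -2*HBARC**2*kF0sq/(5*MC2*J_v00(eps0)*rho0*(1 - (n + 1)*beta*rho0**n))
    return beta, C

def eos_point(rho, X=0.0, n=2.0/3.0):
    """Energy per nucleon, pressure, energy density and v_s/c; Eqs.(24),(26)-(28)."""
    beta, C = density_dependence(n=n)
    kFsq = (1.5*math.pi**2*rho)**(2.0/3.0)
    F = 0.5*((1 + X)**(5.0/3.0) + (1 - X)**(5.0/3.0))
    eps = -16.0                                   # starting guess for the iteration
    for _ in range(50):                           # self-consistency in (1 - alpha*eps)
        Jv = J_v00(eps) + X**2*J_v01(eps)
        eps = 3*HBARC**2*kFsq/(10*MC2)*F + 0.5*rho*Jv*C*(1 - beta*rho**n)        # Eq.(24)
    P = rho*HBARC**2*kFsq/(5*MC2)*F + 0.5*rho**2*Jv*C*(1 - (n + 1)*beta*rho**n)  # Eq.(26)
    edens = rho*(eps + MC2)                                                      # Eq.(27)
    deps = P/rho**2                               # d(eps)/d(rho), read off from Eq.(26)
    num = 2*rho*deps - HBARC**2*kFsq/(15*MC2)*F - 0.5*Jv*C*n*(n + 1)*beta*rho**(n + 1)
    vs2 = num/(eps + MC2 + rho*deps)              # Eq.(28); negative means imaginary v_s
    vs = math.sqrt(vs2) if vs2 >= 0 else float("nan")
    return eps, P, edens, vs

# SNM at rho = 0.15 fm^-3: approximately (-15.3 MeV, -0.10 MeV fm^-3, 138.5 MeV fm^-3, 0.18)
print(eos_point(0.15, X=0.0))
# PNM corresponds to X = 1, e.g. eos_point(0.15, X=1.0)
```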
The form of \\(C(1-\\beta\\rho^{n})\\) with \\(n=2/3\\) for the density dependence which is identical to that used for explaining the elastic and inelastic scattering [10] of protons and the proton [11], \\(\\alpha\\)[13],[14], cluster radioactivity phenomena [13] also agrees well with recent theoretical [20] and experimental [21] results for the nuclear incompressibility. In Table-2 incompressility of isospin asymmetric nuclear matter as a function of the isospin asymmetry parameter \\(X\\), using the usual value of n=2/3 and energy dependence parameter \\(\\alpha=0.005MeV^{-1}\\), is provided. In Tables-3-5 the theoretical estimates of the pressure \\(P\\) and velocity of sound \\(v_{s}\\) of SNM are listed as functions of nucleonic density \\(\\rho\\) and energy density \\(\\varepsilon\\) using the usual value of 0.005 \\(MeV^{-1}\\) for the parameter \\(\\alpha\\) of energy dependence, given in Eqs.(4,5), of the zero range potential and also the standard value of the parameter \\(n=2/3\\). As for any other non-relativistic EOS, present EOS also suffers from superluminosity at very high densities. According to present calculations the velocity of sound becomes imaginary for \\(\\rho\\leq 0.1fm^{-3}\\) and exceeds the velocity of light c at \\(\\rho\\geq 5.3\\rho_{0}\\) and the EOS obtained using \\(v_{14}+TNI\\)[22] also resulted in sound velocity becoming imaginary at same nuclear density and superluminous at about the same nuclear density. But in contrast, the incompressibility \\(K_{0}\\) of infinite SNM for the \\(v_{14}+TNI\\) was chosen to be 240 MeV while that by the present theoretical estimate is about 290 MeV which is in excellent agreement with the experimental value of \\(K_{0}=300\\pm 25\\) MeV obtained from the giant monopole resonance (GMR) [23] and with the the recent experimental determination of \\(K_{0}\\) based upon the production of hard photons in heavy ion collisions which led to the experimental estimate of \\(K_{0}=290\\pm 50\\) MeV [21]. In Tables-6-8 the theoretical estimates of the pressure \\(P\\) and velocity of sound \\(v_{s}\\) in case of PNM are listed as functions of nucleonic density \\(\\rho\\) and \\begin{table} \\begin{tabular}{|c|c|c|c|} \\hline \\(N\\) & \\(\\rho_{s}\\) & \\(K_{0}\\) \\\\ \\hline & \\(fm^{-3}\\) & \\(MeV\\) \\\\ \\hline 0.0 & 0.1533 & 293.4 \\\\ \\hline 0.1 & 0.1526 & 288.8 \\\\ \\hline 0.2 & 0.1503 & 275.3 \\\\ \\hline 0.3 & 0.1464 & 252.9 \\\\ \\hline 0.4 & 0.1403 & 221.7 \\\\ \\hline 0.5 & 0.1315 & 182.0 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Incompressibility of isospin asymmetric nuclear matter using the usual value of n=2/3 and energy dependence parameter \\(\\alpha=0.005MeV^{-1}\\). \\begin{table} \\begin{tabular}{|c|c|c|c|} \\hline \\(N\\) & \\(\\rho_{s}\\) & \\(K_{0}\\) \\\\ \\hline & \\(fm^{-3}\\) & \\(MeV\\) \\\\ \\hline 0.0 & 0.1533 & 293.4 \\\\ \\hline 0.1 & 0.1526 & 288.8 \\\\ \\hline 0.2 & 0.1503 & 275.3 \\\\ \\hline 0.3 & 0.1464 & 252.9 \\\\ \\hline 0.4 & 0.1403 & 221.7 \\\\ \\hline 0.5 & 0.1315 & 182.0 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Incompressibility of SNM for different values of n using the usual value of energy dependence parameter \\(\\alpha=0.005MeV^{-1}\\) and using values of saturation density \\(\\rho_{0}=0.1533fm^{-3}\\) and saturation energy per nucleon \\(\\epsilon_{0}=-15.26MeV\\). 
energy density \\(\\varepsilon\\) using the usual value of 0.005 \\(MeV^{-1}\\) for the parameter of energy dependence, given in Eqs.(4,5), of the zero range potential and also the standard value of the parameter \\(n=2/3\\). In Fig.-1 the energy per nucleon \\(\\epsilon\\) of SNM and PNM are plotted as a function of \\(\\rho\\). The continuous lines represent the curves for the present calculations using saturation energy per nucleon of -15.26 MeV whereas the dotted lines represent the same using \\(v_{14}+TNI\\) interaction [22] and the dash-dotted lines represent the same for the A18 model using variational chain summation (VCS) [24] for the SNM and PNM. The minimum of the energy per nucleon equaling the saturation energy of -15.26 MeV for the present calculations occurs precisely at the saturation density \\(\\rho_{0}=0.1533fm^{-3}\\) while that for the A18(VCS) model occurs around \\(\\rho=0.28fm^{-3}\\) with a saturation energy of about -17.3 MeV. Fig.-2 presents the plots of the energy per nucleon \\(\\epsilon\\) of nuclear matter with different isospin asymmetry X as a function of \\(\\rho\\) for the present calculations. The pressure \\(P\\) of SNM and PNM are plotted in Fig.-3 as a function of \\(\\rho\\). The continuous lines represent the present calculations whereas the dotted lines represent the same using \\(v_{14}+TNI\\) interaction [22]. In Fig.-4 the velocity of sound \\(v_{s}\\) in SNM and PNM and the energy density \\(\\varepsilon\\) of SNM and PNM for the present calculations are plotted as functions of nucleonic density \\(\\rho\\). The continuous lines represent the velocity of sound in units of \\(10^{-2}c\\) whereas the dotted lines represent energy density in \\(MeVfm^{-3}\\). The theoretical estimate \\(K_{0}\\) of the incompressibility of infinite SNM obtained from present approach using DDM3Y is about \\(290MeV\\). The theoretical estimate of \\(K_{0}\\) from the refractive \\(\\alpha\\)-nucleus scattering is about 240 MeV-270 MeV [25] and that by infinite nuclear matter model (INM) [20] claims a well defined and stable value of \\(K_{0}=288\\pm 20\\) MeV and present theoretical estimate is in reasonably close agreement with the value obtained by INM which rules out any values lower than 200 MeV. Present estimate for the incompressibility \\(K_{0}\\) of the infinite SNM is in good agreement with the experimental value of \\(K_{0}=300\\pm 25\\) MeV obtained from the giant monopole resonance (GMR) [23] and with the the recent experimental determination of \\(K_{0}\\) based upon the production of hard photons in heavy ion collisions which led to the experimental estimate of \\(K_{0}=290\\pm 50\\) MeV [21]. However, the experimental values of \\(K_{0}\\) extracted from the isoscalar giant dipole resonance (ISGDR) are claimed to be smaller [26]. The general theoretical observation by Colo' et al. is that the non-relativistic [27] and the relativistic [28] mean field models predict for the \\(K_{0}\\) values which are significantly different from one another, namely \\(\\approx\\) 220-235 MeV and \\(\\approx\\) 250-270 MeV respectively. Considering the uncertainties in the extractions of \\(\\epsilon_{0}\\)[18] and values from the experimental masses and electron scattering, present non-relativistic mean field model estimate for the nuclear incompressibility \\(K_{0}\\) for SNM using DDM3Y interaction is rather close to the earlier theoretical prediction [22]. 
and estimates obtained using relativistic mean field models. The density dependence parameter \(\beta=1.668fm^{2}\), which has the dimension of a cross section, can be interpreted as the isospin averaged effective nucleon-nucleon interaction cross section in the ground state symmetric nuclear medium. For a nucleon in ground state nuclear matter \(k_{F}\approx 1.3\ fm^{-1}\) and \(q_{0}\sim\hbar k_{F}c\approx 260\) MeV, and the present result for the 'in medium' effective cross section is reasonably close to the value obtained from rigorous Dirac-Brueckner-Hartree-Fock (HF) model calculations [29] corresponding to such \(k_{F}\) and \(q_{0}\) values, which is \(\approx 12\) mb. Using the value of the density dependence parameter \(\beta=1.668fm^{2}\) corresponding to the standard value of the parameter \(n=2/3\) along with the nucleonic density of \(0.1533fm^{-3}\), the value obtained for the nuclear mean free path \(\lambda\) is about \(4fm\), which is in excellent agreement [30] with other theoretical estimates.

Figure 1: The energy per nucleon \(\epsilon\) = E/A of SNM (spin and isospin symmetric nuclear matter) and PNM (pure neutron matter) as a function of \(\rho\). The continuous lines represent curves for the present calculations using a saturation energy per nucleon of -15.26 MeV whereas the dotted lines represent the same using the \(v_{14}+TNI\) interaction [22] and the dash-dotted lines represent the same for the A18 model using variational chain summation (VCS) [24].

Figure 2: The energy per nucleon \(\epsilon=\) E/A of nuclear matter with different isospin asymmetry X as a function of \(\rho\) for the present calculations.

Figure 3: The pressure \(P\) of SNM (spin and isospin symmetric nuclear matter) and PNM (pure neutron matter) as a function of \(\rho\). Continuous lines represent the present calculations whereas dotted lines represent the same using the \(v_{14}+TNI\) interaction [22].

Figure 4: The velocity of sound \(v_{s}\) in SNM (spin and isospin symmetric nuclear matter) and PNM (pure neutron matter) and the energy density \(\varepsilon\) of SNM and PNM as functions of nucleonic density \(\rho\) for the present calculations. The continuous lines represent the velocity of sound in units of \(10^{-2}c\) whereas the dotted lines represent the energy density in \(MeVfm^{-3}\).

## 5 Summary and conclusions In summary, we conclude that the present EOS is obtained using the isoscalar and the Lane, that is the isovector, components of the M3Y effective NN interaction. This interaction was derived by fitting its matrix elements in an oscillator basis to those elements of the G-matrix obtained with the Reid-Elliott soft-core NN interaction and has a profound theoretical standing. The value obtained for the nuclear mean free path is in excellent agreement [30] with other theoretical estimates. The present theoretical estimate of the nuclear incompressibility for SNM is in reasonably close agreement with other theoretical estimates obtained by the INM model [20], using the Seyler-Blanchard interaction [17] or the relativistic Brueckner-Hartree-Fock (RBHF) theory [31]. This value is also in good agreement with the experimental estimates from GMR [23] as well as the determination based upon the production of hard photons in heavy ion collisions [21]. The EOS for SNM and PNM are similar to those obtained by B. Friedman and V.R. Pandharipande using the \(v_{14}+TNI\) interaction [22] and the RBHF theory. 
The EOS for the isospin asymmetric nuclear matter can be applied to study the cold compact stellar objects such as neutron stars. ## References * [1] J.P. Blaizot, Phys. Rep. 65, 171 (1980). * [2] C. Samanta, D. Bandyopadhyay and J.N. De, Phys. Lett. B 217, 381 (1989). * [3] G.F. Bertsch and S. Das Gupta, Phys. Rep. 160, 189 (1988). * [4] A.M. Lane, Nucl. Phys. 35, 676 (1962). * [5] G.R. Satchler, Int. series of monographs on Physics, Oxford University Press, Direct Nuclear reactions, 470 (1983). * [6] G. Bertsch, J. Borysowicz, H. McManus, W.G. Love, Nucl. Phys. A 284, 399 (1977). * [7] G.R. Satchler and W.G. Love, Phys. Rep. 55, 183 (1979). * [8] A.M. Kobos, B.A. Brown, R. Lindsay and G.R. Satchler, Nucl. Phys. A 425, 205 (1984). * [9] H.J. Gils, Nucl. Phys. A 473, 111 (1987). * [10] D. Gupta and D.N. Basu, Nucl. Phys. A 748, 402 (2005). * [11] D.N. Basu, P. Roy Chowdhury and C. Samanta, Phys. Rev. C 72, 051601(R) (2005). * [12] D.K. Srivastava, D.N. Basu and N.K. Ganguly, Phys. Lett. 124B, 6 (1983). * [13] D.N. Basu, Phys. Lett. B 566, 90 (2003). * [14] P. Roy Chowdhury, C. Samanta and D. N. Basu, Phys. Rev. C 73, 014612 (2006). * [15] D.N. Basu, Int. Jour. Mod. Phys. E 14, 739 (2005). * [16] W.D. Myers, Nucl. Phys. A 204, 465 (1973). * [17] D. Bandyopadhyay, C. Samanta, S.K. Samaddar and J.N. De, Nucl. Phys. A 511, 1 (1990). * [18] P. Roy Chowdhury, C. Samanta and D. N. Basu, Mod. Phys. Letts. A 21, 1605 (2005). * [19] G. Audi, A.H. Wapstra and C. Thibault, Nucl. Phys. A 729, 337 (2003). * [20] L. Satpathy, V.S. Uma Maheswari and R.C. Nayak, Phys. Rep. 319, 85 (1999). * [21] Y. Schutz et al., Nucl. Phys. A 599, 97c (1996). * [22] B. Friedman and V.R. Pandharipande, Nucl. Phys. A 361, 502 (1981). * [23] M.M. Sharma, W.T.A. Borghols, S. Brandenburg, S. Crona, A. van der Woude and M.N. Harakeh, Phys. Rev. C 38, 2562 (1988). * [24] A. Akmal, V.R. Pandharipande and D.G. Ravenhall, Phys. Rev. C 58, 1804 (1998). * [25] Dao T. Khoa, G.R. Satchler and W. von Oertzen, Phys. Rev. C 56, 954 (1997). * [26] U. Garg, Nucl. Phys. A 731, 3 (2004). * [27] G. Colo', N. Van Giai, J. Meyer, K. Bennaceur and P. Bonche, Phys. Rev. C 70, 024307 (2004). * [28] G. Colo' and N. Van Giai, Nucl. Phys. A 731, 15 (2004). * [29] F. Sammarruca and P. Krastev, Phys. Rev. C 73, 014001 (2006). * [30] B. Sinha, Phys. Rev. Lett. 50, 91 (1983). * [31] R. Brockmann and R. Machleidt, Phys. Rev. C 42, 1965 (1990). 
\\begin{table} \\begin{tabular}{|c|c|c|c|c|c|} \\hline \\(\\rho\\) & \\(\\rho/\\rho_{0}\\) & \\(\\epsilon\\) & P & \\(\\varepsilon\\) & \\(v_{s}\\) \\\\ \\hline \\(fm^{-3}\\) & & \\(MeV\\) & \\(MeVfm^{-3}\\) & \\(MeVfm^{-3}\\) & in units of c \\\\ \\hline.01 &.6523E-01 & -.7537E+00 & -.1677E-01 &.9382E+01 &.0000E+00 \\\\.02 &.1305E+00 & -.2526E+01 & -.7232E-01 &.1873E+02 &.0000E+00 \\\\.03 &.1957E+00 & -.4312E+01 & -.1574E+00 &.2804E+02 &.0000E+00 \\\\.04 &.2609E+00 & -.6007E+01 & -.2617E+00 &.3732E+02 &.0000E+00 \\\\.05 &.3262E+00 & -.7576E+01 & -.3752E+00 &.4657E+02 &.0000E+00 \\\\.06 &.3914E+00 & -.9006E+01 & -.4885E+00 &.5579E+02 &.0000E+00 \\\\.07 &.4566E+00 & -.1029E+02 & -.5926E+00 &.6500E+02 &.0000E+00 \\\\.08 &.5219E+00 & -.1142E+02 & -.6786E+00 &.7420E+02 &.0000E+00 \\\\.09 &.5871E+00 & -.1241E+02 & -.7382E+00 &.8339E+02 &.0000E+00 \\\\.10 &.6523E+00 & -.1325E+02 & -.7633E+00 &.9257E+02 &.0000E+00 \\\\.11 &.7175E+00 & -.1394E+02 & -.7460E+00 &.1017E+03 &.6683E-01 \\\\.12 &.7828E+00 & -.1448E+02 & -.6787E+00 &.1109E+03 &.1016E+00 \\\\.13 &.8480E+00 & -.1488E+02 & -.5540E+00 &.1201E+03 &.1302E+00 \\\\.14 &.9132E+00 & -.1514E+02 & -.3645E+00 &.1293E+03 &.1560E+00 \\\\.15 &.9785E+00 & -.1525E+02 & -.1032E+00 &.1385E+03 &.1802E+00 \\\\.16 &.1044E+01 & -.1523E+02 &.2369E+00 &.1478E+03 &.2031E+00 \\\\.17 &.1109E+01 & -.1507E+02 &.6627E+00 &.1571E+03 &.2253E+00 \\\\.18 &.1174E+01 & -.1477E+02 &.1181E+01 &.1663E+03 &.2467E+00 \\\\.19 &.1239E+01 & -.1434E+02 &.1798E+01 &.1757E+03 &.2675E+00 \\\\.20 &.1305E+01 & -.1378E+02 &.2520E+01 &.1850E+03 &.2879E+00 \\\\.21 &.1370E+01 & -.1308E+02 &.3354E+01 &.1944E+03 &.3077E+00 \\\\.22 &.1435E+01 & -.1226E+02 &.4306E+01 &.2039E+03 &.3272E+00 \\\\.23 &.1500E+01 & -.1130E+02 &.5382E+01 &.2134E+03 &.3463E+00 \\\\.24 &.1566E+01 & -.1022E+02 &.6588E+01 &.2229E+03 &.3649E+00 \\\\.25 &.1631E+01 & -.9014E+01 &.7931E+01 &.2325E+03 &.3833E+00 \\\\.26 &.1696E+01 & -.7683E+01 &.9416E+01 &.2421E+03 &.4013E+00 \\\\.27 &.1761E+01 & -.6229E+01 &.1105E+02 &.2518E+03 &.4189E+00 \\\\.28 &.1826E+01 & -.4652E+01 &.1284E+02 &.2616E+03 &.4363E+00 \\\\.29 &.1892E+01 & -.2955E+01 &.1478E+02 &.2714E+03 &.4533E+00 \\\\.30 &.1957E+01 & -.1138E+01 &.1689E+02 &.2813E+03 &.4700E+00 \\\\.31 &.2022E+01 &.7985E+00 &.1917E+02 &.2913E+03 &.4864E+00 \\\\.32 &.2087E+01 &.2852E+01 &.2163E+02 &.3014E+03 &.5024E+00 \\\\.33 &.2153E+01 &.5023E+01 &.2427E+02 &.3115E+03 &.5182E+00 \\\\.34 &.2218E+01 &.7310E+01 &.2710E+02 &.3217E+03 &.5337E+00 \\\\.35 &.2283E+01 &.9711E+01 &.3012E+02 &.3320E+03 &.5489E+00 \\\\.36 &.2348E+01 &.1223E+02 &.3334E+02 &.3424E+03 &.5638E+00 \\\\.37 &.2414E+01 &.1486E+02 &.3676E+02 &.3529E+03 &.5785E+00 \\\\.38 &.2479E+01 &.1760E+02 &.4039E+02 &.3635E+03 &.5928E+00 \\\\.39 &.2544E+01 &.2045E+02 &.4423E+02 &.3742E+03 &.6069E+00 \\\\.40 &.2609E+01 &.2341E+02 &.4829E+02 &.3849E+03 &.6207E+00 \\\\ \\hline \\end{tabular} \\end{table} Table 3: \\(\\frac{\\rm Energy}{\\rm nucleon}\\)\\(\\epsilon\\), pressure \\(P\\), energy density \\(\\varepsilon\\) and velocity of sound \\(v_{s}\\) as functions of nucleonic density \\(\\rho\\) for SNM. 
\\begin{table} \\begin{tabular}{|c|c|c|c|c|c|} \\hline \\(\\rho\\) & \\(\\rho/\\rho_{0}\\) & \\(\\epsilon\\) & P & \\(\\varepsilon\\) & \\(v_{s}\\) \\\\ \\hline \\(fm^{-3}\\) & & \\(MeV\\) & \\(MeVfm^{-3}\\) & \\(MeVfm^{-3}\\) & in units of c \\\\ \\hline.41 &.2674E+01 &.2648E+02 &.5257E+02 &.3958E+03 &.6342E+00 \\\\.42 &.2740E+01 &.2967E+02 &.5709E+02 &.4068E+03 &.6474E+00 \\\\.43 &.2805E+01 &.3296E+02 &.6184E+02 &.4179E+03 &.6604E+00 \\\\.44 &.2870E+01 &.3636E+02 &.6682E+02 &.4291E+03 &.6731E+00 \\\\.45 &.2935E+01 &.3986E+02 &.7205E+02 &.4405E+03 &.6856E+00 \\\\.46 &.3001E+01 &.4347E+02 &.7753E+02 &.4519E+03 &.6978E+00 \\\\.47 &.3066E+01 &.4719E+02 &.8326E+02 &.4635E+03 &.7098E+00 \\\\.48 &.3131E+01 &.5101E+02 &.8925E+02 &.4752E+03 &.7215E+00 \\\\.49 &.3196E+01 &.5493E+02 &.9550E+02 &.4870E+03 &.7330E+00 \\\\.50 &.3262E+01 &.5896E+02 &.1020E+03 &.4989E+03 &.7442E+00 \\\\.51 &.3327E+01 &.6310E+02 &.1088E+03 &.5110E+03 &.7552E+00 \\\\.52 &.3392E+01 &.6733E+02 &.1159E+03 &.5233E+03 &.7660E+00 \\\\.53 &.3457E+01 &.7167E+02 &.1232E+03 &.5356E+03 &.7765E+00 \\\\.54 &.3523E+01 &.7611E+02 &.1309E+03 &.5481E+03 &.7868E+00 \\\\.55 &.3588E+01 &.8065E+02 &.1388E+03 &.5608E+03 &.7969E+00 \\\\.56 &.3653E+01 &.8528E+02 &.1470E+03 &.5736E+03 &.8068E+00 \\\\.57 &.3718E+01 &.9002E+02 &.1556E+03 &.5865E+03 &.8165E+00 \\\\.58 &.3783E+01 &.9486E+02 &.1644E+03 &.5996E+03 &.8259E+00 \\\\.59 &.3849E+01 &.9980E+02 &.1735E+03 &.6128E+03 &.8352E+00 \\\\.60 &.3914E+01 &.1048E+03 &.1830E+03 &.6262E+03 &.8443E+00 \\\\.61 &.3979E+01 &.1100E+03 &.1928E+03 &.6398E+03 &.8531E+00 \\\\.62 &.4044E+01 &.1152E+03 &.2029E+03 &.6535E+03 &.8618E+00 \\\\.63 &.4110E+01 &.1205E+03 &.2133E+03 &.6674E+03 &.8703E+00 \\\\.64 &.4175E+01 &.1259E+03 &.2240E+03 &.6815E+03 &.8786E+00 \\\\.65 &.4240E+01 &.1315E+03 &.2351E+03 &.6957E+03 &.8867E+00 \\\\.66 &.4305E+01 &.1371E+03 &.2466E+03 &.7102E+03 &.8947E+00 \\\\.67 &.4371E+01 &.1428E+03 &.2583E+03 &.7247E+03 &.9025E+00 \\\\.68 &.436E+01 &.1486E+03 &.2705E+03 &.7395E+03 &.9101E+00 \\\\.69 &.4501E+01 &.1545E+03 &.2830E+03 &.7544E+03 &.9175E+00 \\\\.70 &.4566E+01 &.1605E+03 &.2958E+03 &.7696E+03 &.9248E+00 \\\\.71 &.4631E+01 &.1665E+03 &.3090E+03 &.7849E+03 &.9319E+00 \\\\.72 &.4697E+01 &.1727E+03 &.3225E+03 &.8004E+03 &.9389E+00 \\\\.73 &.4762E+01 &.1790E+03 &.3365E+03 &.8161E+03 &.9457E+00 \\\\.74 &.4827E+01 &.1854E+03 &.3508E+03 &.8320E+03 &.9524E+00 \\\\.75 &.4892E+01 &.1918E+03 &.3655E+03 &.8480E+03 &.9589E+00 \\\\.76 &.4958E+01 &.1983E+03 &.3805E+03 &.8643E+03 &.9653E+00 \\\\.77 &.5023E+01 &.2050E+03 &.3960E+03 &.8808E+03 &.9715E+00 \\\\.78 &.5088E+01 &.2117E+03 &.4118E+03 &.8975E+03 &.9776E+00 \\\\.79 &.5153E+01 &.2185E+03 &.4281E+03 &.9144E+03 &.9836E+00 \\\\.80 &.5219E+01 &.2254E+03 &.4447E+03 &.9315E+03 &.895E+00 \\\\ \\hline \\end{tabular} \\end{table} Table 4: \\(\\frac{\\rm Energy}{\\rm nucleon}\\)\\(\\epsilon\\), pressure \\(P\\), energy density \\(\\varepsilon\\) and velocity of sound \\(v_{s}\\) as functions of nucleonic density \\(\\rho\\) for SNM. 
\\begin{table} \\begin{tabular}{|c|c|c|c|c|c|} \\hline \\(\\rho\\) & \\(\\rho/\\rho_{0}\\) & \\(\\epsilon\\) & P & \\(\\varepsilon\\) & \\(v_{s}\\) \\\\ \\hline \\(fm^{-3}\\) & & \\(MeV\\) & \\(MeVfm^{-3}\\) & \\(MeVfm^{-3}\\) & in units of c \\\\ \\hline.81 &.5284E+01 &.2324E+03 &.4618E+03 &.9488E+03 &.9952E+00 \\\\.82 &.5349E+01 &.2395E+03 &.4792E+03 &.9663E+03 &.1001E+01 \\\\.83 &.5414E+01 &.2467E+03 &.4971E+03 &.9840E+03 &.1006E+01 \\\\.84 &.5479E+01 &.2539E+03 &.5154E+03 &.1002E+04 &.1012E+01 \\\\.85 &.5545E+01 &.2613E+03 &.5341E+03 &.1020E+04 &.1017E+01 \\\\.86 &.5610E+01 &.2687E+03 &.5532E+03 &.1039E+04 &.1022E+01 \\\\.87 &.5675E+01 &.2762E+03 &.5727E+03 &.1057E+04 &.1027E+01 \\\\.88 &.5740E+01 &.2838E+03 &.5927E+03 &.1076E+04 &.1032E+01 \\\\.89 &.5806E+01 &.2915E+03 &.6131E+03 &.1095E+04 &.1037E+01 \\\\.90 &.5871E+01 &.2993E+03 &.6340E+03 &.1114E+04 &.1042E+01 \\\\.91 &.5936E+01 &.3072E+03 &.6553E+03 &.1134E+04 &.1046E+01 \\\\.92 &.6001E+01 &.3152E+03 &.6770E+03 &.1154E+04 &.1051E+01 \\\\.93 &.6067E+01 &.3232E+03 &.6992E+03 &.1174E+04 &.1055E+01 \\\\.94 &.6132E+01 &.3313E+03 &.7219E+03 &.1194E+04 &.1059E+01 \\\\.95 &.6197E+01 &.3395E+03 &.7450E+03 &.1215E+04 &.1064E+01 \\\\.96 &.6262E+01 &.3478E+03 &.7685E+03 &.1235E+04 &.1068E+01 \\\\.97 &.6327E+01 &.3562E+03 &.7926E+03 &.1256E+04 &.1072E+01 \\\\.98 &.6393E+01 &.3647E+03 &.8171E+03 &.1278E+04 &.1076E+01 \\\\.99 &.6458E+01 &.3732E+03 &.8420E+03 &.1299E+04 &.1080E+01 \\\\ 1.00 &.6523E+01 &.3819E+03 &.8675E+03 &.1321E+04 &.1084E+01 \\\\ \\hline \\end{tabular} \\end{table} Table 5: \\(\\frac{\\mathrm{Energy}}{\\mathrm{nucleon}}\\)\\(\\epsilon\\), pressure \\(P\\), energy density \\(\\varepsilon\\) and velocity of sound \\(v_{s}\\) as functions of nucleonic density \\(\\rho\\) for SNM. \\begin{table} \\begin{tabular}{|c|c|c|c|c|c|} \\hline \\(\\rho\\) & \\(\\rho/\\rho_{0}\\) & \\(\\epsilon\\) & P & \\(\\varepsilon\\) & \\(v_{s}\\) \\\\ \\hline \\(fm^{-3}\\) & & \\(MeV\\) & \\(MeVfm^{-3}\\) & \\(MeVfm^{-3}\\) & in units of c \\\\ \\hline.01 &.6523E-01 &.3509E+01 &.1780E-01 &.9424E+01 &.5166E-01 \\\\.02 &.1305E+00 &.4937E+01 &.4742E-01 &.1888E+02 &.5986E-01 \\\\.03 &.1957E+00 &.5992E+01 &.8594E-01 &.2835E+02 &.6774E-01 \\\\.04 &.2609E+00 &.6886E+01 &.1353E+00 &.3783E+02 &.7657E-01 \\\\.05 &.3262E+00 &.7702E+01 &.1983E+00 &.4733E+02 &.8637E-01 \\\\.06 &.3914E+00 &.8483E+01 &.2782E+00 &.5684E+02 &.9693E-01 \\\\.07 &.4566E+00 &.9254E+01 &.3783E+00 &.6637E+02 &.1081E+00 \\\\.08 &.5219E+00 &.1003E+02 &.5020E+00 &.7592E+02 &.1196E+00 \\\\.09 &.5871E+00 &.1083E+02 &.6526E+00 &.8548E+02 &.1314E+00 \\\\.10 &.6523E+00 &.1164E+02 &.8334E+00 &.9506E+02 &.1433E+00 \\\\.11 &.7175E+00 &.1249E+02 &.1048E+01 &.1047E+03 &.1554E+00 \\\\.12 &.7828E+00 &.1338E+02 &.1299E+01 &.1143E+03 &.1676E+00 \\\\.13 &.8480E+00 &.1430E+02 &.1590E+01 &.1239E+03 &.1797E+00 \\\\.14 &.9132E+00 &.1526E+02 &.1924E+01 &.1336E+03 &.1919E+00 \\\\.15 &.9785E+00 &.1626E+02 &.2304E+01 &.1433E+03 &.2041E+00 \\\\.16 &.1044E+01 &.1731E+02 &.2733E+01 &.1530E+03 &.2162E+00 \\\\.17 &.1109E+01 &.1840E+02 &.3214E+01 &.1627E+03 &.2282E+00 \\\\.18 &.1174E+01 &.1953E+02 &.3751E+01 &.1725E+03 &.2402E+00 \\\\.19 &.1239E+01 &.2071E+02 &.4345E+01 &.1823E+03 &.2521E+00 \\\\.20 &.1305E+01 &.2194E+02 &.5001E+01 &.1922E+03 &.2640E+00 \\\\.21 &.1370E+01 &.2321E+02 &.5720E+01 &.2020E+03 &.2757E+00 \\\\.22 &.1435E+01 &.2453E+02 &.6506E+01 &.2120E+03 &.2874E+00 \\\\.23 &.1500E+01 &.2590E+02 &.7361E+01 &.2219E+03 &.2989E+00 \\\\.24 &.1566E+01 &.2732E+02 &.8288E+01 &.2319E+03 &.3104E+00 \\\\.25 &.1631E+01 
&.2878E+02 &.9290E+01 &.2419E+03 &.3218E+00 \\\\.26 &.1696E+01 &.3029E+02 &.1037E+02 &.2520E+03 &.3330E+00 \\\\.27 &.1761E+01 &.3185E+02 &.1153E+02 &.2621E+03 &.3442E+00 \\\\.28 &.1826E+01 &.3345E+02 &.1277E+02 &.2723E+03 &.3553E+00 \\\\.29 &.1892E+01 &.3511E+02 &.1410E+02 &.2825E+03 &.3662E+00 \\\\.30 &.1957E+01 &.3681E+02 &.1552E+02 &.2927E+03 &.3771E+00 \\\\.31 &.2022E+01 &.3855E+02 &.1702E+02 &.3030E+03 &.3878E+00 \\\\.32 &.2087E+01 &.4035E+02 &.1862E+02 &.3134E+03 &.3984E+00 \\\\.33 &.2153E+01 &.4219E+02 &.2032E+02 &.3238E+03 &.4090E+00 \\\\.34 &.2218E+01 &.4408E+02 &.2211E+02 &.3342E+03 &.4194E+00 \\\\.35 &.2283E+01 &.4602E+02 &.2401E+02 &.3447E+03 &.4297E+00 \\\\.36 &.2348E+01 &.4800E+02 &.2600E+02 &.3553E+03 &.4399E+00 \\\\.37 &.2414E+01 &.5003E+02 &.2810E+02 &.3659E+03 &.4499E+00 \\\\.38 &.2479E+01 &.5211E+02 &.3031E+02 &.3766E+03 &.4599E+00 \\\\.39 &.2544E+01 &.5423E+02 &.3264E+02 &.3873E+03 &.4698E+00 \\\\.40 &.2609E+01 &.5640E+02 &.3507E+02 &.3981E+03 &.4795E+00 \\\\ \\hline \\end{tabular} \\end{table} Table 6: \\(\\frac{\\mathrm{Energy}}{\\mathrm{nucleon}}\\)\\(\\epsilon\\), pressure \\(P\\), energy density \\(\\varepsilon\\) and velocity of sound \\(v_{s}\\) as functions of nucleonic density \\(\\rho\\) for PNM. \\begin{table} \\begin{tabular}{|c|c|c|c|c|c|} \\hline \\(\\rho\\) & \\(\\rho/\\rho_{0}\\) & \\(\\epsilon\\) & P & \\(\\varepsilon\\) & \\(v_{s}\\) \\\\ \\hline \\(fm^{-3}\\) & & \\(MeV\\) & \\(MeVfm^{-3}\\) & \\(MeVfm^{-3}\\) & in units of c \\\\ \\hline.81 &.5284E+01 &.1831E+03 &.2591E+03 &.9089E+03 &.7909E+00 \\\\.82 &.5349E+01 &.1871E+03 &.2682E+03 &.9233E+03 &.7966E+00 \\\\.83 &.5414E+01 &.1911E+03 &.2775E+03 &.9379E+03 &.8023E+00 \\\\.84 &.5479E+01 &.1952E+03 &.2871E+03 &.9526E+03 &.8078E+00 \\\\.85 &.5545E+01 &.1992E+03 &.2968E+03 &.9674E+03 &.8133E+00 \\\\.86 &.5610E+01 &.2034E+03 &.3067E+03 &.9824E+03 &.8187E+00 \\\\.87 &.5675E+01 &.2075E+03 &.3169E+03 &.9974E+03 &.8240E+00 \\\\.88 &.5740E+01 &.2117E+03 &.3272E+03 &.1013E+04 &.8293E+00 \\\\.89 &.5806E+01 &.2160E+03 &.3378E+03 &.1028E+04 &.8345E+00 \\\\.90 &.5871E+01 &.2203E+03 &.3486E+03 &.1043E+04 &.8396E+00 \\\\.91 &.5936E+01 &.2246E+03 &.3596E+03 &.1059E+04 &.8446E+00 \\\\.92 &.6001E+01 &.2290E+03 &.3709E+03 &.1074E+04 &.8496E+00 \\\\.93 &.6067E+01 &.2334E+03 &.3823E+03 &.1090E+04 &.8545E+00 \\\\.94 &.6132E+01 &.2378E+03 &.3940E+03 &.1106E+04 &.8594E+00 \\\\.95 &.6197E+01 &.2423E+03 &.4059E+03 &.1122E+04 &.8641E+00 \\\\.96 &.6262E+01 &.2468E+03 &.4180E+03 &.1138E+04 &.8689E+00 \\\\.97 &.6327E+01 &.2514E+03 &.4304E+03 &.1155E+04 &.8735E+00 \\\\.98 &.6393E+01 &.2559E+03 &.4429E+03 &.1171E+04 &.8781E+00 \\\\.99 &.6458E+01 &.2606E+03 &.4557E+03 &.1187E+04 &.8826E+00 \\\\ 1.00 &.6523E+01 &.2652E+03 &.4688E+03 &.1204E+04 &.8871E+00 \\\\ \\hline \\end{tabular} \\end{table} Table 8: \\(\\frac{\\text{Energy}}{\\text{nucleon}}\\)\\(\\epsilon\\), pressure \\(P\\), energy density \\(\\varepsilon\\) and velocity of sound \\(v_{s}\\) as functions of nucleonic density \\(\\rho\\) for PNM.
A mean field calculation for obtaining the equation of state (EOS) for symmetric nuclear matter from a density dependent M3Y interaction supplemented by a zero-range potential is described. The energy per nucleon is minimized to obtain the ground state of symmetric nuclear matter. The saturation energy per nucleon used for nuclear matter calculations is determined from the coefficient of the volume term of the Bethe-Weizsacker mass formula which is evaluated by fitting the recent experimental and estimated atomic mass excesses from the Audi-Wapstra-Thibault atomic mass table by minimizing the mean square deviation. The constants of density dependence of the effective interaction are obtained by reproducing the saturation energy per nucleon and the saturation density of spin and isospin symmetric cold infinite nuclear matter. The EOS of symmetric nuclear matter, thus obtained, provides a reasonably good estimate of the nuclear incompressibility. Once the constants of density dependence are determined, the EOS for asymmetric nuclear matter is calculated by adding to the isoscalar part the isovector component of the M3Y interaction, which does not contribute to the EOS of symmetric nuclear matter. These EOS are then used to calculate the pressure, the energy density and the velocity of sound in symmetric as well as isospin asymmetric nuclear matter. Keywords : Asymmetric nuclear matter, Mass formula, Binding Energy, Atomic mass excess, Nuclear incompressibility, Nuclear symmetry energy. PACS numbers: 21.65.+f, 23.60.+e, 23.70.+j, 25.70.Bc, 21.30.Fe, 24.10.Ht
arxiv-format/0602014v2.md
# A Simple Proof of the Fundamental Theorem about Arveson Systems Footnote †: 2000 AMS-Subject classification: 46L55, 46L53, 60G20 Michael Skeide Dipartimento S.E.G.e S., Universita degli Studi del Molise, Via de Sanctis 86100 Campobasso, Italy E-mail: [email protected] ## 1 Introduction An _algebraic Arveson system_ is a family \(E^{\otimes}=\big{(}E_{t}\big{)}_{t\in(0,\infty)}\) of infinite-dimensional separable Hilbert spaces with unitaries \(u_{s,t}\colon E_{s}\otimes E_{t}\to E_{s+t}\) that iterate associatively. Technically, an _Arveson system_ [10, Definition 1.4] is the trivial bundle \((0,\infty)\times H_{0}\) (\(H_{0}\) an infinite-dimensional separable Hilbert space) with its natural Borel structure equipped with a (jointly) measurable associative multiplication \(((s,x),(t,y))\mapsto(s+t,xy)\) such that \(x\otimes y\mapsto xy\) defines a unitary \(H_{0}\otimes H_{0}\to H_{0}\). We put \(E_{t}:=(t,H_{0})\), define unitaries \(u_{s,t}\colon E_{s}\otimes E_{t}\to E_{s+t}\) by setting \(u_{s,t}((s,x)\otimes(t,y)):=(s+t,xy)\) and observe that \(E^{\otimes}:=\big{(}E_{t}\big{)}_{t\in(0,\infty)}\) is an algebraic Arveson system. For a section \(x=\big{(}x_{t}\big{)}_{t\in(0,\infty)}\) (\(x_{t}\in E_{t}\)) we shall denote by \(x(t)\) the component of \(x_{t}\) in \(H_{0}\). A section \(x\) in an Arveson system is _measurable_ if the function \(t\mapsto x(t)\) is measurable. The only property going beyond the structure of an algebraic Arveson system and the measurable structure of \((0,\infty)\times H_{0}\) that we need is that for every two measurable sections \(x,y\) the function \((s,t)\mapsto x(s)y(t)\) is measurable. From the measurable structure of \((0,\infty)\times H_{0}\) alone it follows already that an Arveson system has a countable family \(\left\{\big{(}e^{i}_{t}\big{)}_{t\in(0,\infty)}\colon i\in\mathbb{N}\right\}\) of measurable sections such that for every \(t\) the family \(\big{\{}e^{i}_{t}\colon i\in\mathbb{N}\big{\}}\) is an orthonormal basis for \(E_{t}\) [10, Proposition 1.15]. (Simply choose an orthonormal basis \(\left\{e^{i}\colon i\in\mathbb{N}\right\}\) for \(H_{0}\) and put \(e_{t}^{i}\mathrel{\mathop{:}}=(t,e^{i})\).)1 Footnote 1: We note that these sections are also continuous for the trivial Banach bundle \((0,\infty)\times H_{0}\). This trivial observation has consequences for the generalization to product systems of Hilbert modules. With every proper \(E_{0}\)_-semigroup_ \(\vartheta=\left(\vartheta_{t}\right)_{t\in\mathbb{R}_{+}}\) (that is, a semigroup of unital endomorphisms \(\vartheta_{t}\), proper for \(t>0\)) on \(\mathcal{B}(H)\) (\(H\) an infinite-dimensional separable Hilbert space) that is _normal_ (that is, every \(\vartheta_{t}\) is normal) and _strongly continuous_ (that is, \(t\mapsto\vartheta_{t}(a)x\) is continuous for every \(a\in\mathcal{B}(H)\) and every \(x\in H\)) there is associated an Arveson system (which determines the \(E_{0}\)-semigroup up to cocycle conjugacy).2 There exist two proofs of the converse statement: Every Arveson system arises as the Arveson system associated with an \(E_{0}\)-semigroup. The first one was obtained by Arveson in a series of papers [10, 11, 12]. The second one, completely different, was obtained by Liebscher [13]. 
Both proofs are deep and difficult. It is our goal to furnish a new, comparably simple proof of this _fundamental result about Arveson systems_. Footnote 2: If the \(E_{0}\)-semigroup consists of automorphisms, then the associated \(E_{t}\) would all be one-dimensional. Arveson excludes this case in the definition. While, usually, we tend to consider also the one-dimensional case, in these notes we find it convenient to stay with Arveson's convention. There are two different ways to associate with a normal \(E_{0}\)-semigroup \(\vartheta\) on \(\mathcal{B}(H)\) an algebraic Arveson system, and if the \(E_{0}\)-semigroup is strongly continuous, then in both cases the algebraic Arveson system is, in fact, an Arveson system. Therefore, in these notes we shall assume that, _by convention_, all \(E_{0}\)-semigroups are normal, while we say explicitly if an \(E_{0}\)-semigroup is assumed to be strongly continuous. We shall abbreviate \(\mathsf{id}_{E_{t}}\) to \(\mathsf{id}_{t}\). The first construction is due to Arveson [11, Section 2]. The (algebraic) Arveson system \({E^{A}}^{\otimes}=\left(E_{t}^{A}\right)_{t\in(0,\infty)}\) Arveson constructs from \(\vartheta\) comes along with a _nondegenerate representation_ \(\eta^{\otimes}=\left(\eta_{t}\right)_{t\in(0,\infty)}\) on \(H\). That is, we have linear maps \(\eta_{t}\colon E_{t}^{A}\to\mathcal{B}(H)\) that fulfill \(\eta_{t}(x_{t})\eta_{s}(y_{s})=\eta_{t+s}(x_{t}y_{s})\) and \(\eta_{t}(x_{t})^{*}\eta_{t}(y_{t})=\langle x_{t},y_{t}\rangle\,\mathsf{id}_{H}\) and the nondegeneracy condition \(\overline{\mathsf{span}}\,\eta_{t}(E_{t}^{A})H=H\) for all \(t>0\). Arveson showed existence of an \(E_{0}\)-semigroup having a given \(E^{\otimes}\) as associated Arveson system by constructing a nondegenerate representation of \(E^{\otimes}\). Suppose we can find a family \(w=\left(w_{t}\right)_{t\in(0,\infty)}\) of unitaries \(w_{t}\colon E_{t}\otimes H\to H\) that satisfies \(w_{t}(\mathsf{id}_{t}\otimes w_{s})=w_{s+t}(u_{t,s}\otimes\mathsf{id}_{H})\) (that is, \(E_{t}\otimes(E_{s}\otimes H)=(E_{t}\otimes E_{s})\otimes H\)). Then \(\eta_{t}(x_{t})x\mathrel{\mathop{:}}=w_{t}(x_{t}\otimes x)\) defines a nondegenerate representation of \(E^{\otimes}\) and \(\vartheta_{t}(a)\mathrel{\mathop{:}}=w_{t}(\mathsf{id}_{t}\otimes a)w_{t}^{*}\) an \(E_{0}\)-semigroup that has \(E^{\otimes}\) as associated Arveson system. We call the pair \((w,H)\) (\(H\neq\{0\}\)) a _right dilation_ of \(E^{\otimes}\) on \(H\). (Putting \(E_{\infty}\mathrel{\mathop{:}}=H\), a right dilation extends the product on \(E^{\otimes}\) to \(E^{\otimes}\times\left(E_{t}\right)_{(0,\infty]}\).) It is not difficult to show that every nondegenerate representation of \(E^{\otimes}\) arises in the described way from a right dilation. Of course, if \(\eta\) is the representation of \({E^{A}}^{\otimes}\) constructed by Arveson from an \(E_{0}\)-semigroup, then \(\vartheta\) gives back that \(E_{0}\)-semigroup. The second construction is due to Bhat [1]. 
The (algebraic) Arveson system \({E^{B}}^{\otimes}=\left(E_{t}^{B}\right)_{t\in(0,\infty)}\) Bhat constructs from \(\vartheta\) comes along with a family \(\big{(}v_{t}\big{)}_{t\in(0,\infty)}\) of unitaries \(v_{t}\colon H\otimes E_{t}^{B}\to H\) that satisfies \(v_{t}(v_{s}\otimes\mathsf{id}_{t})=v_{s+t}(\mathsf{id}_{H}\otimes u_{s,t})\) (that is, \((H\otimes E_{s}^{B})\otimes E_{t}^{B}=H\otimes(E_{s}^{B}\otimes E_{t}^{B})\)) so that \(v_{t}(a\otimes\mathsf{id}_{t})v_{t}^{*}\) defines an \(E_{0}\)-semigroup (giving back \(\vartheta_{t}(a)\)). In general, if \(E^{\otimes}\) is an (algebraic) Arveson system, we call a pair \((v,H)\) (\(H\neq\{0\}\)) with a family \(v\) of unitaries \(v_{t}\colon H\otimes E_{t}\to H\) that satisfies the associativity condition a _left dilation_ of \(E^{\otimes}\) on \(H\). (Putting \(E_{\infty}:=H\), a left dilation extends the product on \(E^{\otimes}\) to \(\big{(}E_{t}\big{)}_{(0,\infty]}\times E^{\otimes}\).) Of course, the \(E_{0}\)-semigroup \(\vartheta\) defined by setting \(\vartheta_{t}(a):=v_{t}(a\otimes\mathsf{id}_{t})v_{t}^{*}\) has \(E^{\otimes}\) as its associated Bhat system. For our purposes it is indispensable to note that the Arveson system and the Bhat system of an \(E_{0}\)-semigroup are not isomorphic but canonically anti-isomorphic (that is, they are equal as bundles, but the product of one is the opposite of the product of the other). As Tsirelson [19] has noted, they need not be isomorphic. So constructing a left dilation of an Arveson system \(E^{\otimes}\) means producing an \(E_{0}\)-semigroup that has \(E^{\otimes}\) as associated Bhat system, while constructing a right dilation of an Arveson system \(E^{\otimes}\) means producing an \(E_{0}\)-semigroup that has \(E^{\otimes}\) as associated Arveson system. Here our goal is to show that an Arveson system \(E^{\otimes}\) can be obtained as the Bhat system of a strongly continuous \(E_{0}\)-semigroup, that is, we wish to construct a left dilation \((v,K)\) of \(E^{\otimes}\) that has certain continuity properties. (By switching to the opposite of \(E^{\otimes}\) this shows also that \(E^{\otimes}\) may be obtained as the Arveson system associated with a strongly continuous \(E_{0}\)-semigroup.) Anyway, for the proof that the \(E_{0}\)-semigroup we construct is strongly continuous, we will also construct a right dilation \((w,L)\) of \(E^{\otimes}\). In fact, a left dilation \((v,K)\) and a right dilation \((w,L)\) can be put together to obtain a unitary semigroup \(u=\big{(}u_{t}\big{)}_{t\in(0,\infty)}\) on \(K\otimes L\) by setting \[u_{t}\ :=\ (v_{t}\otimes\mathsf{id}_{L})(\mathsf{id}_{K}\otimes w_{t}^{*}). \tag{1.1}\] (Identifying \(K=K\otimes E_{t}\) by \(v_{t}\) and \(L=E_{t}\otimes L\) by \(w_{t}\), this is nothing but the "rebracketing" \(k\otimes(x_{t}\otimes\ell)=(k\otimes x_{t})\otimes\ell\), and illustrates that it is not always safe to use these identifications too naively; see also Skeide [20] which discusses the case of spatial product systems.) Then the automorphism semigroup \(\alpha=\big{(}\alpha_{t}\big{)}_{t\in(0,\infty)}\) defined as \(\alpha_{t}=u_{t}\bullet u_{t}^{*}\) on \(\mathcal{B}^{a}(K\otimes L)\) restricts to the \(E_{0}\)-semigroup \(\vartheta\) on \(\mathcal{B}(K)\cong\mathcal{B}(K)\otimes\mathsf{id}_{L}\). 
Showing that \\(u_{t}\\) is strongly continuous will also show that \\(\\vartheta\\) is strongly (actually \\(\\sigma\\)-strongly) continuous. (Needless to say that if we extend \\(\\alpha\\) to all of \\(\\mathbb{R}\\), then \\(\\alpha_{-t}\\) (\\(t\\in\\mathbb{R}_{+}\\)) defines an \\(E_{0}\\)-semigroup on \\(\\mathcal{B}(L)\\cong\\mathsf{id}_{K}\\otimes\\mathcal{B}(L)\\) that has \\(E^{\\otimes}\\) as associated Arveson system.) **1.1 Remark**: The relation between Arveson system and Bhat system of an \\(E_{0}\\)-semigroup, between right and left dilation of an Arveson system is an instance of a far reaching duality between a von Neumann correspondence over a von Neumann algebra \\(\\mathcal{B}\\) and its _commutant_ which is a von Neumann correspondence over the commutant \\(\\mathcal{B}^{\\prime}\\) of \\(\\mathcal{B}\\). (The commutant has been introduced in Skeide [20] and, independently, in a version for \\(W^{*}\\)-correpondences in Muhly and Solel [1].) A Hilbert space is, in particular, a correspondence over the von Neumann algebra and the commutant of \\(\\mathbb{C}\\) is \\(\\mathbb{C}^{\\prime}=\\mathbb{C}\\). In this picture, the Arveson system associated with an \\(E_{0}\\)-semigroup turns out to be the _commutant system_ of the Bhat system of that \\(E_{0}\\)-semigroup. As Hilbert spaces the members of the two product systems are isomorphic but the commutant functor switches the order in tensor products. We find confirmed that the Arveson system of an \\(E_{0}\\)-semigroup is anti-isomorphic to its Bhat system. See the survey Skeide [20] for a more detailed discussion of this duality. In Skeide [20] we will present version of these notes for Hilbert modules. (In fact, our approach here was motivated by the wish to find a method that can be generalized to product systems of Hilbert modules.) In [20] it will also come out more clearly why we insist to look rather at the Bhat system of an \\(E_{0}\\)-semigroup than its Arveson system. For Hilbert modules a left dilation of a product system still gives rise immediately to an \\(E_{0}\\)-semigroup via amplification, while the construction of an \\(E_{0}\\)-semigroup from a right dilation is considerably more subtle than simple amplification. We do not have enough space to discuss this here more explicitly. In fact, one reason why we decided to discuss the case of Hilbert space separately in these notes (and not integrated into [20]) is that we wish to underline the extreme shortness of the argument, which would be obstructed by the far more exhaustive discussion in [20]. Another reason is that in [20] we concentrate on product systems that come shipped with a (strongly) continuous product system structure from the beginning, while we only scratch problems of measurability. The point is that every Bhat system of a strongly continuous \\(E_{0}\\)-semigroup has a (strongly) continuous product system structure (see Skeide [20]) and not just a measurable one, and for (strongly) continuous product systems everything works as for Hilbert modules and without any separability assumption. The Arveson system of an \\(E_{0}\\)-semigroup, instead, comes shipped with a product system operation that is only measurable. Only after showing that every Arveson system is the Bhat system of a strongly continuous \\(E_{0}\\)-semigroup, we know that also the Arveson system of a strongly continuous \\(E_{0}\\)-semigroup may be equipped with a continuous structure so that also the product system operation is continuous. 
See also Remark 2.1. ## 2 The idea The idea how to construct a left dilation of an Arveson system as such is simple and can be explained quickly. Let \(E^{\otimes}=\big{(}E_{t}\big{)}_{t\in(0,\infty)}\) be an algebraic Arveson system. To obtain a left dilation we proceed in two steps. First, we construct a left dilation of the discrete subsystem \(\big{(}E_{n}\big{)}_{n\in\mathbb{N}}\), that is, a Hilbert space \(E\) and sufficiently associative identifications \(E\otimes E_{n}=E\). Existence of such a left (and similarly of a right) dilation is comparably trivial, because every discrete product system of Hilbert spaces has unital units. In the far more general case of Hilbert modules this is explained and exploited in Skeide [20]. For the sake of being self-contained we prove existence of left and right dilations of discrete product systems in the appendix. In order to "lift" a dilation of \(\big{(}E_{n}\big{)}_{n\in\mathbb{N}}\) to a dilation of \(\big{(}E_{t}\big{)}_{t\in(0,\infty)}\) we consider the direct integrals \(\int_{a}^{b}E_{\alpha}\,d\alpha\) (\(0\leq a<b\leq\infty\)). Clearly, under the identification of \(x_{t}\in E_{t}\) and \(x(t)\in H_{0}\), we find \(\int_{a}^{b}E_{\alpha}\,d\alpha=L^{2}((a,b],H_{0})\). We put \(K=E\otimes\int_{0}^{1}E_{\alpha}\,d\alpha=L^{2}((0,1],E\otimes H_{0})\). Choose \(t>0\) and put \(n:=\{t\}\), the unique integer such that \(t-n\in(0,1]\). Then the following identifications \[K\otimes E_{t}\ =\ E\otimes\left(\int_{0}^{1}E_{\alpha}\,d\alpha\right)\otimes E_{t}\ =\ E\otimes\int_{t}^{1+t}E_{\alpha}\,d\alpha\] \[=\ \left(E\otimes E_{n}\otimes\int_{t-n}^{1}E_{\alpha}\,d\alpha\right)\oplus\left(E\otimes E_{n+1}\otimes\int_{0}^{t-n}E_{\alpha}\,d\alpha\right)\] \[=\ \left(E\otimes\int_{t-n}^{1}E_{\alpha}\,d\alpha\right)\oplus\left(E\otimes\int_{0}^{t-n}E_{\alpha}\,d\alpha\right)\ =\ K \tag{2.1}\] define a unitary \(K\otimes E_{t}=K\). In the step from the second line to the third one we have made use of the identifications \(E\otimes E_{n}=E\) and \(E\otimes E_{n+1}=E\) coming from the dilation of \(\big{(}E_{n}\big{)}_{n\in\mathbb{N}_{0}}\). Existence of the dilation of the discrete subsystem means that \(E\) absorbs every tensor power of \(E_{1}\). Just how many factors \(E_{1}\) have to be absorbed depends on whether \(\alpha+t-n\) is bigger or smaller than \(1\). The only things that remain to be done are to show that, in a precise formulation, the identifications in (2.1) iterate associatively and that the obtained \(E_{0}\)-semigroup on \(\mathcal{B}(K)\) is strongly continuous. As explained in the introduction, for the proof of continuity we will have to construct also a right dilation. Constructing a right dilation follows simply by inverting the order of factors in all tensor products. Note that this transition from left to right dilation is much more involved for Hilbert modules; see [30]. **2.1 Remark**: We would like to say that our idea here is inspired very much by Liebscher's treatment in [16, Theorem 8]. Also there a major part of the construction exploits the properties of the Arveson system in the segment \((0,1]\) and then puts together the segments suitably to cover the whole half-line. It is all the more important to underline that the constructions _are_ definitely different. 
In Liebscher's construction the possibility to embed \(\mathcal{B}(E_{t})\) into \(\mathcal{B}(E_{s}\otimes E_{t})\) as \(\mathsf{id}_{s}\otimes\mathcal{B}(E_{t})\) plays an outstanding role. But it was our aim to produce a proof that works also for Hilbert modules, and amplification of operators that act on the _right_ factor in a tensor product of Hilbert modules is, in general, impossible. We also would like to mention another source of inspiration, namely, Riesz' proof of _Stone's theorem_ on the generators of unitary groups as discussed in [14]. Also here the decomposition of the real line (containing the spectrum of the generator) into the product of \((0,1]\) (leading to a periodic part in the unitary group) and \(\mathbb{Z}\) (taking care of the unboundedness of the generator) is crucial. ## 3 Associativity In this section we specify precisely the operations suggested by (2.1) and show that they iterate associatively. So let \(E^{\otimes}=\big{(}E_{t}\big{)}_{t\in(0,\infty)}\) be an Arveson system with the family \(u_{s,t}\) of unitaries defining the product system structure. Suppose \((E,\big{(}\breve{v}_{n}\big{)}_{n\in\mathbb{N}})\) is a dilation of the discrete subsystem \(\big{(}E_{t}\big{)}_{t\in\mathbb{N}}\) of \(\big{(}E_{t}\big{)}_{t\in(0,\infty)}\). Let \(f=\big{(}f_{\alpha}\big{)}_{\alpha\in(0,1]}\) be a section in \(\big{(}E\otimes E_{\alpha}\big{)}_{\alpha\in(0,1]}\) (that means just that \(f_{\alpha}\in E\otimes E_{\alpha}\) for every \(\alpha\)) and choose \(x_{t}\in E_{t}\). The operation suggested by (2.1) sends the section \(f\otimes x_{t}=\big{(}f_{\alpha}\otimes x_{t}\big{)}_{\alpha\in(0,1]}\) in \(\big{(}(E\otimes E_{\alpha})\otimes E_{t}\big{)}_{\alpha\in(0,1]}\) to a section \(v_{t}(f\otimes x_{t})=\big{(}(v_{t}(f\otimes x_{t}))_{\alpha}\big{)}_{\alpha\in(0,1]}\) in \(\big{(}E\otimes E_{\alpha}\big{)}_{\alpha\in(0,1]}\) in such a way that \(f_{\alpha}\otimes x_{t}\) ends up at \((v_{t}(f\otimes x_{t}))_{\alpha+t-n}\) (with \(n:=\{\alpha+t\}\) so that \(\alpha+t-n\in(0,1]\)), defined by setting \[(v_{t}(f\otimes x_{t}))_{\alpha+t-n}\ =\ (\breve{v}_{n}\otimes\mathsf{id}_{\alpha+t-n})(\mathsf{id}_{E}\otimes u_{n,\alpha+t-n}^{*})(\mathsf{id}_{E}\otimes u_{\alpha,t})(f_{\alpha}\otimes x_{t}). \tag{3.1}\] \(\alpha\mapsto\alpha+t-n\) (\(n\) depending on \(\alpha\) and \(t\)) is just the shift modulo \(\mathbb{Z}\) on \((0,1]\) and, therefore, one-to-one. **3.1 Proposition**.: _The operations \(v_{t}\) \((t\in(0,\infty))\) on sections iterate associatively, that is,_ \[v_{t}(v_{s}\otimes\mathsf{id}_{t})\ =\ v_{s+t}(\mathsf{id}_{(E\otimes E_{\alpha})_{\alpha\in(0,1]}}\otimes u_{s,t}). \tag{3.2}\] **Proof.** We must check whether the left-hand side and the right-hand side of (3.2) do the same to the point \(f_{\alpha}\otimes x_{s}\otimes y_{t}\) in the section \(f\otimes x_{s}\otimes y_{t}\) for every \(\alpha\in(0,1]\). (Of course, this also shows that both sides end up in \(E\otimes E_{\alpha+s+t-\{\alpha+s+t\}}\).) It is useful to note the following identities \[u_{r,s+t}^{*}u_{r+s,t} = (\mathsf{id}_{r}\otimes u_{s,t})(u_{r,s}^{*}\otimes\mathsf{id}_{t})\] \[(u_{r,s}\otimes\mathsf{id}_{t})(\mathsf{id}_{r}\otimes u_{s,t}^{*}) = u_{r+s,t}^{*}u_{r,s+t}\] which follow from associativity of the \(u_{s,t}\). 
Note also \\((\\mathsf{id}_{E}\\otimes u_{n,\\alpha+t-n}^{*})(\\mathsf{id}_{E}\\otimes u_{ \\alpha,t})=\\mathsf{id}_{E}\\otimes u_{n,\\alpha+t-n}^{*}u_{\\alpha,t}\\). We put \\(m:=\\{\\alpha+s\\}\\) and \\(n:=\\{\\alpha+s-m+t\\}=\\{\\alpha+s+t\\}-m\\), so that \\(m+n=\\{\\alpha+s+t\\}\\), and start with the left-hand side of (3.2). We will surpress the elementary tensor \\(f_{\\alpha}\\otimes x_{s}\\otimes y_{t}\\) to which it is applied. The left-hand side, when applied to an argument in \\((E\\otimes E_{\\alpha})\\otimes E_{s}\\otimes E_{t}\\) reads \\[(\\breve{v}_{n}\\otimes\\mathsf{id}_{\\alpha+s-m+t-n})(\\mathsf{id}_{ E}\\otimes u_{n,\\alpha+s-m+t-n}^{*})(\\mathsf{id}_{E}\\otimes u_{\\alpha+s-m,t})\\] \\[\\qquad\\qquad\\qquad\\qquad\\qquad\\times\\big{[}(\\breve{v}_{m}\\otimes \\mathsf{id}_{\\alpha+s-m})(\\mathsf{id}_{E}\\otimes u_{m,\\alpha+s-m}^{*})(\\mathsf{ id}_{E}\\otimes u_{\\alpha,s})\\otimes\\mathsf{id}_{t}\\big{]}\\] \\[= (\\breve{v}_{n}\\otimes\\mathsf{id}_{\\alpha+s+t-m-n})(\\mathsf{id}_{ E}\\otimes u_{n,\\alpha+s+t-m-n}^{*}u_{\\alpha+s-m,t})\\] \\[\\qquad\\qquad\\qquad\\qquad\\qquad\\times(\\tilde{v}_{m}\\otimes \\mathsf{id}_{\\alpha+s-m}\\otimes\\mathsf{id}_{t})(\\mathsf{id}_{E}\\otimes u_{m, \\alpha+s-m}^{*}u_{\\alpha,s}\\otimes\\mathsf{id}_{t})\\] \\[= (\\breve{v}_{n}\\otimes\\mathsf{id}_{\\alpha+s+t-m-n})(\\breve{v}_{m} \\otimes\\mathsf{id}_{n}\\otimes\\mathsf{id}_{\\alpha+s+t-m-n})\\] \\[\\qquad\\qquad\\qquad\\qquad\\times(\\mathsf{id}_{E}\\otimes\\mathsf{id}_{ m}\\otimes u_{n,\\alpha+s+t-m-n}^{*}u_{\\alpha+s-m,t})(\\mathsf{id}_{E}\\otimes u_{m, \\alpha+s-m}^{*}u_{\\alpha,s}\\otimes\\mathsf{id}_{t}) \\tag{3.3}\\]where for exchanging the two factors in the middle we used the identity \\((\\mathsf{id}_{K_{2}}\\otimes a)(a^{\\prime}\\otimes\\mathsf{id}_{L_{1}})=(a^{\\prime} \\otimes a)=(a^{\\prime}\\otimes\\mathsf{id}_{L_{2}})(\\mathsf{id}_{K_{1}}\\otimes a) \\in\\mathcal{B}^{a}(K_{1}\\otimes L_{1},K_{2}\\otimes L_{2})\\) that holds for every \\(a\\in\\mathcal{B}(L_{1},L_{2})\\) and \\(a^{\\prime}\\in\\mathcal{B}(K_{1},K_{2})\\). 
In the last two factors in the last line (3.3), ignoring \\(\\mathsf{id}_{E}\\), we obtain \\[(\\mathsf{id}_{m}\\otimes u^{*}_{n,\\alpha+s+t-m-n}u_{\\alpha+s-m,t})(u^{*}_{m,\\alpha+s-m}u _{\\alpha,s}\\otimes\\mathsf{id}_{t})\\\\ =\\ (\\mathsf{id}_{m}\\otimes u^{*}_{n,\\alpha+s+t-m-n})(\\mathsf{id}_ {m}\\otimes u_{\\alpha+s-m,t})(u^{*}_{m,\\alpha+s-m}\\otimes\\mathsf{id}_{t})(u_{ \\alpha,s}\\otimes\\mathsf{id}_{t})\\\\ =\\ (\\mathsf{id}_{m}\\otimes u^{*}_{n,\\alpha+s+t-m-n})u^{*}_{m, \\alpha+s+t-m}u_{\\alpha+s,t}(u_{\\alpha,s}\\otimes\\mathsf{id}_{t})\\\\ =\\ (u^{*}_{n,m}\\otimes\\mathsf{id}_{\\alpha+s+t-m-n})u^{*}_{m+n, \\alpha+s+t-m-n}u_{\\alpha,s+t}(\\mathsf{id}_{\\alpha}\\otimes u_{s,t}).\\] Using associativity for the first two factors in that last line of (3.3), we find \\[(\\tilde{v}_{n}\\otimes\\mathsf{id}_{\\alpha+s+t-m-n})(\\tilde{v}_{m} \\otimes\\mathsf{id}_{n}\\otimes\\mathsf{id}_{\\alpha+s+t-m-n})\\\\ =\\ (\\tilde{v}_{m+n}\\otimes\\mathsf{id}_{m+n}\\otimes\\mathsf{id}_{ \\alpha+s+t-m-n})(\\mathsf{id}_{E}\\otimes u_{m,n}\\otimes\\mathsf{id}_{\\alpha+s+t -m-n}).\\] Putting everything together, the factors containing \\(u_{m.n}\\) and \\(u^{*}_{m,n}\\) cancel out and we obtain \\[(\\tilde{v}_{m+n}\\otimes\\mathsf{id}_{m+n}\\otimes\\mathsf{id}_{ \\alpha+s+t-m-n})(\\mathsf{id}_{E}\\otimes u_{m,n}\\otimes\\mathsf{id}_{\\alpha+s+t -m-n})\\\\ \\times(\\mathsf{id}_{E}\\otimes u^{*}_{n,m}\\otimes\\mathsf{id}_{ \\alpha+s+t-m-n})(\\mathsf{id}_{E}\\otimes u^{*}_{m+n,\\alpha+s+t-m-n})\\\\ \\times(\\mathsf{id}_{E}\\otimes u_{\\alpha,s+t})(\\mathsf{id}_{E} \\otimes\\mathsf{id}_{\\alpha}\\otimes u_{s,t})\\\\ =\\ (\\tilde{v}_{m+n}\\otimes\\mathsf{id}_{m+n}\\otimes\\mathsf{id}_{ \\alpha+s+t-m-n})(\\mathsf{id}_{E}\\otimes u^{*}_{m+n,\\alpha+s+t-m-n})\\\\ \\times(\\mathsf{id}_{E}\\otimes u_{\\alpha,s+t})(\\mathsf{id}_{E} \\otimes\\mathsf{id}_{\\alpha}\\otimes u_{s,t}).\\] As \\(m+n=\\{\\alpha+s+t\\}\\), this is exactly how \\(u_{s+t}(\\mathsf{id}_{(E\\otimes E_{\\alpha})_{\\alpha\\in(0,1]}}\\otimes u_{s,t})\\) acts on \\(f_{\\alpha}\\otimes x_{s}\\otimes y_{t}\\). The mapping \\(f\\otimes x_{t}\\mapsto v_{t}(f\\otimes x_{t})\\) is fibre-wise unitary. Measurability of the product system structure implies that \\(v_{t}\\) sends measurable sections to measurable sections. So, by translational invariance of the Lebesgue measure, \\(v_{t}\\) sends square integrable sections isometrically to square integrable sections and, therefore, defines a unitary \\(v_{t}\\colon F\\otimes E_{t}\\to F\\). We summarize: **3.2 Theorem**.: \\((v,K)\\) _is a left dilation of \\(E^{\\otimes}\\)._ Let \\(\\mathcal{F}_{K,L}\\colon K\\otimes L\\to L\\otimes K\\) denote the canonical _flip_ of the factors in a tensor product of two Hilbert spaces \\(K\\) and \\(L\\). Omitting the obvious proof, we add: **3.3 Theorem**.: _If \\((\\tilde{v},L)\\) is the left dilation constructed as before for the opposite Arveson system of \\(E^{\\otimes}\\), then \\((w,L)\\) defined by setting \\(w_{t}=\\tilde{v}_{t}\\circ\\mathcal{F}_{E_{t},L}\\) is a right dilation of \\(E^{\\otimes}\\)._ **3.4 Remark**.: Once more, we emphasize that an operation like the flip \\(\\mathcal{F}\\) is not available for Hilbert modules. In fact, in the module case \\(K\\) will be a right Hilbert module, while \\(L\\) will be a Hilbert space with a nondegenerate representation. Also,in the formulation of Theorem 3.3 there would not occur the opposite system of \\(E^{\\otimes}\\) but its commutant system. But, the commutant destroys continuity properties. 
Therefore, in a formulation for modules Theorem 3.3 must be reproved from scratch starting with a right dilation of the discrete subsystem (guaranteed in [30]) inverting in the preceeding construction the orders of the factors in all tensor products. Remark 3.5: Recall that Proposition 3.1 is a statement about an operation acting pointwise on sections and not just a statement almost everywhere. Therefore, if we replace the translation invariant Lebesgue measure by the translation invariant counting measure, so that the direct integrals become direct sums, then measurability of sections does no longer play any role. Therefore, also algebraic Arveson systems admit left and right dilations. Just that the dilation spaces might be nonseparable and the correponding \\(E_{0}\\)-semigroups noncontinuous. The same remains true for algebraic product systems of Hilbert modules as long as we can guarantee (by [30]) dilations of the discrete subsystem; see [30]. ## 4 Continuity As indicated in the introduction we show that the unitary semigroup on \\(K\\otimes L\\), defined with the help of the left and the right dilation from Theorems 3.2 and 3.3 by (1.1), is strongly continuous. This shows also that the \\(E_{0}\\)-semigroup determined by \\((v,K)\\) is strongly continuous. First of all, note that \\(E\\) (carrying the dilation of the discrete subsystem) and, therefore, also \\(K\\) and \\(L\\) are separable. On a separable Hilbert space \\(H\\) measurability and weak measurability are equivalent. For checking weak measurability of \\(t\\mapsto x(t)\\in H\\) it is sufficient to check measurability of \\(t\\mapsto\\langle y,x(t)\\rangle\\) for \\(y\\) from a total subset of \\(H\\). Also, for checking strong continuity of a unitary semigroup it is sufficient to check weak measurability. ### Proposition \\(t\\mapsto u_{t}\\) _is weakly measurable and, therefore, strongly continuous._ Let \\(\\Big{\\{}\\big{(}e_{t}^{i}\\big{)}_{t\\in(0,\\infty)}\\colon i\\in\\mathbb{N}\\Big{\\}}\\) a mesaurable orthonormal basis for \\(E^{\\otimes}\\) (see the introduction). So \\[u_{t}\\ =\\ (v_{t}\\otimes\\mathsf{id}_{L})\\Big{(}\\!\\sum_{i\\in\\mathbb{N}}\\mathsf{ id}_{K}\\otimes\\!e_{t}^{i}e_{t}^{i^{*}}\\otimes\\mathsf{id}_{L}\\Big{)}(\\mathsf{id}_{K} \\otimes\\!w_{t}^{*})\\ =\\ \\sum_{i\\in\\mathbb{N}}(v_{t}\\otimes e_{t}^{i})\\otimes({e_{t}^{i}}^{*} \\otimes w_{t}^{*}),\\] where for every \\(x\\in E_{t}\\) we define the operator \\(v_{t}\\otimes x\\colon k\\mapsto v_{t}(k\\otimes x)\\) in \\(\\mathcal{B}(K)\\) and the operator \\(x^{*}\\otimes w_{t}^{*}\\in\\mathcal{B}(L)\\) as the adjoint of \\(x\\otimes w_{t}\\) defined in a way analogous to the definition of \\(v_{t}\\otimes x\\). For every measurable section \\(y=\\big{(}y_{\\alpha}\\big{)}_{\\alpha\\in(0,1]}\\) (\\(y_{\\alpha}\\in E_{\\alpha}\\)) and \\(x\\in E\\) the function \\((\\alpha,t)\\mapsto v_{t}(x\\otimes y\\otimes e_{t}^{i})_{\\alpha}\\in E\\otimes E_{\\alpha}\\) is measurable and, clearly, square-integrable over \\((0,1]\\times C\\) for every compact interval \\(C\\). Calculating an inner product with an element \\(k\\in K\\) means integrating the inner product \\(\\langle k_{\\alpha},v_{t}(x\\otimes y\\otimes e_{t}^{i})_{\\alpha}\\rangle\\) over \\(\\alpha\\in(0,1]\\). By _Cauchy-Schwarz inequality_ and _Fubini'stheorem_ the resulting function of \\(t\\) is measurable (and square-integrable over every compact interval \\(C\\)). 
In other words, the \\(\\mathcal{B}(K)\\)-valued functions \\(t\\mapsto v_{t}\\otimes e_{t}^{i}\\) are all weakly measurable. Similarly, the \\(\\mathcal{B}(L)\\)-valued functions \\(t\\mapsto{e_{t}^{i}}^{*}\\otimes w_{t}^{*}\\) are weakly measurable. By an application of the _dominated convergence theorem_ we find that \\[t\\ \\longmapsto\\ \\Big{\\langle}(k\\otimes\\ell)\\,,\\,\\Big{(}\\sum_{i \\in\\mathbb{N}}(v_{t}\\otimes e_{t}^{i})\\otimes({e_{t}^{i}}^{*}\\otimes w_{t}^{*} )\\Big{)}(k^{\\prime}\\otimes\\ell^{\\prime})\\Big{\\rangle}\\\\ =\\ \\sum_{i\\in\\mathbb{N}}\\langle k,(v_{t}\\otimes e_{t}^{i})k^{ \\prime}\\rangle\\langle\\ell,({e_{t}^{i}}^{*}\\otimes w_{t}^{*})\\ell^{\\prime}\\rangle\\] is measurable for all \\(k,k\\in K;\\ell,\\ell^{\\prime}\\in L\\). In conclusion \\(t\\mapsto u_{t}\\) is weakly measurable. This concludes the proof of Arveson's theorem: Theorem 4.2: _For every Arveson system \\(E^{\\otimes}\\) there exists a strongly continuous \\(E_{0}\\)-semigroup having \\(E^{\\otimes}\\) as associated Bhat system. By passing to the opposite Arveson system there exists also a strongly continuous \\(E_{0}\\)-semigroup having \\(E^{\\otimes}\\) as associated Arveson system._ ## Appendix **A.1 Theorem**.: _Every discrete product system \\(\\big{(}E_{n}\\big{)}_{n\\in\\mathbb{N}}\\) of (infinite-dimendional separable) Hilbert spaces \\(E_{n}\\) admits a left and a right dilation on an (infinite-dimensional separable) Hilbert space._ Proof.: It is known since the defintion of product systems in [10] that existence of a (unital) unit allows to construct easily a representation (that is, a right dilation). This does not depend on the index set \\(\\mathbb{N}\\) or \\(\\mathbb{R}_{+}\\setminus\\{0\\}\\). We repeat here the construction of a left dilation from [1] reduced to the case of Hilbert spaces. It is noteworthy that for Hilbert modules existence of a unit vector constitutes a serious problem, while every nonzero Hilbert space has unit vectors in abundance. Choose a unit vector \\(\\xi_{1}\\in E_{1}\\) and define \\(\\xi_{n}:=\\xi^{\\otimes n}\\in E_{n}=E_{1}^{\\otimes n}\\). (The \\(\\xi_{n}\\) form a unit in the sense that \\(\\xi_{m}\\otimes\\xi_{n}=\\xi_{m+n}\\).) The mappings \\(\\xi_{m}\\otimes\\mathsf{id}_{n}\\colon x_{n}\\mapsto\\xi_{m}\\otimes x_{n}\\) form an inductive system of isometric embeddings \\(E_{n}\\to E_{m}\\otimes E_{n}=E_{m+n}\\). The inductive limit \\(E:=\\overline{\\lim\\operatorname{ind}_{n}E_{n}}\\) comes shipped with unitaries \\(v_{n}\\to E\\otimes E_{n}\\to E\\) (namely, the limits of \\(u_{m,n}\\) for \\(m\\to\\infty\\)) that form a left dilation. For a right dilation either proceed as in Theorem 3.3 (just now for discrete index set) or repeat the preceeding construction for the isometric embeddings \\(\\mathsf{id}_{n}\\otimes\\xi_{m}\\). (Note that none of these suggestions works for Hilbert modules; cf. Remark 3.4.) Acknowledgements.We would like to thank W. Arveson and V. Liebscher for intriguing discussions. This work is supported by research funds of University of Molise and Italian MIUR (PRIN 2005). Note added after acceptance.After acceptance of this note Arveson [10] has provided yet another short proof. In [11] we show that the two proofs, ours here and Arveson's in [1], actually lead to unitarily equivalent constructions. The discussions in [10] unifies the advantages of the proof here (unitality of the endomorphisms) and in [1] (no problems with associativity). 
In Skeide [10] we applied Arveson's idea to the case of continuous product systems. ## References * [Arv89a] W. Arveson, _Continuous analogues of Fock space_, Mem. Amer. Math. Soc., no. 409, American Mathematical Society, 1989.
With every \\(E_{0}\\)-semigroup (acting on the algebra of bounded operators on a separable infinite-dimensional Hilbert space) there is an associated Arveson system. One of the most important results about Arveson systems is that every Arveson system is the one associated with an \\(E_{0}\\)-semigroup. In these notes we give a new proof of this result that is considerably simpler than the existing ones and allows for a generalization to product systems of Hilbert modules (to be published elsewhere).
# Short range correlations in relativistic nuclear matter models P.K. Panda, Joao da Providencia and Constanca Providencia Departamento de Fisica, Universidade de Coimbra, 3000 Coimbra, Portugal November 3, 2021 ###### We discuss the role of short range correlations in a relativistic approach to the description of nuclear matter. Several procedures may be used to introduce short range correlations into the model wave function. All have something in common with the so called \\(e^{S}\\) method or coupled-cluster expansion, introduced by Coester and Kummel [1] and most elegantly and extensively developed and applied to diverse quantal systems by R. Bishop [2]. In this short note, we consider the unitary operator method as proposed by Villars [3], which automatically guarantees that the correlated state is normalized. The general idea of introducing short range correlations in systems with short range interactions exists for a long time [4; 5] but has not been pursued for the relativistic case. Non-relativistic calculations based on realistic NN potentials predict equilibrium points which do not reproduce simultaneously the binding energy and saturation density. Either the saturation density is reproduced but the binding energy is too small, or the binding energy is reproduced at too high a density [6]. In order to solve this problem, the existence of a repulsive potential or density-dependent repulsive mechanism [7] is usually assumed. Due to Lorentz covariance and self-consistency, relativistic mean field theories [8] include automatically contributions which are equivalent to \\(n\\)-body repulsive potentials in non-relativistic approaches. The relativistic quenching of the scalar field provides a mechanism for saturation, though, by itself it may lead to too small an effective mass and too large incompressibility of nuclear matter, a situation which is encountered in the Walecka model [8]. In the non-relativistic case, we find a wound in the relative wave function if the vector interaction is stronger than the scalar interaction. However, this may not be the case if the interaction is strong enough for short distances. Then, the effective potential can actually turn attractive at short distances and the wave function may well have a node, as proposed by V. Neudatchin [9] and advocated by S. Moszkowski [10], on the basis of the the quark structure of the nucleon. The situation with short range correlations may be more subtle than might be thought from a simple non-relativistic model. We show that a short range node in the relative wave function may be encountered in relativistic models, although it remains to be seen to what extent the relativistic description simulates the quark structure. In non-relativistic models the saturation arises from the interplay between a long range attraction and a short range repulsion, so strong that it is indispensable to take short range correlations into account. In relativistic mean field models, the parameters are phenomenologically fitted to the saturation properties of nuclear matter. Although in this approach short range correlation effects may be accounted for, to some extent, by the model parameters, it is our aim to study explicitly the consequences of actual short range correlations. In a previous publication [12] we have discussed the effect of the correlations in the ground state properties of nuclear matter in the framework of the Hartree-Fock approximation using an effective Hamiltonian derived from the \\(\\sigma-\\omega\\) Walecka model. 
We have shown, for interactions mediated only by sigma and omega mesons, that the equation of state (EOS) becomes considerably softer when correlations are taken into account, provided the correlation function is treated variationally, always paying careful attention to the constraint imposed by the \"healing distance\" requirement. In the present note we will work within the same approach and will include also the exchange of pions and \\(\\rho\\)-mesons. Preliminary results of the present work have been presented in [11] We start by considering the effective Hamiltonian [12] \\[H=\\int\\psi^{\\dagger}_{\\alpha}(\\vec{x})(-i\\vec{\\alpha}\\cdot\\vec{\ abla}+\\beta M )_{\\alpha\\beta}\\psi_{\\beta}(\\vec{x})\\ d\\vec{x}+\\frac{1}{2}\\int\\psi^{\\dagger}_{ \\alpha}(\\vec{x})\\psi^{\\dagger}_{\\gamma}(\\vec{y})V_{\\alpha\\beta,\\gamma\\delta}( |\\vec{x}-\\vec{y}|)\\psi_{\\delta}(\\vec{y})\\psi_{\\beta}(\\vec{x})\\ d\\vec{x}\\ d \\vec{y}, \\tag{1}\\] where the exchange of \\(\\sigma\\), \\(\\omega\\), \\(\\rho\\) and \\(\\pi\\) mesons is taken into account, so that \\[V_{\\alpha\\beta,\\gamma\\delta}(r)=\\sum_{i=\\sigma,\\omega,\\rho,\\pi}V^{i}_{\\alpha \\beta,\\gamma\\delta}(r) \\tag{2}\\]with \\[V^{\\sigma}_{\\alpha\\beta,\\gamma\\delta}(r)=-\\frac{g^{2}_{\\sigma}}{4\\pi}(\\beta)_{ \\alpha\\beta}(\\beta)_{\\gamma\\delta}\\frac{e^{-m_{\\sigma}r}}{r},\\] \\[V^{\\omega}_{\\alpha\\beta,\\gamma\\delta}(r)=\\frac{g^{2}_{\\omega}}{4\\pi}\\left( \\delta_{\\alpha\\beta}\\delta_{\\gamma\\delta}-\\vec{\\alpha}_{\\alpha\\beta}\\cdot\\vec {\\alpha}_{\\gamma\\delta}\\right)\\frac{e^{-m_{\\omega}r}}{r},\\] \\[V^{\\rho}_{\\alpha\\beta,\\gamma\\delta}(r)=\\frac{g^{2}_{\\rho}}{4\\pi}\\left(\\delta_ {\\alpha\\beta}\\delta_{\\gamma\\delta}-\\vec{\\alpha}_{\\alpha\\beta}\\cdot\\vec{\\alpha }_{\\gamma\\delta}\\right)\\vec{\\tau}_{1}\\cdot\\vec{\\tau}_{2}\\frac{e^{-m_{\\rho}r}}{ r}.\\] An interaction of the form [13] \\[V^{\\pi}_{\\alpha\\beta,\\gamma\\delta}(\\vec{r})=\\frac{1}{3}\\left[\\frac{f_{\\pi}}{m _{\\pi}}\\right]^{2}(\\Sigma_{i})_{\\alpha\\beta}(\\Sigma_{i})_{\\gamma\\delta}\\vec{ \\tau}_{1}\\cdot\\vec{\\tau}_{2}\\left[\\frac{4\\pi}{m^{3}_{\\pi}}\\delta(\\vec{r})- \\frac{e^{-m_{\\pi}r}}{m_{\\pi}r}\\right], \\tag{3}\\] where \\(\\Sigma_{i}=\\alpha_{i}\\gamma_{5}\\), describes the exchange of \\(\\pi\\) mesons. The first term in the above is the repulsive contact interaction and the second term is an attractive Yukawa potential. This can be rewritten in momentum space as \\[V^{\\pi}_{\\alpha\\beta\\gamma\\delta}(\\vec{q})=\\frac{1}{3}\\left[\\frac{f_{\\pi}}{m_ {\\pi}}\\right]^{2}(\\Sigma_{i})_{\\alpha\\beta}(\\Sigma_{i})_{\\gamma\\delta}\\ \\vec{\\tau}_{1}\\cdot\\vec{\\tau}_{2}\\left[\\frac{q^{2}}{q^{2}+m^{2}_{\\pi}}\\right]. \\tag{4}\\] In equation (1), \\(\\psi\\) is the nucleon field interacting through the scalar and vector potentials and \\(\\vec{\\alpha},\\ \\beta\\) are the Dirac-matrices. The equal time quantization condition for the nucleons reads, \\([\\psi_{\\alpha}(\\vec{x},t),\\psi_{\\beta}(\\vec{y},t)^{\\dagger}]_{+}=\\delta_{ \\alpha\\beta}\\delta(\\vec{x}-\\vec{y})\\), where the indices \\(\\alpha\\) and \\(\\beta\\) refer to the spin. 
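As a consistency check of Eq. (4), the momentum-space form of the one-pion-exchange structure can be obtained numerically from the coordinate-space form (3). The sketch below is our own illustration, not part of the original calculation: it strips off the spin-isospin operators and the overall coupling \\(\\frac{1}{3}(f_{\\pi}/m_{\\pi})^{2}\\) and verifies that the Fourier transform of \\(4\\pi\\delta(\\vec{r})/m_{\\pi}^{3}-e^{-m_{\\pi}r}/(m_{\\pi}r)\\) reproduces the characteristic \\(q^{2}/(q^{2}+m_{\\pi}^{2})\\) behavior of Eq. (4), up to the constant \\(4\\pi/m_{\\pi}^{3}\\) absorbed in the normalization. All function names are ours.

```python
# Numerical cross-check of the Fourier-transform relation between the
# coordinate-space pion-exchange structure of Eq. (3) and its momentum-space
# form of Eq. (4).  Spin-isospin operators and the coupling (1/3)(f_pi/m_pi)^2
# are stripped off; natural units (hbar = c = 1), momenta and masses in MeV.
import numpy as np
from scipy.integrate import quad

m_pi = 138.0  # pion mass in MeV

def yukawa_ft(q, m):
    """3d Fourier transform of exp(-m r)/(m r); analytically 4 pi/(m (q^2+m^2)).
    The angular integration has already been carried out, leaving
    4 pi \int dr r^2 [exp(-m r)/(m r)] sin(q r)/(q r)."""
    integrand = lambda r: 4.0 * np.pi * np.exp(-m * r) * np.sin(q * r) / (m * q)
    val, _ = quad(integrand, 0.0, 50.0 / m, limit=200)
    return val

def pion_structure_momentum_space(q, m):
    """Contact term 4 pi/m^3 minus the Yukawa transform, cf. Eq. (3)."""
    return 4.0 * np.pi / m**3 - yukawa_ft(q, m)

for q in (50.0, 138.0, 300.0, 600.0):
    numeric = pion_structure_momentum_space(q, m_pi)
    closed = 4.0 * np.pi / m_pi**3 * q**2 / (q**2 + m_pi**2)
    print(f"q = {q:6.1f} MeV   numeric = {numeric:.6e}   closed form = {closed:.6e}")
```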
The field expansion for the field \\(\\psi\\) at time t=0 reads [14] \\[\\psi(\\vec{x})=\\frac{1}{\\sqrt{V}}\\sum_{r,k}\\left[U_{r}(\\vec{k})c_{r,\\vec{k}}+V_ {r}(-\\vec{k})\\hat{c}^{\\dagger}_{r,-\\vec{k}}\\right]e^{i\\vec{k}\\cdot\\vec{x}}, \\tag{5}\\] where \\(U_{r}\\) and \\(V_{r}\\) are \\[U_{r}(\\vec{k})=\\left(\\begin{array}{c}\\cos\\frac{\\chi(\\vec{k})}{2}\\\\ \\vec{\\sigma}\\cdot\\hat{k}\\sin\\frac{\\chi(\\vec{k})}{2}\\end{array}\\right)u_{r}\\ ;\\ \\ V_{r}(-\\vec{k})=\\left( \\begin{array}{c}-\\vec{\\sigma}\\cdot\\hat{k}\\sin\\frac{\\chi(\\vec{k})}{2}\\\\ \\cos\\frac{\\chi(\\vec{k})}{2}\\end{array}\\right)v_{r}. \\tag{6}\\] For free spinor fields, we have \\(\\cos\\chi(\\vec{k})=M/\\epsilon(\\vec{k})\\), \\(\\sin\\chi(\\vec{k})=|\\vec{k}|/\\epsilon(\\vec{k})\\) with \\(\\epsilon(\\vec{k})=\\sqrt{\\vec{k}^{2}+M^{2}}\\). However, we will deal with interacting fields so that we take the ansatz \\(\\cos\\chi(\\vec{k})=M^{*}(\\vec{k})/\\epsilon^{*}(\\vec{k})\\), \\(\\sin\\chi(\\vec{k})=|\\vec{k}^{*}|/\\epsilon^{*}(\\vec{k})\\), with \\(\\epsilon^{*}(\\vec{k})=\\sqrt{\\vec{k^{*}}^{2}+M^{*2}(\\vec{k})}\\), where \\(\\vec{k}^{*}\\) and \\(M^{*}(\\vec{k})\\) are the effective momentum and the effective mass, respectively, determined self-consistently by the Hartree-Fock (HF) prescription. The vacuum \\(\\mid 0\\rangle\\) is defined through \\(c_{r,\\vec{k}}\\mid 0\\rangle=\\hat{c}^{\\dagger}_{r,\\vec{k}}\\mid 0\\rangle=0\\); one-particle states are written \\(|\\vec{k},r\\rangle=c^{\\dagger}_{r,\\vec{k}}\\mid 0\\rangle\\); two-particle and three-particle uncorrelated states are written, respectively as \\(|\\vec{k},r;\\vec{k}^{\\prime},r^{\\prime}\\rangle=c^{\\dagger}_{r,\\vec{k}}\\ c^{ \\dagger}_{r^{\\prime},\\vec{k}^{\\prime}}\\mid 0\\rangle\\), and \\(|\\vec{k},r;\\vec{k}^{\\prime},r^{\\prime};\\vec{k}^{\\prime\\prime},r^{\\prime\\prime \\prime}\\rangle=c^{\\dagger}_{r,\\vec{k}}\\ c^{\\dagger}_{r^{\\prime},\\vec{k}^{\\prime} }\\ c^{\\dagger}_{r^{\\prime\\prime},\\vec{k}^{\\prime\\prime}}\\mid 0\\rangle\\), and so on. We now introduce the short range correlations through the unitary operator method. The correlated wave function [15] is \\(|\\Psi\\rangle=e^{i\\Omega}|\\Phi\\rangle\\) where \\(|\\Phi\\rangle\\) is a Slater determinant and \\(\\Omega\\) is, in general, a \\(n\\)-body Hermitian operator, splitting into a 2-body part, a 3-body part, etc.. The expectation value of \\(H\\) is \\[E=\\frac{\\langle\\Psi|H|\\Psi\\rangle}{\\langle\\Psi|\\Psi\\rangle}=\\frac{\\langle\\Phi|e^ {-i\\Omega}\\ H\\ e^{i\\Omega}|\\Phi\\rangle}{\\langle\\Phi|\\Phi\\rangle}. \\tag{7}\\] In the present calculation, we only take into account two-body correlations. Let us denote the two-body correlated wave function by \\(|\\vec{\\overline{k}},r;\\vec{k}^{\\prime},r^{\\prime}\\rangle=e^{i\\Omega}|\\vec{k},r; \\vec{k}^{\\prime},r^{\\prime}\\rangle\\approx f_{12}|\\vec{k},r;\\vec{k}^{\\prime},r^{ \\prime}\\rangle\\) where \\(f_{12}\\) is the short range correlation factor, the so-called Jastrow factor [16]. For simplicity, we consider \\(f_{12}=f(\\vec{r}_{12})\\), \\(\\vec{r}_{12}=\\vec{r}_{1}-\\vec{r}_{2}\\), and \\(f(r)=1-(\\alpha+\\beta r)\\ e^{-\\gamma r}\\) where \\(\\alpha\\), \\(\\beta\\) and \\(\\gamma\\) are parameters. The choice of a real function for the unitary operator has to be supplemented by a normalization condition (see eq. (8)) which assures unitarity to leading cluster order. The important effect of the short range correlations is the expression for the correlated ground-state energy. 
Here, in the leading order of the cluster expansion, the interaction matrix element \\(\\langle\\vec{k},r;\\vec{k}^{\\prime},r^{\\prime}|V_{12}|\\vec{k},r;\\vec{k}^{\\prime},r^ {\\prime}\\rangle\\) of the HF expression is replaced by \\(\\langle\\vec{\\overline{k}},r;\\vec{k}^{\\prime},r^{\\prime}|V_{12}+t_{1}+t_{2}|\\vec{ \\overline{k}},r;\\vec{k}^{\\prime},r^{\\prime}\\rangle-\\langle\\vec{k},r;\\vec{k}^{ \\prime},r^{\\prime}|t_{1}+t_{2}|\\vec{k},r;\\vec{k}^{\\prime},r^{\\prime}\\rangle\\), where \\(t_{i}\\) is the kinetic energy operator of particle \\(i\\). As argued by Moszkowski [17] and Bethe [18], it is expected that the true ground-state wave function of the nucleus, containing correlations, coincides with the independent particle, or HF wave function, for inter particle distances \\(r\\geq r_{h}\\), where \\(r_{h}\\approx 1\\) fm is the so-called \"healing distance\". This behavior is a consequence of the restrictions imposed by the Pauli Principle. A natural consequence of having the correlations introduced by a unitary operator is a normalization constraint on \\(f(r)\\), \\[\\int\\ (f^{2}(r)-1)\\ d^{3}r=0. \\tag{8}\\] The correlated ground state energy of symmetric nuclear matter reads \\[{\\cal E} = \\frac{\ u}{\\pi^{2}}\\int_{0}^{kr}k^{2}\\ dk\\ \\left[|k|\\sin\\chi(k)\\ +\\ M\\cos\\chi(k)\\right]\\ +\\ \\frac{\\tilde{F}_{\\sigma}(0)}{2}\\rho_{s}^{2}+\\frac{\\tilde{F}_{\\omega}(0)}{2} \\rho_{B}^{2}\\] \\[- \\frac{4}{(2\\pi)^{4}}\\int_{0}^{k_{f}}k^{2}\\ dk\\ k^{\\prime 2}\\ dk^{ \\prime}\\ \\Big{\\{}\\Big{[}|k|\\sin\\chi(k)+2\\ M\\cos\\chi(k)\\Big{]}I(k,k^{\\prime})+|k|\\ \\sin\\chi(k^{\\prime})\\ J(k,k^{\\prime})\\Big{\\}}\\] \\[+ \\frac{1}{(2\\pi)^{4}}\\int_{0}^{k_{f}}k\\ dk\\ k^{\\prime}\\ dk^{ \\prime}\\ \\Bigg{[}\\sum_{i=\\sigma,\\omega,\\rho,\\pi}A_{i}(k,k^{\\prime})+\\cos\\chi(k)\\cos \\chi(k^{\\prime})\\sum_{i=\\sigma,\\omega,\\rho,\\pi}B_{i}(k,k^{\\prime})\\] \\[+ \\sin\\chi(k)\\sin\\chi(k^{\\prime})\\sum_{i=\\sigma,\\omega,\\rho,\\pi}C_ {i}(k,k^{\\prime})\\Bigg{]}\\] where \\(A_{i}\\), \\(B_{i}\\), \\(C_{i}\\), \\(I\\) and \\(J\\) are exchange integrals defined in the appendix. In the above equation, the first term comes from the kinetic contribution, the second and third terms come respectively from the \\(\\sigma\\) and \\(\\omega\\) direct contributions to the correlated potential energy, the other terms arise from the exchange correlation contribution to the kinetic energy, and from the meson exchange contributions to the correlated potential energy. The direct term of the correlation contribution to the kinetic energy vanishes due to (8), and \\(\\rho_{B}\\) and \\(\\rho_{s}\\) are, respectively, the baryon and the scalar densities. The couplings \\(g_{\\sigma}\\), \\(g_{\\omega}\\), \\(g_{\\rho}\\), \\(g_{\\pi}\\), the meson masses, \\(m_{i}\\), \\(i=\\sigma,\\,\\omega,\\,\\rho,\\,\\pi\\) and the three parameters specifying the short range correlation function, \\(\\alpha\\), \\(\\beta\\) and \\(\\gamma\\) have to be fixed. The couplings \\(g_{\\sigma}\\) and \\(g_{\\omega}\\) are chosen so as to reproduce the ground state properties of nuclear matter. For the \\(\\rho\\) and \\(\\pi\\)-meson couplings we take the usual values \\(g_{\\rho}^{2}/4\\pi=0.55\\) and \\(f_{\\pi}^{2}/4\\pi=0.08\\). We choose \\(m_{\\sigma}=550\\) MeV, \\(m_{\\omega}=783\\) MeV, \\(m_{\\rho}=770\\) MeV and \\(m_{\\pi}=138\\) MeV. The normalization condition (8) determines \\(\\beta\\). We fix \\(\\alpha\\) by minimizing the energy. 
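Since \\(f^{2}(r)-1\\) contains only terms of the form \\(r^{n}e^{-\\gamma r}\\) and \\(r^{n}e^{-2\\gamma r}\\), the normalization condition (8) reduces, for given \\(\\alpha\\) and \\(\\gamma\\), to a quadratic equation in \\(\\beta\\) which can just as well be solved numerically. The following sketch is our own illustration (variable names are ours); as a cross-check, for \\(\\alpha=13.855\\) and \\(\\gamma=1000\\) MeV, the values quoted in Table 1 for the HF+corr(\\(\\sigma+\\omega\\)) case, the physical (negative) root should come out close to the tabulated \\(\\beta\\approx-2252\\) MeV, and the resulting \\(f(r)\\) exhibits the short-distance node discussed below in connection with Fig. 1.

```python
# Sketch: determine beta from the normalization constraint (8),
#   \int d^3r (f^2(r) - 1) = 0,   with   f(r) = 1 - (alpha + beta r) exp(-gamma r),
# and locate the node of f(r).  Natural units: r in MeV^-1, beta and gamma in MeV.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def f(r, alpha, beta, gamma):
    return 1.0 - (alpha + beta * r) * np.exp(-gamma * r)

def constraint(beta, alpha, gamma):
    """4 pi \int_0^infty dr r^2 (f^2 - 1), which must vanish by Eq. (8)."""
    integrand = lambda r: 4.0 * np.pi * r**2 * (f(r, alpha, beta, gamma)**2 - 1.0)
    val, _ = quad(integrand, 0.0, 40.0 / gamma, limit=200)
    return val

alpha, gamma = 13.855, 1000.0                 # HF+corr(sigma+omega) entry of Table 1
beta = brentq(lambda b: constraint(b, alpha, gamma), -1.0e4, 0.0)
print(f"beta = {beta:.1f} MeV")               # should be close to the tabulated -2252 MeV

# node of the correlation function (cf. the discussion of Fig. 1)
r_node = brentq(lambda r: f(r, alpha, beta, gamma), 1.0e-6, 10.0 / gamma)
print(f"node of f(r) at r = {r_node:.4e} MeV^-1 = {r_node * 197.327:.3f} fm")
```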
The parameter \\(\\gamma\\) is such that it reproduces a reasonable healing distance, assuming that this quantity decreases as \\(k_{F}\\) increases. Therefore, we assume that \\(\\gamma\\) depends on \\(k_{F}\\) according to \\(\\gamma=a_{1}+a_{2}\\,k_{F}/k_{F0}\\), where the parameters \\(a_{1}\\) and \\(a_{2}\\) are conveniently chosen. In tables 1 and 2, we have tabulated the parameters used in our calculation together with the relative effective mass \\(M^{*}/M\\), the kinetic energy \\({\\cal T}/\\rho_{B}-M\\), the direct and exchange parts of the potential energy (\\({\\cal V}_{d}/\\rho_{B}\\) and \\({\\cal V}_{e}/\\rho_{B}\\) respectively) with correlation, and the correlation contribution to the kinetic energy \\({\\cal T}^{C}/\\rho_{B}\\), all calculated at the saturation point. Notice that a HF calculation produces an EOS which is stiffer than the one obtained at the Hartree level. However, correlations reduce the effective mass and soften the EOS. In fact, the contribution of direct and exchange correlation terms are of the same order of magnitude of the other terms in the energy per particle. Moreover, the values of the couplings \\(g_{\\sigma}\\) and \\(g_{\\omega}\\) which reproduce the saturation density and binding energy strongly depend on the correlations, being considerably reduced by their presence, which is quite a remarkable fact. Hence, short-range correlations cannot be disregarded. The correlation function \\(f(r)\\) is plotted in figure 1 as a function of the relative distance for two different situations: in one calculation the contact term coming from the pion contribution was included and the other curve was obtained excluding it. It is clear from both curves that the correlations give rise to an extra node in the dependence of the ground-state wave-function on the relative coordinate, contrary to what generally happens in non-relativistic calculations with a hard core, when the wave function acquires a wound. However, both curves are rather different behaviors: when the contact term is included the node appears very close to zero and the correlation function has a quite flat behavior. This is a sign that the correlation function used was not flexible enough to respond simultaneously to the repulsive contact interaction and to the attractive component of the interaction. We have computed the binding energies as function of the density for the Hartree, HF and HF+Corr and compared with the quark-meson-coupling model (QMC) [19], as can be seen from fig. 2. The inclusion of correlations make the equation of state (EOS) softer than Hartree or HF calculations, if the contact term of the pion contribution is neglected. We also see that the inclusion of the \\(\\rho\\) and \\(\\pi\\)-mesons brings extra softness to the EOS. However, the curve which describes the model with contact term and with the correlations taken into account shows a very stiff behavior. This is due to the fact that a very simplified parametrization of the correlation function was used, which was not flexible enough to respond to the repulsive and to attractive components of the interaction. A softer EOS around nuclear matter saturation density is also provided by QMC. In Fig. 3, we plot the effective mass versus density of nuclear matter. If correlations are included and the contact term is neglected the effective mass does not decrease so fast with the increase of density as in a Hartree or, even worse, HF calculation. This explains the softer behavior of the EOS with correlations. 
However, the variation of the effective mass with density is still smaller within the QMC model. Correlations also affect the behavior of the neutron matter equation of state (EoS), which is plotted in Fig.4. In this calculation we include the four mesons \\(\\sigma,\\,\\omega,\\,\\rho\\) and \\(\\pi\\). We perform the calculation with and without short range correlations and with and without the delta term of the \\(\\pi\\) contribution. The delta term makes the equation of state very hard in both calculations: with and without short range correlations. We have already discussed that the correlation function should be more flexible to deal with the delta term. Both EoS without the delta term show an unrealistic behavior: either binding in the Hartree Fock calculation or a shallow minimum in the HF plus correlations calculation. Neutron star observation data are compatible with a zero density surface which would not be the case if a minimum at finite density would occur in the neutron star EoS. It is, however important to point out that the inclusion of correlations almost lifts this behavior of the neutron EoS typical of the Walecka model [20]. We stress that a node occurs in the relative wave function, and the EOS becomes softer, when the energy is optimized with respect to variations of \\(\\alpha\\). However, if \\(\\alpha\\) is set equal to 1, so that the node in the relative wave function is replaced by a wound, the EOS remains stiff. A variational treatment is therefore essential. We conclude that the explicit introduction of correlations has important effects. The behavior of the EOS and the values of the effective coupling constants are most sensitive to the presence of short range correlations, both when we keep and when we omit the contact term in the pion interaction. Finally, we observe that the presence of flexible short range correlations tends to soften the EOS. In fig. 2, the curve \"HF-corr-with contact term\" appears to contradict this statement. This is because the correlation function used was not flexible enough to respond simultaneously to the repulsive contact interaction and to the attractive component of the interaction. It is also clear that a richer parametrization of the correlation function, such as \\(f(r)=1-(1+\\alpha r+\\beta r^{2})\\ e^{-\\gamma r}\\), is required if the contact term is included. Work in this direction are in progress. It should be said that conclusions drawn from a study of this kind have only qualitative strength since the healing distance constraint imposed on the parameters of the correlation function is not completely unambiguous and since higher order terms of the cluster expansion of the expectation values have not been estimated. ###### Acknowledgements. Valuable discussions with S. Moszkowski are gratefully acknowledged. This work was partially supported by FCT (Portugal) under the projects POCTI/FP/FNU/50326/2003, and POCTI/FIS/451/94. PKP is grateful for the friendly atmosphere at Department of Physics, University of Coimbra, where this work was partially done. 
## I Appendix The angular integrals are given by \\[A_{i}(k,k^{\\prime})=B_{i}(k,k^{\\prime})=2\\pi\\ \\frac{g_{i}^{2}}{4\\pi}\\int_{0}^{ \\pi}d\\cos\\theta\\ \\tilde{F}_{i}(k,k^{\\prime},\\cos\\theta),\\] \\[C_{i}(k,k^{\\prime})=2\\pi\\ \\frac{g_{i}^{2}}{4\\pi}\\int_{0}^{\\pi}\\cos\\theta\\ d\\cos \\theta\\ \\tilde{F}_{i}(k,k^{\\prime},\\cos\\theta),\\] \\[I(k,k^{\\prime})=2\\pi\\int_{0}^{\\pi}d\\cos\\theta\\ \\tilde{C}_{1}(k,k^{\\prime},\\cos \\theta),\\] and \\[J(k,k^{\\prime})=2\\pi\\int_{0}^{\\pi}\\cos\\theta\\ d\\cos\\theta\\ \\tilde{C}_{1}(k,k^{ \\prime},\\cos\\theta),\\]where \\[\\tilde{F}_{i}(\\vec{k},\\vec{k}^{\\prime})=\\int\\left[f(r)V_{i}(r)f(r)\\right]\\ e^{i( \\vec{k}-\\vec{k}^{\\prime})\\cdot\\vec{r}}\\ d\\vec{r}\\qquad\\mbox{and}\\qquad\\tilde{C}_ {1}(\\vec{k},\\vec{k}^{\\prime})=\\int(f2(r)-1)\\ e^{i(\\vec{k}-\\vec{k}^{\\prime}) \\cdot\\vec{r}}\\ d\\vec{r}.\\] ## References * (1) F. Coester, Nucl. Phys. **7** 421 (1958); F. Coester and H. Kummel, Nucl. Phys. **17** 477 (1960). * (2) R. F. Bishop and H. Kummel, Phys. Today **40(3)** 95 (1991); R. F. Bishop, Theor. Chem. Acta **80** 95 (1991). * (3) F. Villars, \"Proceedings of the International School of Physics, 'Enrico Fermi'-Course 23, (1961).\" Academic Press, New York, 1963; J.S. Bell, \" Lectures on the Many-Body Problem, First Bergen International School of Physics.\" Benjamin, New York, (1962). * (4) J. da Providencia and C.M. Shakin, Ann. Phys.(NY) **30**, 95 (1964). * (5) H. Feldmeier, T. Neff, R. Roth, and J. Schnack, Nucl. Phys. **A 632**, 61 (1998); T. Neff and H. Feldmeier, Nucl. Phys. **713** 311 (2003). * (6) F. Coester, S. Cohen, B.D. Day, and C.M. Vincent, Phys. Rev. **C 1**, 769 (1970); R. Brockmann and R. Machleidt, Phys. Rev. **C 42**, 1965 (1990). * (7) R.B. Wiringa, V. Fiks and A. Fabrocini, Phys. Rev. **C 38**, 1010 (1988); W. Zuo, A. Lejeune, U. Lombardo and J.-F. Mathiot, Nucl. Phys. **A 706**, 418 (2002). * (8) B.D. Serot, J.D. Walecka, Int. J. Mod. Phys. **E6**, 515 (1997). * (9) V.G. Neudatchin, I.T. Obukhovsky, V.I. Kukulin and N.F. Golovanova, Phys. Rec. **C 11**, 128 (1975). * (10) S. A. Moszkowski, Proceedings online of the Conference on Microscopic Approaches to Many-Body Theory (MAMBT), in honor of Ray Bishop, Manchester, UK, [http://www.qmbt.org/MAMBT/pdf/Moszkowski.pdf](http://www.qmbt.org/MAMBT/pdf/Moszkowski.pdf), 2005 * (11) P. K. Panda, J. da Providencia, C. Providencia and D. P. Menezes, Proceedings online of the Conference on Microscopic Approaches to Many-Body Theory (MAMBT), in honor of Ray Bishop, Manchester, UK, [http://www.qmbt.org/MAMBT/pdf/Providenceia.pdf](http://www.qmbt.org/MAMBT/pdf/Providenceia.pdf), 2005 * (12) P.K. Panda, D.P. Menezes, C. Providencia and J. da Providencia, Phys. Rev. **C 71** 015801 (2005); P.K. Panda, D.P. Menezes, C. Providencia and J. da Providencia, Braz. J. Phys **35** 873 (2005). * (13) A. Bouyssy, J.-F. Mathiot and N.Van Giai and S. Marcos, Phys. Rev. **C 36**, 380 (1987). * (14) A. Mishra, P.K. Panda, S. Schramm, J. Reinhardt and W. Greiner, Phys. Rev. **C 56**, 1380 (1997). * (15) J. da Providencia and C. M. Shakin, Phys. Rev **C 4**, 1560 (1971); C. M. Shakin, Phys. Rev **C 4**, 684 (1971). * (16) R. Jastrow, Phys. Rev. **98**, 1479 (1955). * (17) S.A. Moszkowski and B.L. Scott, Ann. Phys. (N.Y.), **11**, 65 (1960). * (18) H. Bethe, Ann. Rev. Nucl. Sci. **21**, 93 (1971). * (19) P. A. M. Guichon, Phys. Lett. **B 200**, 235 (1988). K. Saito and A.W. Thomas, Phys. Lett. B **327**, 9 (1994); P.K. Panda, A. Mishra, J.M. Eisenberg, W. 
Greiner, Phys. Rev. **C 56**, 3134 (1997). * (20) S. A. Chin, Ann. Phys. (NY) 108, 301 (1977). \\begin{table} \\begin{tabular}{c c c c c c} & \\(g_{\\sigma}\\) & \\(g_{\\omega}\\) & \\(\\alpha\\) & \\(\\beta\\) & \\(\\gamma\\) \\\\ & & & & (MeV) & (MeV) \\\\ \\hline Hartree & 11.079 & 13.806 & & & \\\\ HF & 10.432 & 12.223 & & & \\\\ HF+corr(\\(\\sigma+\\omega\\)) & 4.4559 & 2.6098 & 13.855 & -2252.448 & 1000 \\\\ HF+corr(\\(\\sigma+\\omega+\\rho+\\pi\\)) & 3.1925 & 2.199 & 13.822 & -2258.037 & 1000 \\\\ HF+corr(\\(\\sigma+\\omega+\\rho+\\pi+\\pi\\)(contact term)) & 11.662 & 12.930 & 1.5435 & -497.391 & 1000 \\\\ \\end{tabular} \\end{table} Table 1: Parameters of nuclear matter. We have used a density dependent parameter (HF+corr) \\(\\gamma=600+400\\ k_{F}/k_{F0}\\) MeV for the correlation. These parameters were obtained with fixed: \\(M=939\\) MeV, \\(m_{\\sigma}=550\\) MeV, \\(m_{\\omega}=783\\) MeV, \\(m_{\\rho}=770\\) MeV, \\(m_{\\pi}=138\\) MeV, \\(g_{\\rho}^{2}/4\\pi=0.55\\) and \\(f_{\\pi}^{2}/4\\pi=0.08\\) at \\(k_{F0}=1.3\\) fm\\({}^{-1}\\) with binding energy \\(E_{B}=\\varepsilon/\\rho-M=-15.75\\) MeV. The value of \\(\\gamma\\) refers to saturation density. \\begin{table} \\begin{tabular}{c c c c c c c} & \\(M^{*}/M\\) & \\({\\cal T}/\\rho_{B}-M\\) & \\({\\cal V}_{d}/\\rho_{B}\\) & \\({\\cal V}_{e}/\\rho_{B}\\) & \\({\\cal T}^{C}/\\rho_{B}\\) & contact term \\\\ & & (MeV) & (MeV) & (MeV) & (MeV) & (MeV) \\\\ \\hline Hartree & 0.540 & 8.11 & -23.86 & & \\\\ HF & 0.515 & 5.87 & -37.45 & 15.83 & & \\\\ HF+corr (\\(\\sigma+\\omega\\)) & 0.625 & 15.95 & -73.12 & 20.46 & 19.96 & \\\\ HF+corr (\\(\\sigma+\\omega+\\rho+\\pi\\)) & 0.645 & 16.41 & -17.76 & -34.57 & 20.16 & \\\\ HF+corr (\\(\\sigma+\\omega+\\rho+\\pi+\\pi\\)(contact term)) & 0.517 & 6.67 & -42.30 & 11.11 & 2.53 & 6.21 \\\\ \\end{tabular} \\end{table} Table 2: Ground state properties of nuclear matter at saturation density. Figure 1: The correlation function \\(f(r)\\) for the calculation with and without the delta term of the \\(\\pi\\) contribution. Figure 3: Effective mass as a function of density. Figure 2: The equation of state with and without short range correlations. Figure 4: Equation of state of neutron matter (solid line: \\(\\sigma+\\omega+\\rho+\\pi+\\mbox{correlation}\\) (without delta), dash-dotted line: \\(\\sigma+\\omega+\\rho+\\pi+\\mbox{correlation}\\) (with delta), dotted line: \\(\\sigma+\\omega+\\rho+\\pi\\) (without delta), dashed line: \\(\\sigma+\\omega+\\rho+\\pi\\) (with delta)
Short range correlations are introduced through the unitary operator method in a relativistic approach to the equation of state of infinite nuclear matter in the framework of the Hartree-Fock approximation. It is shown that the correlations give rise to an extra node in the dependence of the ground-state wave-function on the relative coordinate of the nucleons, contrary to what happens in non-relativistic calculations with a hard core. The effect of the correlations on the ground state properties of nuclear matter and neutron matter is studied. The nucleon effective mass and the equation of state (EOS) are very sensitive to short range correlations. In particular, if the pion contact term is neglected, a softening of the EOS is predicted. Correlations also have an important effect on the neutron matter EOS, which presents no binding but only a very shallow minimum, in contrast with the Walecka model.
## 1 Introduction and summary The properties of strongly interacting matter change distinctly during the transition from low to high temperatures [1], as is currently explored at heavy-ion colliders. Whereas the low-temperature phase can be described in terms of ordinary hadronic states, a copious excitation of resonances in a hot hadronic gas eventually implies the breakdown of the hadronic picture; instead, a description in terms of quarks and gluons is expected to arise naturally owing to asymptotic freedom. In the transition region between these asymptotic descriptions, effective degrees of freedom, such as order parameters for the chiral or deconfining phase transition, may characterize the physical properties in simple terms, i.e., with a simple effective action [2]. Recently, the notion of a strongly interacting high-temperature plasma phase has attracted much attention [3], implying that any generic choice of degrees of freedom will not lead to a weakly coupled description. In fact, it is natural to expect that the low-energy modes of the thermal spectrum still remain strongly coupled even above the phase transition. If so, a formulation with microscopic degrees of freedom from first principles should serve as the most powerful and flexible approach to a quantitative understanding of the system for a wide parameter range. In this microscopic formulation, an expansion in the coupling constant is a natural first step [4]. The structure of this expansion turns out to be theoretically involved [5], exhibiting a slow convergence behavior [6] and requiring coefficients of nonperturbativeorigin [7]. Still, a physically well-understood computational scheme can be constructed with the aid of effective-field theory methods [8]. This facilitates a systematic determination of expansion coefficients, and the agreement with lattice simulations is often surprisingly good down to temperatures close to \\(T_{\\rm cr}\\)[9]. The phase-transition region and the deep IR, however, remain inaccessible with such an expansion. In the present work, we use a different expansion scheme to study finite-temperature Yang-Mills theory and QCD in terms of microscopic variables, i.e., gluons and quarks. This scheme is based on a systematic and consistent operator expansion of the effective action which is inherently nonperturbative in the coupling. For bridging the scales from weak to strong coupling, we use the functional renormalization group (RG) [10; 11; 12] which is particularly powerful for analyzing phase transitions and critical phenomena. Since we do not expect that microscopic variables can answer all relevant questions in a simple fashion, we concentrate on two accessible problems. In the first part, we focus on the running of the gauge coupling driven by quantum as well as thermal fluctuations of pure gluodynamics. Our findings generalize similar previous zero-temperature studies to arbitrary values of the temperature [13]. In the second part, we employ this result for an investigation of the induced quark dynamics including its back-reactions on gluodynamics, in order to monitor the status of chiral symmetry at finite temperature. This strategy facilitates a computation of the critical temperature above which chiral symmetry is restored. Generalizing the system to an arbitrary number of quark flavors, we explore the phase boundary in the plane of temperature and flavor number. First results of our investigation have already been presented in [14]. 
In the present work, we detail our approach and generalize our findings. We also report on results for the gauge group SU(2), develop the formalism further for finite quark masses, and perform a stability analysis of our results. Moreover, we gain a simple analytical understanding of one of our most important results: the shape of the chiral phase boundary in the \\((T,N_{\\rm f})\\) plane. Whereas fermionic screening is the dominating mechanism for small \\(N_{\\rm f}\\), we observe an intriguing relation between the \\(N_{\\rm f}\\) scaling of the critical temperature near the critical flavor number and the zero-temperature IR critical exponent of the running gauge coupling. This relation connects two different universal quantities with each other and, thus, represents a generic testable prediction of the phase-transition scenario, arising from our truncated RG flow. In Sect. 2, we summarize the technique of RG flow equations in the background-field gauge, which we use for the construction of a gauge-invariant flow. In Sect. 3, we discuss the details of our truncation in the gluonic sector and evaluate the running gauge coupling at zero and finite temperature. Quark degrees of freedom are included in Sect. 4 and the general mechanisms of chiral quark dynamics supported by our truncated RG flow is elucidated. Our findings for the chiral phase transition are presented in Sect. 5, our conclusions and a critical assessment of our results are given in Sect. 6. RG flow equation in background-field gauge As an alternative to the functional-integral definition of quantum field theory, we use a differential formulation provided by the functional RG [10, 11, 12]. In this approach, flow equations for general correlation functions can be constructed [15]. A convenient version is given by the flow equation for the effective average action \\(\\Gamma_{k}\\) which interpolates between the bare action \\(\\Gamma_{k=\\Lambda}=S\\) and the full quantum effective action \\(\\Gamma=\\Gamma_{k=0}\\)[11]. The latter corresponds to the generator of fully-dressed proper vertices. Aiming at gluodynamics, a gauge-invariant flow can be constructed with the aid of the background-field formalism [16], yielding the flow equation [17] \\[k\\,\\partial_{k}\\Gamma_{k}[A,\\bar{A}]\\equiv\\partial_{t}\\Gamma_{k}[A,\\bar{A}]= \\frac{1}{2}\\mbox{STr}\\frac{\\partial_{t}R_{k}(\\Gamma_{k}^{(2)}[\\bar{A},\\bar{A}] )}{\\Gamma_{k}^{(2)}[A,\\bar{A}]+R_{k}(\\Gamma_{k}^{(2)}[\\bar{A},\\bar{A}])},\\quad t =\\ln\\frac{k}{\\Lambda}. \\tag{1}\\] Here, \\(\\Gamma_{k}^{(2)}\\) denotes the second functional derivative with respect to the fluctuating field \\(A\\), whereas the background-field denoted by \\(\\bar{A}\\) remains purely classical. The ghost fields are not displayed here and in the following for brevity, but the super-trace also includes a trace over the ghost sector with the corresponding minus sign. The regulator \\(R_{k}\\) in the denominator suppresses infrared (IR) modes below the scale \\(k\\), and its derivative \\(k\\partial_{k}R_{k}\\) ensures ultraviolet (UV) finiteness; as a consequence, the flow of \\(\\Gamma_{k}\\) is dominated by fluctuations with momenta \\(p^{2}\\simeq k^{2}\\), implementing the concept of smooth momentum-shell integrations. The background-field formalism allows for a convenient definition of a gauge-invariant effective action obtained by a gauge-fixed calculation [16]. 
For this, an auxiliary symmetry in the form of gauge-like transformations of the background field \\(\\bar{A}\\) is constructed which remains manifestly preserved during the calculation. Identifying the background field with the expectation value \\(A\\) of the fluctuating field at the end of the calculation, \\(A=\\bar{A}\\), the quantum effective action \\(\\Gamma\\) inherits the symmetry properties of the background field and thus is gauge invariant, \\(\\Gamma[A]=\\Gamma[A,\\bar{A}=A]\\). The background-field method for flow equations has been presented in [17]: the gauge fixing together with the regularization lead to gauge constraints for the effective action, resulting in regulator-modified Ward-Takahashi identities [18, 19], see also [20, 15]. In this work, we solve the flow approximately, following the strategy developed in [18, 13]. The property of manifest gauge invariance of the solution is still maintained by the approximation of setting \\(A=\\bar{A}\\) already for finite values of \\(k\\). Thereby, we neglect the difference between the RG flows of the fluctuating and the background field (see [21] for a treatment of this difference). The price to be paid for this approximation is that the flow is no longer _closed_[22]; i.e., information required for the next RG step is not completely provided by the preceding step. Moreover, this approximation satisfies some but not all constraints imposed by the regulator-modified Ward-Takahashi identities (mWTI). Here we assume that both the information loss and the corrections due to the mWTI are quantiatively negligible for the final result. The advantage of the approximation of using \\(\\Gamma_{k}[A,\\bar{A}=A]\\) for all \\(k\\) is that we obtain a gauge-invariant approximate solution of the quantum theory. In the present work, we optimize our truncated flow by inserting the background-field dependent \\(\\Gamma^{(2)}\\) into the regulator in Eq. (1). This adjusts the regularization to the spectral flow of the fluctuations [13, 22]; it also implies a significant improvement, since larger classes of diagrams can be resummed in the present truncation scheme. As another advantage, the background-field method together with the identification \\(A=\\bar{A}\\) for all \\(k\\) allows to bring the flow equation into a propertime form [13, 22, 25] which generalizes standard propertime flows [29]; the latter have often successfully be used for low-energy QCD models [30]. For this, we use a regulator \\(R_{k}\\) of the form \\[R_{k}(x)=xr(y),\\quad y:=\\frac{x}{\\mathcal{Z}_{k}k^{2}}\\,, \\tag{2}\\] with \\(r(y)\\) being a dimensionless regulator shape function of dimensionless argument. Here \\(\\mathcal{Z}_{k}\\) denotes a wave-function renormalization. Note that both \\(R_{k}\\) and \\(\\mathcal{Z}_{k}\\) are matrix-valued in field space. A natural choice for the matrix entries of \\(\\mathcal{Z}_{k}\\) is given by the wave function renormalizations of the corresponding fields, since this establishes manifest RG invariance of the flow equation.2 More properties of the regulator are summarized in Appendix A. Identifying the background field and the fluctuation field, the flow equation yields Footnote 2: For recent advances of an alternative approach which is based on a manifestly gauge invariant regulator, see [23]. Further proposals for thermal gauge-invariant flows can be found in [24]. 
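To make the auxiliary functions concrete, consider the exponential shape function \\(r(y)=1/(e^{y}-1)\\), a common choice in the literature which we use here purely for illustration; it need not coincide with the regulator employed for the numerical results of this work. For this choice, Eqs. (6) and (7) give the closed forms \\(h(y)=y/(e^{y}-1)\\) and \\(g(y)=e^{-y}\\), so that the Laplace representation of \\(g\\) is simply \\(\\tilde{g}(s)=\\delta(s-1)\\). The short sketch below verifies the closed forms numerically; all function names are ours.

```python
# Sketch: auxiliary functions h(y) and g(y) of Eqs. (6), (7) for the exponential
# regulator shape function r(y) = 1/(e^y - 1) (an illustrative choice, not
# necessarily the one used for the numerical results of the paper).
import numpy as np

def r(y):
    return 1.0 / np.expm1(y)              # expm1(y) = e^y - 1, stable for small y

def r_prime(y):
    return -np.exp(y) / np.expm1(y)**2

def h(y):
    return -y * r_prime(y) / (1.0 + r(y))  # definition in Eq. (6)

def g(y):
    return r(y) / (1.0 + r(y))             # definition in Eq. (7)

y = np.linspace(0.01, 20.0, 200)
assert np.allclose(h(y), y / np.expm1(y), rtol=1e-12)   # h(y) = y/(e^y - 1)
assert np.allclose(g(y), np.exp(-y), rtol=1e-12)        # g(y) = exp(-y)
print("small-y (deep IR) limit: h, g ->", h(0.01), g(0.01))   # both approach 1
print("large-y (UV) limit:      h, g ->", h(20.0), g(20.0))   # both approach 0
```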
Footnote 2: For the longitudinal gluon components, this implies that the matrix entry \\((\\mathcal{Z}_{k})_{\\rm LL}\\) is proportional to the inverse gauge-fixing parameter \\(\\xi\\). As a result, this renders the truncated flow independent of \\(\\xi\\), and we can implicitly choose the Landau gauge \\(\\xi\\equiv 0\\) which is known to be an RG fixed point [31, 32]. \\[\\partial_{t}\\Gamma_{k}[A\\!=\\!\\bar{A},\\bar{A}]=\\frac{1}{2}{\\rm STr}\\partial_{t }R_{k}(\\Gamma^{(2)}_{k})[\\Gamma^{(2)}_{k}\\!+\\!R_{k}]^{-1}=\\frac{1}{2}\\!\\int_{ 0}^{\\infty}ds\\,{\\rm STr}\\hat{f}(s,\\eta_{\\mathcal{Z}})\\exp\\big{(}-\\frac{s}{k^{2 }}\\Gamma^{(2)}_{k}\\big{)}\\,. \\tag{3}\\] Here, we have introduced the (matrix-valued) anomalous dimension \\[\\eta_{\\mathcal{Z}}:=-\\partial_{t}\\ln\\mathcal{Z}_{k}=-\\frac{1}{\\mathcal{Z}_{k} }\\partial_{t}\\mathcal{Z}_{k}. \\tag{4}\\] The operator \\(\\hat{f}(s,\\eta_{\\mathcal{Z}})\\) represents the translation of the regulator \\(R_{k}\\) into propertime space given by \\[\\hat{f}(s,\\eta_{\\mathcal{Z}})=\\tilde{g}(s)(2-\\eta_{\\mathcal{Z}})+(\\tilde{H}(s )-\\tilde{G}(s))\\frac{1}{s}\\partial_{t}. \\tag{5}\\] The auxiliary functions on the RHS are related to the regulator shape function \\(r(y)\\) by Laplace transformation: \\[h(y) = \\frac{-yr^{\\prime}(y)}{1+r(y)},\\quad h(y)=\\int_{0}^{\\infty}ds\\, \\tilde{h}(s){\\rm e}^{-ys},\\quad\\frac{d}{ds}\\tilde{H}(s)=\\tilde{h}(s)\\,,\\quad \\tilde{H}(0)=0, \\tag{6}\\] \\[g(y) = \\frac{r(y)}{1+r(y)},\\quad g(y)=\\int_{0}^{\\infty}ds\\,\\tilde{g}(s) {\\rm e}^{-ys},\\quad\\frac{d}{ds}\\tilde{G}(s)=\\tilde{g}(s),\\quad\\tilde{G}(0)=0. \\tag{7}\\]So far, we have discussed pure gauge theory. Quark fields with a mass matrix \\(M_{\\bar{\\psi}\\psi}\\) can similarly be treated within our framework. For this, we use a regulator \\(R_{k}^{\\psi}\\) of the form [26] \\[R_{k}^{\\psi}({\\rm i}\\bar{D})=Z_{\\psi}{\\rm i}\\bar{D}\\,r_{\\psi}\\Big{(}\\frac{({\\rm i }\\bar{D})^{2}}{k^{2}}\\Big{)}\\,, \\tag{8}\\] where \\(\\bar{D}\\) is a short-hand notation for \\(\\partial-i\\bar{g}\\bar{A}\\). Note that the quark fields live in the fundamental representation. This form of the fermionic regulator is chirally symmetric as well as invariant under background-field transformations. For later purposes, let us list the quark-fluctuation contributions to the gluonic sector; the flow of \\(\\Gamma_{k}[\\bar{A}]\\) induced by quarks with the regulator (8) can also be written in propertime form, \\[\\partial_{t}\\Gamma_{k}[\\bar{A}]\\big{|}_{\\psi}=-{\\rm Tr}\\partial_{t}R_{k}^{\\psi }({\\rm i}\\bar{D})[\\Gamma_{k}^{(2)}+R_{k}]_{\\psi}^{-1}=-\\int_{0}^{\\infty}ds\\,{ \\rm Tr}\\hat{f}_{\\psi}(s,\\eta_{\\psi},\\tfrac{M_{\\bar{\\psi}\\psi}}{k})\\exp\\left(- \\frac{s}{k^{2}}({\\rm i}\\bar{D})^{2}\\right), \\tag{9}\\] with \\([\\Gamma_{k}^{(2)}+R_{k}]_{\\psi}^{-1}\\) denoting the exact (regularized) quark propagator in the background field. In Eq. (9), we have introduced the anomalous dimension of the quark field, \\[\\eta_{\\psi}:=-\\partial_{t}\\ln Z_{\\psi}. \\tag{10}\\] In complete analogy to the gauge sector, we define the operator \\(\\hat{f}_{\\psi}(s,\\eta_{\\psi},\\tilde{m})\\) by \\[\\hat{f}_{\\psi}(s,\\eta_{\\psi},\\tilde{m})=\\tilde{g}^{\\psi}(s,\\tilde{m})(1-\\eta_ {\\psi})+(\\tilde{H}^{\\psi}(s,\\tilde{m})-\\tilde{G}^{\\psi}(s,\\tilde{m}))\\frac{1} {2s}\\partial_{t}\\,. 
\\tag{11}\\] The regulator shape function \\(r_{\\psi}(y)\\) is related to the auxiliary functions appearing in the definition of the operator \\(\\hat{f}_{\\psi}(s,\\eta_{\\psi},\\tilde{m})\\) as follows \\[h^{\\psi}(y,\\tilde{m}) = \\frac{-2y^{2}r_{\\psi}^{\\prime}(1+r_{\\psi})}{y(1+r_{\\psi})^{2}+ \\tilde{m}^{2}}\\,,\\quad g^{\\psi}(y,\\tilde{m})=\\frac{yr_{\\psi}(1+r_{\\psi})}{y( 1+r_{\\psi})^{2}+\\tilde{m}^{2}}\\,, \\tag{12}\\] \\[h^{\\psi}(y,\\tilde{m}) = \\int_{0}^{\\infty}ds\\,\\tilde{h}^{\\psi}(s,\\tilde{m}){\\rm e}^{-ys} \\quad\\frac{d}{ds}\\tilde{H}^{\\psi}(s,\\tilde{m})=\\tilde{h}^{\\psi}(s,\\tilde{m}),\\quad\\tilde{H}^{\\psi}(0,\\tilde{m})=0. \\tag{13}\\] The corresponding functions \\(g^{\\psi}(y,\\tilde{m})\\), \\(\\tilde{g}^{\\psi}(s,\\tilde{m})\\), and \\(\\tilde{G}^{\\psi}(s,\\tilde{M})\\) are related to each other analogously to Eq. (13). The present construction facilitates a simple inclusion of finite quark masses without complicating the convenient (generalized) propertime form of the flow equation. To summarize: the functional traces in Eq. (3) and (9) can now be evaluated, for instance, with powerful heat-kernel techniques, and all details of the regularization are encoded in the auxiliary functions \\(h,g\\,\\)etc. Equations (3) and (9) now serve as the starting point for our investigation of the gluon sector. The flow of quark-field dependent parts of the effective action proceeds in a standard fashion [27], see [28] for reviews; in particular, a propertime representation is not needed for the truncation in the quark sector described below. RG flow of the running coupling at finite temperature At first sight, the running coupling does not seem to be a useful quantity in the nonperturbative domain, since it is RG-scheme and strongly definition dependent. Therefore, we cannot a priori associate a universal meaning to the coupling flow, but have to use and interpret it always in the light of its definition and RG scheme. In fact, the background-field formalism provides for a simple nonperturbative definition of the running coupling in terms of the background-field wave function renormalization \\(Z_{k}\\). This is based on the nonrenormalization property of the product of coupling and background gauge field, \\(\\bar{g}\\bar{A}\\)[16]. The running-coupling \\(\\beta_{g^{2}}\\) function is thus related to the anomalous dimension of the background field (cf. Eq. (19) below), \\[\\beta_{g^{2}}\\equiv\\partial_{t}g^{2}=(d-4+\\eta)g^{2},\\quad\\eta=-\\frac{1}{Z_{k }}\\partial_{t}Z_{k}, \\tag{14}\\] where we have kept the spacetime dimension \\(d\\) arbitrary. Since the background field can naturally be associated with the vacuum of gluodynamics, we may interpret our coupling as the response strength of the vacuum to color-charged perturbations. ### Truncated RG flow Owing to strong coupling, we cannot expect that low-energy gluodynamics can be described by a small number of gluonic operators. On the contrary, infinitely many operators become RG relevant and will in turn drive the running of the coupling. Following the strategy developed in [18], we span a truncated space of effective action functionals by the ansatz \\[\\Gamma_{k}=\\Gamma_{k}^{\\rm YM}[A,\\bar{A}]+\\Gamma_{k}^{\\rm gf}[A,\\bar{A}]+ \\Gamma_{k}^{\\rm gh}[A,\\bar{A},\\bar{c},c]+\\Gamma_{k}^{\\rm quark}[A,\\bar{A}, \\bar{\\psi},\\psi]. 
\\tag{15}\\] Here, \\(\\Gamma^{\\rm gf}\\) and \\(\\Gamma^{\\rm gh}\\) represent generalized gauge-fixing and ghost contributions, which we assume to be well approximated by their classical form in the present work, \\[\\Gamma_{k}^{\\rm gf}[A,\\bar{A}]=\\frac{1}{2\\xi}\\int_{x}(D_{\\mu}[\\bar{A}](A-\\bar {A})_{\\mu})^{2},\\quad\\Gamma_{k}^{\\rm gh}[A,\\bar{A},\\bar{c},c]=-\\int_{x}\\bar{c }D_{\\mu}[\\bar{A}]D_{\\mu}[A]c,\\quad D[A]=\\partial-{\\rm i}\\bar{g}A, \\tag{16}\\] neglecting any non-trivial running in these sectors. Here, \\(\\bar{g}\\) denotes the bare coupling, and the gauge field lives in the adjoint representation, \\(A_{\\mu}=A_{\\mu}^{c}T^{c}\\), with hermitean gauge-group generators \\(T^{c}\\). The gluonic part \\(\\Gamma_{k}^{\\rm YM}\\) carries the desired physical information about the quantum theory that can be gauge-invariantly extracted in the limit \\(\\Gamma_{k}^{\\rm YM}[A]=\\Gamma_{k}^{\\rm YM}[A,\\bar{A}\\ =\\ A]\\). The quark contributions are contained in \\[\\Gamma_{k}^{\\psi}[A,\\bar{\\psi},\\psi]=\\int_{x}\\bar{\\psi}({\\rm i}{\\cal D}[A]+M_ {\\bar{\\psi}\\psi})\\psi+\\Gamma_{k}^{\\rm q\\,int}[\\bar{\\psi},\\psi], \\tag{17}\\] where \\(M_{\\bar{\\psi}\\psi}\\) denotes the quark mass matrix, and the quarks transform under the fundamental representation of the gauge group. The last term \\(\\Gamma_{k}^{\\rm q\\,int}[\\bar{\\psi},\\psi]\\) denotes our ansatzfor gluon-induced quark self-interactions to be discussed in Sect. 4. In Eq. (17), we have already set the quark wave function renormalization to \\(Z_{\\psi}=1\\), which is a combined consequence of the Landau gauge and our later choice for \\(\\Gamma_{k}^{\\rm q\\cdot int}[\\bar{\\psi},\\psi]\\). An infinite but still tractable set of gauge-field operators is given by the nontrivial part of our gluonic truncation, \\[\\Gamma_{k}^{\\rm YM}[A]=\\int_{x}{\\cal W}_{k}(\\theta),\\quad\\theta=\\frac{1}{4}F_ {\\mu\ u}^{a}F_{\\mu\ u}^{a}. \\tag{18}\\] Expanding the function \\({\\cal W}(\\theta)=W_{1}\\theta+\\frac{1}{2}W_{2}\\theta^{2}+\\frac{1}{3!}W_{3} \\theta^{3}\\dots\\), the expansion coefficients \\(W_{i}\\) denote an infinite set of generalized couplings. Here, \\(W_{1}\\) is identical to the desired background-field wave function renormalization, \\(Z_{k}\\equiv W_{1}\\), defining the running of the coupling, \\[g^{2}=k^{d-4}Z_{k}^{-1}\\bar{g}^{2}, \\tag{19}\\] which Eq. (14) is a consequence of. This truncation corresponds to a gradient expansion in the field strength, neglecting higher-derivative terms and more complicated color and Lorentz structures. In this way, the truncation includes arbitrarily high gluonic correlators projected onto their small-momentum limit and onto the particular color and Lorentz structure arising from powers of \\(F^{2}\\). In our truncation, the running of the coupling is successively driven by all generalized couplings \\(W_{i}\\). It is convenient to express the flow equation in terms of dimensionless renormalized quantities \\[\\vartheta = g^{2}k^{-d}Z_{k}^{-1}\\theta\\equiv k^{-4}\\bar{g}^{2}\\theta, \\tag{20}\\] \\[w(\\vartheta) = g^{2}k^{-d}{\\cal W}_{k}(\\theta)\\equiv k^{-4}Z_{k}^{-1}\\bar{g}^{ 2}{\\cal W}_{k}(k^{4}\\vartheta/\\bar{g}^{2}). \\tag{21}\\] Inserting Eq. (15) into Eq. 
(3), we obtain the flow equation for \\(w(\\vartheta)\\): \\[\\partial_{t}w=-(4-\\eta)w+4\\vartheta\\dot{w}+\\frac{g^{2}}{2(4\\pi)^ {\\frac{d}{2}}}\\int_{0}^{\\infty}\\!ds\\,\\Bigg{\\{}-16\\sum_{i=1}^{N_{c}}\\sum_{\\xi =1}^{N_{\\rm f}}\\tilde{h}^{\\psi}(s,\\tfrac{m_{\\xi}}{k})f_{T}^{\\psi}(s,\\tfrac{T} {k})f^{\\psi}(sb_{i})b_{i}^{e_{d}}\\] \\[\\qquad\\qquad\\qquad+\\tilde{h}(s)\\Bigg{[}4\\sum_{l=1}^{N_{c}^{2}-1 }\\Big{(}f_{T}^{A}(s\\dot{w},\\tfrac{T}{k})f_{1}^{A}(s\\dot{w}\\,b_{l})-f_{T}^{A}( s,\\tfrac{T}{k})f_{2}^{A}(sb_{l})\\Big{)}b_{l}^{e_{d}}\\] \\[-2f_{T}^{A}(s\\dot{w},\\tfrac{T}{k})f_{3}^{A}(s\\dot{w},\\frac{\\dot{w }}{\\dot{w}+2\\vartheta\\ \\dot{w}})\\Bigg{]}-\\Big{(}\\eta\\tilde{g}(s)+(\\tilde{h}(s)-\\tilde{g}(s))\\Big{(} \\frac{\\partial_{t}\\ \\dot{w}\\ -4\\vartheta\\ \\ddot{w}}{\\dot{w}}\\Big{)}\\Big{)}\\times\\] \\[\\qquad\\qquad\\times\\Bigg{[}2\\sum_{l=1}^{N_{c}^{2}-1}f_{T}^{A}(s \\dot{w},\\tfrac{T}{k})f_{1}^{A}(s\\dot{w}\\,b_{l})b_{l}^{e_{d}}-f_{T}^{A}(s\\dot{w },\\tfrac{T}{k})f_{3}^{A}(s\\dot{w},\\frac{\\dot{w}}{\\dot{w}+2\\vartheta\\ \\dot{w}})\\Bigg{]}\\] \\[-\\frac{2(\\tilde{h}(s)-\\tilde{g}(s))\\vartheta}{(\\dot{w}+2\\vartheta \\ \\ddot{w})^{2}}\\Big{(}\\ \\ddot{w}\\,\\partial_{t}\\dot{w}-\\dot{w}\\,\\partial_{t}\\ddot{w}+4\\dot{w}\\ddot{w} +4\\vartheta(w\\ddot{w}-\\ddot{w}^{2})\\Big{)}f_{4}^{A}(s\\dot{w},\\tfrac{T}{k}) \\Big{]}\\Bigg{\\}}\\,, \\tag{22}\\]where the auxiliary functions \\(f\\) are defined in App. A, and we have used the abbreviation \\(e_{d}=\\frac{d-1}{2}\\). The \"color magnetic\" field components \\(b_{i}\\) are defined by \\(b_{i}=|\ u_{i}|\\sqrt{2\\vartheta}\\), where \\(\ u_{i}\\) denotes eigenvalues of \\((n^{a}T^{a})\\) in the fundamental representation; correspondingly, \\(b_{l}\\) is equivalently defined for the adjoint representation. Furthermore, we have used the short-hand notation \\(w\\equiv w(\\vartheta)\\) and dots denote derivatives with respect to \\(\\vartheta\\). In order to extract the flow equation for the running coupling, we expand the function \\(w(\\vartheta)\\) in powers of \\(\\vartheta\\), \\[w(\\vartheta)=\\sum_{i=0}^{\\infty}\\frac{w_{i}}{i!}\\vartheta^{i}\\,,\\quad w_{1}=1. \\tag{23}\\] Note that \\(w_{1}\\) is fixed to \\(1\\) by definition (21). Inserting this expansion into Eq. (22), we obtain an infinite tower of first-order differential equations for the coefficients \\(w_{i}\\). In the present work, we concentrate on the running coupling and ignore the full form of the function \\(\\mathcal{W}\\); hence, we set \\(w_{i}\\to 0\\) for \\(i\\geq 2\\) on the RHS of the flow equation as a first approximation, but keep track of the flow of all coefficients \\(w_{i}\\). The resulting infinite tower of equations is of the form \\[\\partial_{t}w_{i}=X_{i}(g^{2},\\eta)+Y_{ij}(g^{2})\\partial_{t}w_{j}, \\tag{24}\\] with known functions \\(X_{i},Y_{ij}\\), the latter of which obeys \\(Y_{ij}=0\\) for \\(j>i+1\\). Note that we have not dropped the \\(w_{i}\\) flows, \\(\\partial_{t}w_{i}\\), which are a consequence of the spectral adjustment of the flow. This infinite set of equations can iteratively be solved, yielding the anomalous dimension as an infinite power series of \\(g^{2}\\) (for technical details, see [13, 33]), \\[\\eta=\\sum_{i=0}^{\\infty}a_{m}G^{m}\\quad\\text{with}\\quad G\\equiv\\frac{g^{2}}{2 (4\\pi)^{d/2}}\\,. \\tag{25}\\] The coefficients \\(a_{m}\\) can be worked out analytically; they depend on the gauge group, the number of quark flavors, their masses, the temperature and the regulator. 
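The iterative structure of Eq. (24) is easy to mimic numerically. The schematic Python snippet below truncates the tower at a finite order and solves \\(\\partial_{t}w=X+Y\\,\\partial_{t}w\\) both directly and by forward iteration; the arrays X and Y are invented placeholders for the functions \\(X_{i}(g^{2},\\eta)\\) and \\(Y_{ij}(g^{2})\\) of the text and only serve to illustrate the solution strategy, not the actual flow.

```python
import numpy as np

# Schematic illustration of the tower (24):  dt_w = X + Y @ dt_w  with  Y_ij = 0 for j > i+1.
# X and Y are invented placeholders for X_i(g^2, eta) and Y_ij(g^2); only the solution
# strategy is the point here.
N = 6                                             # truncation order for the coefficients w_i
rng = np.random.default_rng(1)
X = rng.normal(size=N)
Y = np.tril(0.1 * rng.normal(size=(N, N)), k=1)   # keeps only entries with j <= i+1

# direct solution of the linear system (1 - Y) dt_w = X ...
dt_w_direct = np.linalg.solve(np.eye(N) - Y, X)

# ... and the equivalent iterative solution obtained by re-inserting the current estimate
dt_w = np.zeros(N)
for _ in range(200):
    dt_w = X + Y @ dt_w

print(np.allclose(dt_w, dt_w_direct))             # True: both give the same flow of the w_i
```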
Equation (25) constitutes an asymptotic series, since the coefficients \\(a_{m}\\) grow at least factorially. This is no surprise, since the expansion (23) induces an expansion of the propertime integrals in Eq. (22) for which this is a well-understood property [34]. A good approximation of the underlying finite integral representation of Eq. (25) can be deduced from a Borel resummation including only the leading asymptotic growth of the \\(a_{m}\\), \\[\\eta\\simeq\\sum_{i=0}^{\\infty}a_{m}^{\\text{!.g.}}\\,G^{m}\\,. \\tag{26}\\] The leading growth coefficients are given by a sum of gluon/ghost and gluon-quark contributions, \\[a_{m}^{\\text{!.g.}}=4(-2c_{1})^{m-1}\\frac{\\Gamma(z_{d}+m)\\Gamma (m+1)}{\\Gamma(z_{d}+1)}\\Big{[}\\bar{h}_{2m-e_{d}}^{A}(\\tfrac{T}{k})(d-2)\\frac{ 2^{2m}-2}{(2m)!}\\tau_{m}^{A}B_{2m}\\\\ -\\frac{4}{\\Gamma(2m)}\\tau_{m}^{A}\\bar{h}_{2m-e_{d}}^{A}(\\tfrac{T}{ k})+4^{m+1}\\frac{B_{2m}}{(2m)!}\\tau_{m}^{\\psi}\\sum_{i=1}^{N_{\\text{f}}}\\bar{h}_{2m-e _{d}}^{\\psi}(\\tfrac{m_{i}}{k},\\tfrac{T}{k})\\Big{]}. \\tag{27}\\]The auxiliary functions \\(c_{1}\\), \\(c_{2}\\), \\(z_{d}\\) and the moments \\(\\bar{h}_{j}\\), \\(\\bar{h}_{j}^{\\psi}\\) are defined in App. A and B. The group theoretical factors \\(\\tau_{m}^{A}\\) and \\(\\tau_{m}^{\\psi}\\) are defined and discussed in App. C. The last term in the second line of Eq. (27) contains the quark contributions to the anomalous dimension. The remaining terms are of gluonic origin. The first term in the second line has to be treated with care, since it arises from the Nielsen-Olesen mode in the propagator [35] which is unstable in the IR. This mode occurs in the perturbative evaluation of gradient-expanded effective actions and signals the instability of chromo fields with large spatial correlation. At finite temperature, this problem is particularly severe, since such a mode will strongly be populated by thermal fluctuations, typically spoiling perturbative computations [36]. From the flow-equation perspective, this does not cause conceptual problems, since no assumption on large spatial correlations of the background field is needed, in contrast to the perturbative gradient expansion. For an expansion of the flow equation about the (unknown) true vacuum state, the regulated propagator would be positive definite, \\(\\Gamma_{k}^{(2)}+R_{k}>0\\) for \\(k>0\\). Even without knowing the true vacuum state, it is therefore a viable procedure to include only the positive part of the spectrum of \\(\\Gamma_{k}^{(2)}+R_{k}\\) in our truncation, since it is an exact operation for stable background fields. At zero temperature, these considerations are redundant, since the unstable mode merely creates imaginary parts that can easily be separated from the real coupling flow. At finite temperature, we only have to remove the unphysical thermal population of this mode which we do by a \\(T\\)-dependent regulator that screens the instability. As an unambiguous regularization, we include the Nielsen-Olesen mode for all \\(k\\geq T\\) as it is, dropping possible imaginary parts; for \\(k<T\\) we remove the Nielsen-Olesen mode completely, thereby inhibiting its thermal excitation. Of course, a smeared regularization of this mode is also possible, as discussed in App. D. Therein, the regularization used here is shown to be a point of \"minimum sensitivity\" [37] in a whole class of regulators. This supports our viewpoint that our regularization has the least contamination of unphysical thermal population of the Nielsen-Olesen mode. 
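The logic behind keeping only the leading asymptotic growth in Eq. (26) is that of a standard Borel resummation. As a self-contained toy example (with made-up coefficients, not those of Eq. (27)), the factorially divergent series \\(\\sum_{m}(-1)^{m}m!\\,G^{m}\\) possesses the Borel integral representation \\(\\int_{0}^{\\infty}dt\\,{\\rm e}^{-t}/(1+Gt)\\); the sketch compares truncated partial sums with this finite integral.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

# Toy asymptotic series: sum_m (-1)^m m! G^m with Borel sum  int_0^inf e^{-t}/(1 + G t) dt.
# This only illustrates the resummation logic used for Eq. (26); the actual coefficients
# a_m of Eq. (27) and their resummation are given in App. B.
G = 0.2

def partial_sum(order):
    return sum((-1) ** m * factorial(m) * G ** m for m in range(order + 1))

borel, _ = quad(lambda t: np.exp(-t) / (1.0 + G * t), 0.0, np.inf)

for order in (2, 4, 6, 8, 10, 14):
    print(f"partial sum up to m={order:2d}: {partial_sum(order):+.4f}")
print(f"Borel sum:              {borel:+.4f}")   # ~0.852; partial sums approach it, then diverge
```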
We outline the resummation of \\(\\eta\\) of Eq. (26) in App. B, yielding \\[\\eta=\\eta_{1}^{A}+\\eta_{2}^{A}+\\eta^{\\rm q}, \\tag{28}\\] with gluonic parts \\(\\eta_{1}^{A},\\eta_{2}^{A}\\) and the quark contribution to the gluon anomalous dimension \\(\\eta^{\\rm q}\\).3 Finite integral representations of these functions are given in Eqs. (B.17), (B.23), and (B.24). For pure gluodynamics, \\(\\eta_{1}^{A}\\) and \\(\\eta_{2}^{A}\\) carry the full information about the running coupling. Footnote 3: The contribution \\(\\eta^{\\rm q}\\) should not be confused with the quark anomalous dimension \\(\\eta_{\\psi}\\) which is zero in our truncation.

In Fig. 1, we show the result for the anomalous dimension \\(\\eta\\) as a function of \\(G=\\frac{\\alpha_{\\rm s}}{8\\pi}\\) for \\(N_{\\rm c}=3\\) and \\(N_{\\rm f}=3\\) in \\(d=4\\) dimensions. For pure gluodynamics (i.e. \\(N_{\\rm f}=0\\)), we find an IR stable fixed point for vanishing temperature, \\[\\alpha_{*}=[\\alpha_{*,8},\\alpha_{*,3}]\\approx[5.7,9.7], \\tag{29}\\] in agreement with the results found in [13]. The (theoretical) uncertainty is due to the fact that we have used a simple approximation for the exact color factors \\(\\tau_{j}^{A}\\) and \\(\\tau_{j}^{\\psi}\\), see App. C for details. This approximation introduces an artificial dependence on the color direction of the background field. The extremal cases of this dependence are given by the 3- and 8-direction in the Cartan sub-algebra, the results of which span the above interval for the IR fixed point. Even though this uncertainty is quantitatively large in the pure-glue case, it has little effect on the quantitative results for full QCD, see below.

The inclusion of light quarks yields a lower value for the infrared fixed point \\(\\alpha_{*}\\), as can be seen from Fig. 1. However, this lower fixed point will only be attained if quarks stay massless or light in the deep IR. If \\(\\chi\\)SB occurs, the quarks become massive and decouple from the flow, such that the system is expected to approach the pure-glue fixed point. In any case, we can read off from Fig. 1 that, already in the symmetric regime, the inclusion of quarks leads to a smaller coupling \\(\\alpha_{s}\\) for scales \\(k>k_{\\chi SB}\\), as compared to the coupling of a pure gluonic system.

Figure 1: Anomalous dimension \\(\\eta\\) as a function of \\(G=\\frac{\\alpha_{s}}{8\\pi}\\) for \\(4d\\) \\(SU(N_{\\rm c}=3)\\) theory with \\(N_{\\rm f}=3\\) massless quark flavors at vanishing temperature. The gluonic parts \\(\\eta_{1}^{A},\\eta_{2}^{A}\\) and the quark part \\(\\eta^{\\rm q}\\) contributing to the anomalous dimension \\(\\eta\\) (thick black line) are shown separately. The gluonic parts \\(\\eta_{1}^{A}\\) and \\(\\eta_{2}^{A}\\) agree with the results found in [13]. The figure shows the results from a calculation with a background field pointing into the 8-direction in color space.

### Running-coupling results

For quantitative results on the running coupling, we confine ourselves to \\(d=4\\) dimensions and to the gauge groups SU(2) and SU(3). Of course, results for arbitrary \\(d>2\\) and other gauge groups can straightforwardly be obtained from our general expressions in App. B.4 Footnote 4: For instance, this offers a way to study nonperturbative renormalizability of QCD-like theories in extra dimensions as initiated in [33] for pure gauge theories. To this end, a quantitative evaluation of the coupling flow requires the specification of the regulator shape function \\(r(y)\\), cf. Eq. (2).
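For orientation, the auxiliary functions (6) and (7) take a very simple form for an exponential shape function. Assuming the common choice \\(r(y)=1/({\\rm e}^{y}-1)\\) (our assumption here, purely for illustration), one finds \\(h(y)=y/({\\rm e}^{y}-1)\\) and \\(g(y)={\\rm e}^{-y}\\), which the following short check verifies numerically; any other admissible \\(r(y)\\) can be processed in the same way.

```python
import numpy as np

# Exponential regulator shape function (assumed form, for illustration): r(y) = 1/(e^y - 1).
def r(y):
    return 1.0 / np.expm1(y)

def r_prime(y, eps=1e-6):
    return (r(y + eps) - r(y - eps)) / (2.0 * eps)   # numerical derivative r'(y)

y = np.linspace(0.1, 10.0, 50)

h = -y * r_prime(y) / (1.0 + r(y))   # Eq. (6)
g = r(y) / (1.0 + r(y))              # Eq. (7)

print(np.allclose(h, y / np.expm1(y), rtol=1e-4))   # True:  h(y) = y/(e^y - 1)
print(np.allclose(g, np.exp(-y)))                   # True:  g(y) = e^{-y}
```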
In order to make simple contact with measured values of the coupling, e.g., at the \\(Z\\) mass or the \\(\\tau\\) mass, it is advantageous to choose \\(r(y)\\) in correspondence with a regularization scheme for which the running of the coupling is sufficiently close to the standard \\(\\overline{\\rm MS}\\) running in the perturbative domain. Here, it is important to note that already the two-loop \\(\\beta_{g^{2}}\\) coefficient depends on the regulator, owing to both the truncation as well as the mass-dependent regularization scheme. As an example, we give the two-loop \\(\\beta_{g^{2}}\\) function calculated from Eq. (22) for QCD with \\(N_{\\rm c}\\) colors and \\(N_{\\rm f}\\) massless quark flavors in \\(d=4\\) dimensions: \\[\\beta(g^{2}) = -\\Bigg{(}\\frac{22}{3}\\bar{h}_{\\frac{1}{2}}^{A}N_{\\rm c}-\\frac{4}{3}\\bar{h}_{\\frac{1}{2}}^{\\psi}N_{\\rm f}\\Bigg{)}\\frac{g^{4}}{(4\\pi)^{2}}-\\Bigg{(}\\,\\frac{77N_{\\rm c}^{2}\\bar{h}_{\\frac{1}{2}}^{A}-14N_{\\rm c}N_{\\rm f}\\bar{h}_{\\frac{1}{2}}^{\\psi}}{3}\\,\\bar{g}_{\\frac{1}{2}}^{A}\\\\ \\quad-\\frac{12\\tau_{2}^{A}\\bar{h}_{\\frac{5}{2}}^{A}+N_{\\rm f}\\tau_{2}^{\\psi}\\bar{h}_{\\frac{5}{2}}^{\\psi}}{45}\\Big{(}3(N_{\\rm c}^{2}-1)(\\bar{h}_{-\\frac{3}{2}}^{A}-\\bar{g}_{-\\frac{3}{2}}^{A})+2(\\bar{H}_{0}^{A}-\\bar{G}_{0}^{A})\\Big{)}\\Bigg{)}\\frac{g^{6}}{(4\\pi)^{4}}+\\ldots \\tag{30}\\] The moments \\(\\bar{g}_{j}^{A/\\psi}\\), \\(\\bar{h}_{j}^{A/\\psi}\\), \\(\\bar{G}_{j}^{A}\\) and \\(\\bar{H}_{j}^{A}\\) are defined in App. A. They specify the regulator dependence of the loop terms and depend on \\(\\frac{T}{k}\\), as is visualized in Fig. 2. We observe that even the one-loop coefficient is regulator dependent at finite temperature, but universal and exact at zero temperature, as it should. The latter holds, since \\(\\bar{g}_{\\frac{1}{2}}^{A/\\psi}(\\frac{T}{k}=0)=1\\) and \\(\\bar{h}_{\\frac{1}{2}}^{A/\\psi}(\\frac{T}{k}=0)=1\\) for all admissible regulators.

Figure 2: Thermal moments as a function of \\(\\frac{T}{k}\\) for the exponential regulator. The moments \\(h_{i}^{A}\\) as well as \\(h_{i}^{\\psi}\\) are finite in the limit \\(\\frac{T}{k}\\to 0\\). The gluonic thermal moments \\(h_{i}^{A}\\) grow linearly for increasing \\(\\frac{T}{k}\\) due to the presence of a soft Matsubara mode, whereas the fermionic thermal moments \\(h_{i}^{\\psi}\\) are exponentially suppressed for \\(\\frac{T}{k}\\to\\infty\\).

Using the exponential regulator, we find \\(\\bar{h}^{A}_{-\\frac{3}{2}}(\\frac{T}{k}=0)=2\\zeta(3)\\), \\(\\bar{g}^{A}_{-\\frac{3}{2}}(\\frac{T}{k}=0)=1\\), \\(\\bar{h}^{A/\\psi}_{\\frac{5}{2}}(\\frac{T}{k}=0)=\\frac{1}{6}\\), \\(\\bar{G}^{A}_{0}(\\frac{T}{k}=0)=\\frac{1}{2}\\) and \\(\\bar{H}^{A}_{0}(\\frac{T}{k}=0)=\\zeta(3)\\) for the moments at zero temperature. Using the color factors \\(\\tau_{2}^{A}\\) and \\(\\tau_{2}^{\\psi}\\) from App. C, we compare our result to the perturbative two-loop result, \\[\\beta_{\\text{\\it pert.}}(g^{2})=-\\Bigg{(}\\frac{22}{3}N_{\\text{c}}-\\frac{4}{3}N_{\\text{f}}\\Bigg{)}\\frac{g^{4}}{(4\\pi)^{2}}-\\Bigg{(}\\frac{68N_{\\text{c}}^{3}+6N_{\\text{f}}-26N_{\\text{c}}^{2}N_{\\text{f}}}{3N_{\\text{c}}}\\Bigg{)}\\frac{g^{6}}{(4\\pi)^{4}}+\\ldots\\,, \\tag{31}\\] and find good agreement to within 99% for the two-loop coefficient for SU(2) and 95% for SU(3) pure gauge theory. Besides this compatibility with the standard \\(\\overline{\\text{MS}}\\) running, the exponential regulator is technically and numerically convenient.
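The qualitative behavior of the thermal moments shown in Fig. 2, a surviving soft bosonic Matsubara mode versus exponentially suppressed fermionic modes, can be illustrated with a schematic thermal sum. The Gaussian weight used below is merely a stand-in for a regulator insertion and is not one of the moments defined in App. A.

```python
import numpy as np

# Schematic thermal decoupling: bosonic Matsubara frequencies 2*pi*n*T retain a soft n = 0
# mode, fermionic ones (2n+1)*pi*T are all hard.  The Gaussian weight is a toy stand-in
# for a regulator insertion, not one of the moments of App. A.
def bosonic_sum(T_over_k, n_max=2000):
    n = np.arange(-n_max, n_max + 1)
    return np.sum(np.exp(-(2.0 * np.pi * n * T_over_k) ** 2))

def fermionic_sum(T_over_k, n_max=2000):
    n = np.arange(-n_max, n_max + 1)
    return np.sum(np.exp(-((2.0 * n + 1.0) * np.pi * T_over_k) ** 2))

for T_over_k in (0.1, 1.0, 3.0, 10.0):
    print(f"T/k = {T_over_k:5.1f}:  bosonic = {bosonic_sum(T_over_k):8.4f},  "
          f"fermionic = {fermionic_sum(T_over_k):.2e}")
# For T/k >> 1 the bosonic sum tends to 1 (only n = 0 survives), while the fermionic sum
# vanishes like exp(-(pi T/k)^2): the quarks decouple from the flow.
```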
The perturbative quality of the regulator is mandatory for a reliable estimate of absolute, i.e., dimensionful, scales of the final results. The present choice enables us to fix the running coupling to experimental input: as initial condition, we use the measured value of the coupling at the \\(\\tau\\) mass scale [38], \\(\\alpha_{\\text{s}}=0.322\\), which by RG evolution agrees with the world average of \\(\\alpha_{\\text{s}}\\) at the \\(Z\\) mass scale. We stress that no other parameter or scale is used as an input. The global behavior of the running coupling can be characterized in simple terms. Let us first concentrate on pure gluodynamics, setting \\(N_{\\text{f}}\\to 0\\) for a moment. At zero temperature, we rediscover the results of [13], exhibiting a standard perturbative behavior in the UV. In the IR, the coupling increases and approaches a stable fixed point \\(g_{*}^{2}\\) which is induced by a second zero of the \\(\\beta_{g^{2}}\\) function, see Fig. 3. The appearance of an IR fixed point in Yang-Mills theories is a well-investigated phenomenon also in the Landau gauge [39]. Here, the IR fixed point is a consequence of a tight link between the fully dressed gluon and ghost propagators at low momenta which is visible in a vertex expansion [40]. Most interestingly, this behavior is in accordance with the Kugo-Ojima and Gribov-Zwanziger confinement scenarios [41]. Even though the relation between the Landau-gauge and the background-gauge IR fixed point is not immediate, it is reassuring that the definition of the running coupling in both frameworks rests on a nonrenormalization property that arises from gauge invariance [42, 16]. Within the present mass-dependent RG scheme, the appearance of an IR fixed point is moreover compatible with the existence of a mass gap: once the scale \\(k\\) has dropped below the lowest physical state in the spectrum, the running of physically relevant couplings should freeze out, since no fluctuations are left to drive any further RG flow. Finally, IR fixed-point scenarios have successfully been applied also in phenomenological studies [43, 44, 45, 46, 47, 48]. At finite temperature, the small-coupling UV behavior remains unaffected for scales \\(k\\gg T\\) and agrees with the zero-temperature perturbative running as expected. Towards lower scales, the coupling increases until it develops a maximum near \\(k\\sim T\\). Below, the coupling decreases according to a powerlaw \\(g^{2}\\sim k/T\\), see Fig. 3. This behavior has a simple explanation: the wavelength of fluctuations with moment a \\(p^{2}<T^{2}\\) is larger than the extent of the compactified Euclidean time direction. Hence, these modes become effectively 3-dimensional and their limiting behavior is governed by the spatial \\(3d\\) Yang-Mills theory. As a nontrivial result, we observe the existence of a non-Gaussian IR fixed point also in the reduced \\(3d\\) theory, see also Sec. 3.3. By virtue of a straightforward matching between the \\(4d\\) and \\(3d\\) coupling, the observed powerlaw for the \\(4d\\) coupling is a direct consequence of the strong-coupling \\(3d\\) IR behavior, \\(g^{2}(k\\ll T)\\sim g_{3d,*}^{2}\\,k/T\\). Again, the observation of an IR fixed point in the \\(3d\\) theory agrees with recent results in the Landau gauge [49]. The \\(3d\\) IR fixed point and the perturbative UV behavior already qualitatively determine the momentum asymptotics of the running coupling. 
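The compatibility of the input value \\(\\alpha_{\\rm s}(m_{\\tau})=0.322\\) with the coupling measured at the \\(Z\\) mass can be checked with ordinary perturbative running. The sketch below integrates the two-loop \\(\\beta\\) function (31) upward in \\(k\\), switching from \\(N_{\\rm f}=4\\) to \\(N_{\\rm f}=5\\) at an assumed bottom-quark threshold of 4.2 GeV; matching details and higher loop orders are neglected, so the result is only expected to land near the world average \\(\\alpha_{\\rm s}(M_{Z})\\approx 0.118\\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-loop running of alpha_s from the tau mass to the Z mass, using the coefficients of Eq. (31).
def beta_alpha(t, alpha, Nc, Nf):
    b0 = 22.0 * Nc / 3.0 - 4.0 * Nf / 3.0
    b1 = (68.0 * Nc ** 3 + 6.0 * Nf - 26.0 * Nc ** 2 * Nf) / (3.0 * Nc)
    return -(b0 * alpha ** 2 / (4.0 * np.pi) + b1 * alpha ** 3 / (4.0 * np.pi) ** 2)

m_tau, m_b, M_Z = 1.777, 4.2, 91.19      # GeV; m_b = 4.2 GeV is an assumed threshold
alpha = 0.322                             # input: alpha_s(m_tau), cf. [38]

for k_lo, k_hi, Nf in [(m_tau, m_b, 4), (m_b, M_Z, 5)]:
    sol = solve_ivp(beta_alpha, (np.log(k_lo), np.log(k_hi)), [alpha],
                    args=(3, Nf), rtol=1e-8, atol=1e-10)
    alpha = sol.y[0, -1]

print(f"alpha_s(M_Z) ~ {alpha:.3f}")      # lands near the world average ~0.118
```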
Phenomenologically, the behavior of the coupling in the transition region near its maximum value is most important, which is quantitatively provided by the full \\(4d\\) finite-temperature flow equation. In addition to the shift of the position of the maximum with temperature, we observe a decrease of the maximum itself for increasing temperature. On average, the \\(4d\\) coupling gets weaker for higher temperature, in agreement with naive expectations. We emphasize, however, that this behavior results from a nontrivial interplay of various nonperturbative contributions.

Now, we turn to the effect of a finite number \\(N_{\\rm f}\\) of massless quark flavors. In Fig. 4, we show the running coupling \\(\\alpha_{s}\\) as a function of \\(k\\) for \\(T=100\\) MeV and for \\(N_{\\rm f}=0,\\ldots,10\\). At high scales \\(k\\gg T\\), the running of the coupling agrees with the zero-temperature running in the presence of \\(N_{\\rm f}\\) massless quark flavors. Towards lower scales, the coupling increases less strongly than the coupling of the corresponding SU(3) Yang-Mills theory, due to fermionic screening. At a scale \\(k\\sim T\\), the coupling reaches its maximum. Below this scale, the quarks decouple from the flow, since they only have hard Matsubara modes and, hence, the coupling universally approaches the result for pure Yang-Mills theory. Furthermore, we observe that, for an increasing number of quark flavors, the maximum of the coupling becomes smaller and moves towards lower scales. Both effects are due to the fact that the anomalous dimension \\(\\eta\\) becomes smaller for an increasing number of quark flavors.

Figure 3: Running SU(3) Yang-Mills coupling \\(\\alpha_{\\rm YM}(k,T)\\) as a function of \\(k\\) for \\(T=0,100,500\\) MeV compared to the one-loop running for vanishing temperature.

Again, we stress that the results for the coupling with dynamical quarks have not yet accounted for \\(\\chi\\)SB, where the quarks become massive and decouple from the flow. This will be discussed in the following sections. For temperatures or flavor numbers larger than the corresponding critical value for \\(\\chi\\)SB, our results so far should be trustworthy on all scales.

### Dimensionally reduced high-temperature limit

As discussed above, the running coupling for scales much lower than the temperature, \\(k\\ll T\\), is governed by the IR fixed point of the 3-dimensional theory. More quantitatively, we observe that the flow of the coupling is completely determined by \\(\\eta_{1}^{A}\\) for \\(\\frac{T}{k}\\gg 1\\); the quark contributions decouple from the flow in this limit, since they do not have a soft Matsubara mode. Therefore, we find an IR fixed point at finite temperature for the \\(4d\\) theory at \\(g^{2}=0\\). In the limit \\(\\frac{T}{k}\\gg 1\\), the anomalous dimension Eq. (28) is given by \\[\\eta(T\\gg k)\\approx\\eta_{1}^{A}(T\\gg k)=:\\eta_{1}^{\\infty}(g^{2},\\frac{T}{k})=\\bar{\\gamma}_{3d}\\,\\left(\\frac{T}{k}\\,g^{2}\\right)^{\\frac{5}{4}}, \\tag{32}\\] where \\(\\bar{\\gamma}_{3d}\\) is a number which depends on \\(N_{\\rm c}\\): \\[\\bar{\\gamma}_{3d}=\\frac{32\\zeta(\\frac{5}{2})(1-2\\sqrt{2})\\Gamma(\\frac{9}{4})\\Gamma(\\frac{5}{4}+z_{4}^{\\infty})\\sqrt[4]{c_{1}^{\\infty}}}{(4\\pi)^{4}\\Gamma(\\frac{3}{2})\\Gamma(z_{4}^{\\infty}+1)}N_{\\rm c}. \\tag{33}\\]

Figure 4: Running SU(3) coupling \\(\\alpha_{s}(k,T)\\) as a function of \\(k\\) for \\(T=100\\,\\)MeV for different numbers of quark flavors \\(N_{\\rm f}=0,1,2,\\ldots,10\\) (from top to bottom).
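The freeze-out of the dimensionless \\(3d\\) coupling can also be verified numerically before turning to the closed-form solution below. Assuming the high-temperature form (32) for the anomalous dimension with an illustrative value of \\(\\bar{\\gamma}_{3d}\\) (not the actual number (33)), the following sketch integrates \\(\\partial_{t}g^{2}=\\eta\\,g^{2}\\) toward the IR and checks that \\((T/k)\\,g^{2}\\) approaches the fixed-point value \\(\\bar{\\gamma}_{3d}^{-4/5}\\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# High-temperature limit:  d g^2/dt = eta g^2  with  eta = gamma3d * (T g^2/k)^(5/4),
# cf. Eqs. (14) and (32).  gamma3d = 0.05 is an illustrative number, not the value (33).
gamma3d = 0.05
T = 1.0                                   # temperature sets the only scale here

def flow(t, y):
    g2 = y[0]
    k = np.exp(t)                         # t = ln k, with k and T in the same arbitrary units
    eta = gamma3d * (T * g2 / k) ** 1.25
    return [eta * g2]

# integrate from k = T down to k = 1e-4 T
sol = solve_ivp(flow, (np.log(T), np.log(1e-4 * T)), [1.0], rtol=1e-9, atol=1e-12)

k = np.exp(sol.t[-1])
g2_3d = (T / k) * sol.y[0, -1]            # dimensionless 3d coupling
print(g2_3d, gamma3d ** (-0.8))           # both ~ 10.99: (T/k) g^2 freezes at gamma3d^(-4/5)
```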
For \\(k\\ll T\\), the coupling shows universal behavior, owing to the attraction of the pure-glue IR fixed point. We refer to App. B for the definition of the constants \\(z_{4}^{\\infty}\\) and \\(c_{1}^{\\infty}\\). In the high-temperature limit, we can solve the differential equation (14) for \\(g^{2}\\) analytically, \\[g^{2}\\Big{|}_{\\frac{T}{k}\\gg 1}=:g_{\\infty}^{2}(\\tfrac{k}{T})=\\frac{1}{(\\bar{ \\gamma}_{3d}(\\tfrac{T}{k})^{\\frac{5}{4}}-\\text{const.})^{\\frac{4}{5}}}\\,\\approx \\,\\bar{\\gamma}_{3d}^{-\\frac{4}{5}}\\,\\frac{k}{T}+\\mathcal{O}((\\tfrac{k}{T})^{2}). \\tag{34}\\] The RHS explains the shape of the running coupling for small \\(k/T\\) in Fig. 3. The factor \\(\\bar{\\gamma}_{3d}^{-\\frac{4}{5}}\\) is the fixed point value of the dimensionless \\(3d\\) coupling \\(g_{3d}^{2}\\) as can be seen from its relation to the dimensionless coupling \\(g^{2}\\) in four dimensions: \\[g_{3d}^{2}:=\\frac{T}{k}\\,g^{2}\\quad\\to g^{2}=\\frac{k}{T}\\,g_{3d}^{2}\\,. \\tag{35}\\] Comparing the right-hand side of Eq. (34) and (35), we find that the fixed point for \\(N_{\\text{c}}=3\\) in three dimensions is given by: \\[\\alpha_{*}^{3d}\\equiv\\frac{g_{3d,*}^{2}}{4\\pi}=[\\alpha_{*,8}^{3d},\\alpha_{*,3 }^{3d}]\\approx[2.70,2.77] \\tag{36}\\] Again, the uncertainty arises from our ignorance of the exact color factors \\(\\tau_{m}^{A}\\), see App. B and App. C. On the other hand, the fixed point of the 3d theory is determined by the zero of the corresponding \\(\\beta\\) function. In fact, \\(\\eta_{1}^{\\infty}(g^{2},\\tfrac{T}{k})\\) is identical to the 3d anomalous dimension \\(\\eta_{3d}(g_{3d}^{2})\\), as can be deduced from the pure \\(3d\\) theory, and we obtain \\[\\partial_{t}(\\tfrac{T}{k}g^{2})\\equiv\\partial_{t}g_{3d}^{2}=(\\eta_{3d}(g_{3d}^ {2})-1)g_{3d}^{2}, \\tag{37}\\] as suggested by Eq. (14). Since \\(\\eta_{3d}\\) is a monotonously increasing function, we find a \\(3d\\) IR fixed point for \\(g_{3d,*}^{2}=\\bar{\\gamma}_{3d}^{-\\frac{4}{5}}\\) which coincides with the result above. ## 4 Chiral quark dynamics Dynamical quarks influence the QCD flow by two qualitatively different mechanisms. First, quark fluctuations directly modify the running coupling as already discussed above; the nonperturbative contribution in the form of \\(\\eta^{\\text{q}}\\) in Eq. (28) accounts for the screening nature of fermionic fluctuations, generalizing the tendency that is already visible in perturbation theory. Second, gluon exchange between quarks induces quark self-interactions which can become relevant in the strongly coupled IR. Both the quark and the gluon sector feed back onto each other in an involved nonlinear fashion. In general, these nonlinearities have to be taken into account and are provided by the flow equation. However, we will argue that some intricate nonlinearities drop out or are negligible for locating the chiral phase boundary in a first approximation. Working solely in \\(d=4\\) from here on, let us now specify the last part of our truncation: the effective action of quark self-interactions \\(\\Gamma_{k}^{\\rm q\\,int}[\\bar{\\psi},\\psi]\\), introduced in Eq. (17). 
In a consistent and systematic operator expansion, the lowest nontrivial order is given by [52] \\[\\Gamma_{k} = \\int_{x}\\frac{1}{2}\\Big{[}\\bar{\\lambda}_{-}({\\rm V}\\!-\\!{\\rm A})+ \\bar{\\lambda}_{+}({\\rm V}\\!+\\!{\\rm A})+\\bar{\\lambda}_{\\sigma}({\\rm S}\\!-\\!{\\rm P })+\\bar{\\lambda}_{\\rm VA}[2({\\rm V}\\!-\\!{\\rm A})^{\\rm a\\,dj}\\!+(1/N_{\\rm c})({ \\rm V}\\!-\\!{\\rm A})]\\Big{]}. \\tag{38}\\] The four-fermion interactions occurring here have been classified according to their color and flavor structure. Color and flavor singlets are \\[({\\rm V}\\!-\\!{\\rm A}) = (\\bar{\\psi}\\gamma_{\\mu}\\psi)^{2}+(\\bar{\\psi}\\gamma_{\\mu}\\gamma_{5 }\\psi)^{2}, \\tag{39}\\] \\[({\\rm V}\\!+\\!{\\rm A}) = (\\bar{\\psi}\\gamma_{\\mu}\\psi)^{2}-(\\bar{\\psi}\\gamma_{\\mu}\\gamma_{5 }\\psi)^{2}, \\tag{40}\\] where (fundamental) color \\((i,j,\\dots)\\) and flavor \\((\\chi,\\xi,\\dots)\\) indices are contracted pairwise, e.g., \\((\\bar{\\psi}\\psi)\\equiv(\\bar{\\psi}^{\\chi}_{i}\\psi^{\\chi}_{i})\\). The remaining operators have non-singlet color or flavor structure, \\[({\\rm S}\\!-\\!{\\rm P}) = (\\bar{\\psi}^{\\chi}\\psi^{\\xi})^{2}-(\\bar{\\psi}^{\\chi}\\gamma_{5} \\psi^{\\xi})^{2}\\equiv(\\bar{\\psi}^{\\chi}_{i}\\psi^{\\xi}_{i})^{2}-(\\bar{\\psi}^{ \\chi}_{i}\\gamma_{5}\\psi^{\\xi}_{i})^{2},\\] \\[({\\rm V}\\!-\\!{\\rm A})^{\\rm a\\,dj} = (\\bar{\\psi}\\gamma_{\\mu}T^{a}\\psi)^{2}+(\\bar{\\psi}\\gamma_{\\mu} \\gamma_{5}T^{a}\\psi)^{2}, \\tag{41}\\] where \\((\\bar{\\psi}^{\\chi}\\psi^{\\xi})^{2}\\equiv\\bar{\\psi}\\chi\\psi^{\\xi}\\bar{\\psi}^{ \\xi}\\psi^{\\chi}\\), etc., and \\((T^{a})_{ij}\\) denotes the generators of the gauge group in the fundamental representation. The set of fermionic self-interactions introduced in Eq. (38) forms a complete basis. Any other pointlike four-fermion interaction which is invariant under \\({\\rm SU}(N_{\\rm c})\\) gauge symmetry and \\({\\rm SU}(N_{\\rm f})_{\\rm L}\\times{\\rm SU}(N_{\\rm f})_{\\rm R}\\) flavor symmetry is reducible by means of Fierz transformations. \\({\\rm U}_{\\rm A}(1)\\)-violating interactions are neglected, since we expect them to become relevant only inside the \\(\\chi\\)SB regime or for small \\(N_{\\rm f}\\); since the lowest-order \\({\\rm U}_{\\rm A}(1)\\)-violating term schematically is \\(\\sim(\\bar{\\psi}\\psi)^{N_{\\rm f}}\\), larger \\(N_{\\rm f}\\) correspond to larger RG \"irrelevance\" by naive power-counting. For \\(N_{\\rm f}=1\\), such a term is, of course, important, since it provides for a direct fermion mass term; in this case, the chiral transition is expected to be a crossover. Dropping the \\({\\rm U}_{\\rm A}(1)\\)-violating interactions, we thus confine ourselves to \\(N_{\\rm f}\\geq 2\\). We emphasize that the \\(\\bar{\\lambda}\\)'s are not considered as independent external parameters as, e.g., in the Nambu-Jona-Lasinio model. More precisely, we impose the boundary condition \\(\\bar{\\lambda}_{i}\\to 0\\) for \\(k\\to\\Lambda\\to\\infty\\) which guarantees that the \\(\\bar{\\lambda}\\)'s at \\(k<\\Lambda\\) are solely generated by quark-gluon dynamics, e.g., by 1PI \"box\" diagrams with 2-gluon exchange. As a severe approximation, we drop any nontrivial momentum dependencies of the \\(\\bar{\\lambda}\\)'s and study these couplings in the point-like limit \\(\\bar{\\lambda}(|p_{i}|\\ll k)\\). This inhibits a study of QCD properties in the chirally broken regime, since mesons, for instance, manifest themselves as momentum singularities in the \\(\\bar{\\lambda}\\)'s. 
Nevertheless, the point-like truncation can be a reasonable approximation in the chirally symmetric regime; this has recently been quantitatively confirmed for the zero-temperature chiral phase transition in many-flavor QCD [50], where the regulator independence of universal quantities has been shown to hold remarkably well even in this restrictive truncation. By adopting the same system at finite \\(T\\), we base our truncation on the assumption that quark dynamics both near the finite-\\(T\\) phase boundary as well as near the many-flavor phase boundary [51] is driven by qualitatively similar mechanisms. The resulting flow equations for the \\(\\bar{\\lambda}\\)'s are a straightforward generalization of those derived and analyzed in [52, 50] to the case of finite temperature. Introducing the dimensionless renormalized couplings \\[\\lambda_{i}=k^{2}\\bar{\\lambda}_{i}, \\tag{42}\\] (recall that \\(Z_{\\psi}=1\\) in our truncation), the flows of the quark interactions read \\[\\partial_{t}\\lambda_{-} = 2\\lambda_{-}\\!-4v_{4}l_{1,1}^{\\rm(FB)}\\left[\\frac{3}{N_{\\rm c}}g^ {2}\\lambda_{-}-3g^{2}\\lambda_{\\rm VA}\\right]-\\frac{1}{8}v_{4}l_{1,2}^{\\rm(FB)} \\left[\\frac{12+9N_{\\rm c}^{2}}{N_{\\rm c}^{2}}g^{4}\\right]\\] \\[-8v_{4}l_{1}^{\\rm(F)}\\Big{\\{}-N_{l}N_{\\rm c}(\\lambda_{-}^{2}+ \\lambda_{+}^{2})+\\lambda_{-}^{2}-2(N_{\\rm c}+N_{\\rm f})\\lambda_{-}\\lambda_{\\rm VA }+N_{\\rm f}\\lambda_{+}\\lambda_{\\sigma}+2\\lambda_{\\rm VA}^{2}\\Big{\\}},\\] \\[\\partial_{t}\\lambda_{+} = 2\\lambda_{+}\\!-4v_{4}l_{1,1}^{\\rm(FB)}\\left[-\\frac{3}{N_{\\rm c} }g^{2}\\lambda_{+}\\right]-\\frac{1}{8}v_{4}l_{1,2}^{\\rm(FB)}\\left[-\\frac{12+3N_{ \\rm c}^{2}}{N_{\\rm c}^{2}}g^{4}\\right]\\] \\[-8v_{4}l_{1}^{\\rm(F)}\\Big{\\{}-3\\lambda_{+}^{2}-2N_{\\rm c}N_{\\rm f }\\lambda_{-}\\lambda_{+}-2\\lambda_{+}(\\lambda_{-}+(N_{\\rm c}+N_{\\rm f})\\lambda_ {\\rm VA})+N_{\\rm f}\\lambda_{-}\\lambda_{\\sigma}\\] \\[\\qquad\\qquad+\\lambda_{\\rm VA}\\lambda_{\\sigma}+{ \\frac{1}{4}}\\lambda_{\\sigma}{}^{2}\\Big{\\}},\\] \\[\\partial_{t}\\lambda_{\\sigma} = 2\\lambda_{\\sigma}\\!-\\!4v_{4}l_{1,1}^{\\rm(FB)}\\left[6C_{2}(N_{ \\rm c})\\,g^{2}\\lambda_{\\sigma}-6g^{2}\\lambda_{+}\\right]-\\frac{1}{4}v_{4}l_{1,2 }^{\\rm(FB)}\\Big{[}-\\frac{24-9N_{\\rm c}^{2}}{N_{\\rm c}}\\,g^{4}\\Big{]}\\] \\[-8v_{4}l_{1}^{\\rm(F)}\\Big{\\{}2N_{\\rm c}\\lambda_{\\sigma}^{2}\\!-\\!2 \\lambda_{-}\\lambda_{\\sigma}\\!-\\!2N_{\\rm f}\\lambda_{\\sigma}\\lambda_{\\rm VA}\\!- \\!6\\lambda_{+}\\lambda_{\\sigma}\\Big{\\}},\\] \\[\\partial_{t}\\lambda_{\\rm VA} = 2\\lambda_{\\rm VA}\\!-\\!4v_{4}l_{1,1}^{\\rm(FB)}\\left[\\frac{3}{N_{ \\rm c}}g^{2}\\lambda_{\\rm VA}-3g^{2}\\lambda_{-}\\right]-\\frac{1}{8}v_{4}l_{1,2}^{ \\rm(FB)}\\left[-\\frac{24-3N_{\\rm c}^{2}}{N_{\\rm c}}g^{4}\\right]\\] \\[-8v_{4}l_{1}^{\\rm(F)}\\Big{\\{}-(N_{\\rm c}+N_{\\rm f})\\lambda_{\\rm VA }^{2}\\!+4\\lambda_{-}\\lambda_{\\rm VA}\\!-{\\frac{1}{4}}N_{\\rm f}\\lambda_{ \\sigma}^{2}\\Big{\\}}.\\] Here, \\(C_{2}(N_{\\rm c})=(N_{\\rm c}^{2}-1)/(2N_{\\rm c})\\) is a Casimir operator of the gauge group, and \\(v_{4}=1/(32\\pi^{2})\\). For better readability, we have written all gauge-coupling-dependent terms in square brackets, whereas fermionic self-interactions are grouped inside braces. The threshold functions \\(l_{1}^{\\rm(F)},l_{1,2}^{\\rm(FB)},l_{1,1}^{\\rm(FB)}\\) depend on the details of the regularization, see App. A; for zero quark mass and vanishing temperature, these functions reduce to simple positive numbers, see, e.g., Eqs. 
(A.26) and (A.29).5 For quark masses and temperature becoming larger than the regulator scale \\(k\\), these functions approach zero, which reflects the decoupling of massive modes from the flow. Footnote 5: Here, we ignore a weak dependence of the threshold functions on the anomalous quark and gluon dimensions which were shown to influence the quantitative results for the present system only on the percent level, if at all [50]. Within this set of degrees of freedom, a simple picture for the chiral dynamics arises: for vanishing gauge coupling, the flow is solved by vanishing \\(\\lambda_{i}\\)'s, which defines the Gaussian fixed point. This fixed point is IR attractive, implying that these self-interactions are RG irrelevant for sufficiently small bare couplings, as they should. At weak gauge coupling, the RG flow generates quark self-interactions of order \\(\\lambda\\sim g^{4}\\), as expected for a perturbative 1PI scattering amplitude. The back-reaction of these self-interactions on the total RG flow is negligible at weak coupling. If the gauge coupling in the IR remains smaller than a critical value \\(g<g_{\\rm cr}\\), the self-interactions remain bounded, approaching fixed points in the IR. These fixed points can simply be viewed as order-\\(g^{4}\\) shifted versions of the Gaussian fixed point, being modified by the gauge dynamics. At these fixed points, the fermionic subsystem remains in the chirally invariant phase which is indeed realized at high temperature. If the gauge coupling increases beyond the critical coupling \\(g>g_{\\rm cr}\\), the above-mentioned IR fixed points are destabilized and the quark self-interactions become critical. This can be visualized by the fact that \\(\\partial_{t}\\lambda_{i}\\) as a function of \\(\\lambda_{i}\\) is an everted parabola, see Fig. 5; for \\(g=g_{\\rm cr}\\), the parabola is pushed below the \\(\\lambda_{i}\\) axis, such that the (shifted) Gaussian fixed point annihilates with the second zero of the parabola. In this case, the gauge-fluctuation-induced \\(\\bar{\\lambda}\\)'s have become strong enough to contribute as relevant operators to the RG flow. These couplings now increase rapidly, approaching a divergence at a finite scale \\(k=k_{\\chi{\\rm SB}}\\). In fact, this seeming Landau-pole behavior indicates \\(\\chi{\\rm SB}\\) and, more specifically, the formation of chiral condensates. This is because the \\(\\bar{\\lambda}\\)'s are proportional to the inverse mass parameter of a Ginzburg-Landau effective potential for the order parameter in a (partially) bosonized formulation, \\(\\bar{\\lambda}\\sim 1/m^{2}\\). Thus, the scale at which the self-interactions formally diverge in our truncation is a good measure for the scale \\(k_{\\chi{\\rm SB}}\\) where the effective potential for the chiral order parameter becomes flat and is about to develop a nonzero vacuum expectation value. Whether or not chiral symmetry is preserved by the ground state therefore depends on the coupling strength of the system, more specifically, the value of the gauge coupling \\(g\\) relative to the critical coupling \\(g_{\\rm cr}\\) which is required to trigger \\(\\chi{\\rm SB}\\). 
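The fixed-point annihilation described here can be captured in a one-channel caricature, \\(\\partial_{t}\\lambda=2\\lambda-a\\lambda^{2}-b\\,g^{4}\\), with invented constants \\(a,b\\); this is not one of the flows (43)-(46), but it reproduces the mechanism: two fixed points \\(\\lambda_{\\pm}=(1\\pm\\sqrt{1-ab\\,g^{4}})/a\\) exist for \\(g<g_{\\rm cr}=(ab)^{-1/4}\\) and merge and disappear at \\(g=g_{\\rm cr}\\). The sketch integrates the flow toward the IR for sub- and supercritical coupling and locates the scale at which \\(\\lambda\\) runs away.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-channel caricature of the mechanism behind Eqs. (43)-(46):
#     d(lambda)/dt = 2*lam - a*lam**2 - b*g**4
# with invented O(1) constants a, b; only the fixed-point annihilation matters here.
a, b = 1.0, 1.0
g_cr = (a * b) ** (-0.25)        # fixed points lam_pm = (1 +- sqrt(1 - a b g^4))/a merge here

def flow(t, y, g):
    lam = y[0]
    return [2.0 * lam - a * lam ** 2 - b * g ** 4]

def runaway(t, y, g):            # stop once lambda gets huge: chiral symmetry breaking is signalled
    return y[0] - 1.0e6
runaway.terminal = True

for g in (0.8 * g_cr, 1.05 * g_cr):
    # flow towards the IR, t = ln(k/Lambda) decreasing from 0
    sol = solve_ivp(flow, (0.0, -12.0), [0.0], args=(g,), events=runaway,
                    rtol=1e-8, atol=1e-10, max_step=0.05)
    if sol.status == 1:          # terminated by the runaway event
        t_chi = sol.t_events[0][0]
        print(f"g = {g/g_cr:.2f} g_cr: lambda diverges at t = {t_chi:.2f}, "
              f"i.e. at k_chiSB = Lambda * exp({t_chi:.2f})")
    else:
        lam_fp = (1.0 - np.sqrt(1.0 - a * b * g ** 4)) / a
        print(f"g = {g/g_cr:.2f} g_cr: lambda -> {sol.y[0, -1]:.4f} "
              f"(shifted Gaussian fixed point at {lam_fp:.4f}); chiral symmetry intact")
```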
Incidentally, the critical coupling \\(g_{\\rm cr}\\) itself can be determined by algebraically solving the fixed-point equations \\(\\partial_{t}\\lambda_{i}(\\lambda_{*})=0\\) for that value of the coupling, \\(g=g_{\\rm cr}\\), where the shifted Gaussian fixed point is annihilated. For instance, at zero temperature, the SU(3) critical coupling for the quark system is \\(\\alpha_{\\rm cr}\\equiv g_{\\rm cr}^{2}/(4\\pi)\\simeq 0.8\\) [53], being only weakly dependent on the number of flavors [50].6 Since the IR fixed point for the gauge coupling is much larger, \\(\\alpha_{*}>\\alpha_{\\rm cr}\\) (for not too many massless flavors), the QCD vacuum is characterized by \\(\\chi\\)SB. The same qualitative observations have already been made in [54] in a similar though smaller truncation. The existence of such a critical coupling is also a well-studied phenomenon in Dyson-Schwinger equations [55]. Footnote 6: Of course, the critical coupling is a non-universal value depending on the regularization scheme; the value given here for illustration holds for a class of regulators in the functional RG scheme that includes the most widely used linear (“optimized”) and exponential regulators.

Figure 5: Sketch of a typical \\(\\beta\\) function for the fermionic self-interactions \\(\\lambda_{i}\\): at zero gauge coupling, \\(g=0\\) (upper solid curve), the Gaussian fixed point \\(\\lambda_{i}=0\\) is IR attractive. For small \\(g\\gtrsim 0\\) (middle/blue solid curve), the fixed-point positions are shifted on the order of \\(g^{4}\\). For gauge couplings larger than the critical coupling \\(g>g_{\\rm cr}\\) (lower/green solid curve), no fixed points remain and the self-interactions quickly grow large, signaling \\(\\chi{\\rm SB}\\). For increasing temperature, the parabolas become broader and higher, owing to thermal fermion masses; this is indicated by the dashed/red line.

As soon as the quark sector approaches criticality, its back-reaction onto the gluon sector also becomes sizable. Here, a subtlety of the present formalism becomes important: identifying the fluctuation field with the background field under the flow, our approximation generally does not distinguish between the flow of the background-field coupling and that of the fluctuation-field coupling. In our truncation, differences arise from the quark self-interactions. Whereas the running of the background-field coupling is always given by Eq. (14), the quark self-interactions can contribute directly to the running of the fluctuation-field coupling in the form of a "vertex correction" to the quark-gluon vertex. Since the fluctuation-field coupling is responsible for inducing quark self-interactions, this difference may become important. In [52], the relevant terms have been derived with the aid of a regulator-dependent Ward-Takahashi identity. The result hence implements an important gauge constraint, leading us to \\[\\partial_{t}g^{2} = \\eta\\,g^{2}-4v_{4}l_{1}^{\\rm(F)}\\,\\frac{g^{2}}{1-2v_{4}l_{1}^{\\rm(F)}\\sum c_{i}\\lambda_{i}}\\,\\partial_{t}\\sum c_{i}\\lambda_{i},\\quad c_{\\sigma}=1+N_{\\rm f},\\ c_{+}=0,\\ c_{-}=-2,\\ c_{\\rm VA}=-2N_{\\rm f}, \\tag{47}\\] with \\(\\eta\\) provided by Eq. (28) in our approximation. In principle, the approach to \\(\\chi\\)SB can now be studied by solving the coupled system of Eqs. (47) and (43)-(46).
However, a simpler and, for our purposes, sufficient estimate is provided by the following argument: if the system ends up in the chirally symmetric phase, the \\(\\lambda_{i}\\)'s always stay close to the shifted Gaussian fixed point discussed above; apart from a slight variation of this fixed-point position with increasing \\(g^{2}\\), the \\(\\partial_{t}\\lambda_{i}\\) flow is small and vanishes in the IR, \\(\\partial_{t}\\lambda_{i}\\to 0\\). Therefore, the additional terms in Eq. (4.2) are negligible for all \\(k\\) and drop out in the IR. As a result, the behavior of the running coupling in the chirally symmetric phase is basically determined by \\(\\eta\\) alone, as discussed in the preceding section. In other words, the difference between the fluctuation-field coupling and the background-field coupling automatically switches off in the deep IR in the symmetric phase in our truncation. Therefore, if the coupling as predicted by \\(\\beta_{g^{2}}\\simeq\\eta g^{2}\\) alone never increases beyond the critical value \\(g_{\\rm cr}^{2}\\) for any \\(k\\), the system is in the chirally symmetric phase. In this case, it suffices to solve the \\(g^{2}\\) flow and compare it with \\(g_{\\rm cr}^{2}\\) which can be deduced from a purely algebraic solution of the fixed-point equations, \\(\\partial_{t}\\lambda_{i}(\\lambda_{*})=0\\). If the coupling as predicted by \\(\\beta_{g^{2}}\\simeq\\eta g^{2}\\) alone approaches \\(g_{\\rm cr}\\) for some finite scale \\(k_{\\rm cr}\\), the quark sector becomes critical and all couplings start to flow rapidly. To the present level of accuracy, this serves as an indication for \\(\\chi\\)SB. Of course, if the gauge coupling dropped quickly for decreasing \\(k\\), the quark sector could, in principle, become subcritical again. However, this might happen only for a marginal range of \\(g^{2}\\simeq g_{\\rm cr}^{2}\\) if at all. For even larger gauge coupling, the flow towards \\(\\chi\\)SB is unavoidable. Inside the \\(\\chi\\)SB regime, also the induced quark masses back-react onto the gluonic flow in the form of a decoupling of the quark fluctuations, i.e., \\(\\eta^{\\rm q}\\) in Eq. (28) approaches zero. However, the present truncation does not allow to explore the properties of the \\(\\chi\\)SB sector; for this, the introduction of effective mesonic degrees of freedom along the lines of [53, 56] is most useful and will be employed in future work. ## 5 Chiral phase transition Let us now discuss our results for the chiral phase transition in the framework presented so far. As elucidated in the previous section, the breaking of chiral-symmetry is triggered if the gauge coupling \\(g^{2}\\) increases beyond \\(g_{\\rm cr}^{2}\\), signaling criticality of the quark sector. We study the dependence of the chiral symmetry status on two parameters: temperature \\(T\\) and number of (massless) flavors \\(N_{\\rm f}\\). As already discussed in Sect. 3, the increase of the running coupling in the IR is weakened on average for both larger \\(T\\) and larger \\(N_{\\rm f}\\). In addition, also \\(g_{\\rm cr}\\) depends on \\(T\\) and \\(N_{\\rm f}\\), even though the \\(N_{\\rm f}\\) dependence is rather weak. The \\(T\\) dependence of \\(g_{\\rm cr}\\) has a physical interpretation: at finite \\(T\\), all quark modes acquire thermal masses which leads to a quark decoupling for \\(k\\lesssim T\\). Hence, stronger interactions are required to excite critical quark dynamics. 
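In the one-channel caricature introduced above, this effect is transparent: if every loop term is suppressed by a common thermal factor \\(\\ell(T/k)\\le 1\\), mimicking the threshold functions, the fixed points merge at \\(g_{\\rm cr}=(ab)^{-1/4}\\,\\ell^{-1/2}\\), so the critical coupling grows once \\(k\\) drops below \\(T\\). The specific factor \\(\\ell\\) used below is an assumption for illustration only, and \\(a,b\\) are tuned such that the \\(T=0\\) value reproduces \\(\\alpha_{\\rm cr}\\simeq 0.8\\) merely for orientation.

```python
import numpy as np

# Thermal raising of the critical coupling in the one-channel caricature:
#   d(lambda)/dt = 2*lam - ell*(a*lam**2 + b*g**4),   ell = ell(T/k) <= 1,
# so the fixed points merge when ell^2 a b g^4 = 1, i.e. g_cr = (a b)^(-1/4) / sqrt(ell).
a = b = 0.0995            # tuned only so that the T = 0 value matches alpha_cr ~ 0.8

def ell(T_over_k):
    # toy stand-in for the fermionic threshold functions: -> 1 for k >> T, -> 0 for k << T
    return 1.0 / (1.0 + (T_over_k / np.pi) ** 2) ** 2

for T_over_k in (0.0, 0.5, 1.0, 2.0, 5.0):
    g_cr2 = (a * b) ** (-0.5) / ell(T_over_k)
    print(f"T/k = {T_over_k:3.1f}:  alpha_cr = {g_cr2 / (4.0 * np.pi):6.3f}")
```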
Technically, this \\(T/k\\) dependence is a direct consequence of the \\(T/k\\) dependence of the threshold functions \\(l_{1}^{\\rm(F)},l_{1,2}^{\\rm(FB)},l_{1,1}^{\\rm(FB)}\\) in Eqs. (43) - (46), see App. A for their definition. Since the threshold functions decrease with increasing temperature, the \\(\\lambda_{i}\\) parabolas visualized in Fig. 5 become broader with a higher maximum; hence, the annihilation of the Gaussian fixed point by pushing the parabola below the \\(\\lambda_{i}\\) axis requires a larger \\(g_{\\rm cr}\\). At zero temperature and for small \\(N_{\\rm f}\\), the IR fixed point of the running coupling is far larger than \\(g_{\\rm cr}^{2}\\), hence the QCD vacuum is in the \\(\\chi\\)SB phase. For increasing \\(T\\), the temperature dependence of the coupling and that of \\(g_{\\rm cr}^{2}\\) compete with each other. This is illustrated in Fig. 6 where we show the running coupling \\(\\alpha_{\\rm s}\\equiv\\frac{g^{2}}{4\\pi}\\) and its critical value \\(\\alpha_{\\rm cr}\\equiv\\frac{g_{\\rm cr}^{2}}{4\\pi}\\) for \\(T=130\\,{\\rm MeV}\\) and \\(T=220\\,{\\rm MeV}\\) as a function of the regulator scale \\(k\\). The intersection point \\(k_{\\rm cr}\\) between both marks the scale where the quark dynamics becomes critical. Below the scale \\(k_{\\rm cr}\\), the system runs quickly into the \\(\\chi\\)SB regime. We estimate the critical temperature \\(T_{\\rm cr}\\) as the lowest temperature for which no intersection point between \\(\\alpha_{\\rm s}\\) and \\(\\alpha_{\\rm cr}\\) occurs.7 We find Footnote 7: Strictly speaking, this simplified analysis yields a sufficient but not a necessary criterion for chiral-symmetry restoration. In this sense, our estimate for \\(T_{\\rm cr}\\) is an upper bound for the true \\(T_{\\rm cr}\\). Small corrections to this estimate could arise, if the quark dynamics becomes uncritical again by a strong decrease of the gauge coupling towards the IR, as discussed in the preceding section. \\[T_{\\rm cr} \\approx 186\\pm 33\\,{\\rm MeV}\\quad{\\rm for}\\quad N_{\\rm f}=2,\\] \\[T_{\\rm cr} \\approx 161\\pm 31\\,{\\rm MeV}\\quad{\\rm for}\\quad N_{\\rm f}=3, \\tag{48}\\] for massless quark flavors in good agreement with lattice simulations [57]. The errors arise from the experimental uncertainties on \\(\\alpha_{\\rm s}\\)[38]. The theoretical error owing to the color-factor uncertainty turns out to be subdominant by far, see Fig. 7. Dimensionless observable ratios are less contaminated by this uncertainty of \\(\\alpha_{\\rm s}\\). For instance, the relative difference for \\(T_{\\rm cr}\\) for \\(N_{\\rm f}=2\\) and 3 flavors is \\[\\Delta:=\\frac{T_{\\rm cr}^{N_{\\rm f}=2}-T_{\\rm cr}^{N_{\\rm f}=3}}{(T_{\\rm cr}^ {N_{\\rm f}=2}+T_{\\rm cr}^{N_{\\rm f}=3})/2}=0.144\\begin{array}{c}+0.018\\\\ -0.013\\end{array} \\tag{49}\\] in reasonable agreement with the lattice value of \\(\\sim 0.12\\)[57].8 Footnote 8: Even this comparison is potentially contaminated by fixing the two theories with different flavor content in different ways. Whereas lattice simulations generically keep the string tension fixed, we determine all scales by fixing \\(\\alpha\\) at the \\(\\tau\\) mass scale, cf. the discussion below. For the case of many massless quark flavors \\(N_{\\rm f}\\), the critical temperature is plotted in Fig. 7. We observe an almost linear decrease of the critical temperature for increasing \\(N_{\\rm f}\\) with a slope of \\(\\Delta T_{\\rm cr}=T(N_{\\rm f})-T(N_{\\rm f}+1)\\approx 25\\,{\\rm MeV}\\). 
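Algorithmically, the determination of \\(T_{\\rm cr}\\) thus reduces to a scan: for a given \\(T\\), test whether \\(\\alpha_{\\rm s}(k,T)\\) exceeds \\(\\alpha_{\\rm cr}(k,T)\\) for some \\(k\\), and bisect in \\(T\\) for the lowest temperature without such an intersection. The sketch below implements only this scan logic; the two curves are crude toy parametrizations invented for illustration and not the flow-equation results of Fig. 6, so the printed number is not Eq. (48).

```python
import numpy as np

# Toy parametrizations, invented for illustration only (not the flow results of Fig. 6):
#  - alpha_s(k, T): one-loop-like rise, frozen at an IR plateau, thermally weakened for k << T
#  - alpha_cr(k, T): constant at T = 0, growing with T/k since thermal quark masses require
#    stronger interactions to reach criticality
def alpha_s(k, T, alpha_star=2.8, Lam=0.25):
    pert = 1.0 / max(np.log(max(k, 1.05 * Lam) / Lam), 1e-3)
    return min(pert, alpha_star) / (1.0 + (T / k) ** 2)

def alpha_cr(k, T, alpha_cr0=0.8):
    return alpha_cr0 * (1.0 + (T / k) ** 2)

def becomes_critical(T, k_grid):
    return any(alpha_s(k, T) > alpha_cr(k, T) for k in k_grid)

k_grid = np.geomspace(1e-3, 10.0, 400)     # GeV
T_lo, T_hi = 0.01, 1.0                     # bracket: critical at T_lo, symmetric at T_hi
for _ in range(40):                        # bisection for the lowest T without intersection
    T_mid = 0.5 * (T_lo + T_hi)
    if becomes_critical(T_mid, k_grid):
        T_lo = T_mid
    else:
        T_hi = T_mid

print(f"toy T_cr ~ {0.5 * (T_lo + T_hi) * 1000:.0f} MeV")   # a toy number; the full flow gives Eq. (48)
```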
Figure 6: Running QCD coupling \\(\\alpha_{\\rm s}(k,T)\\) for \\(N_{\\rm f}=3\\) massless quark flavors and \\(N_{\\rm c}=3\\) colors and the critical value of the running coupling \\(\\alpha_{\\rm cr}(k,T)\\) as a function of \\(k\\) for \\(T=130\\,{\\rm MeV}\\) (left panel) and \\(T=220\\,{\\rm MeV}\\) (right panel). The existence of the \\((\\alpha_{\\rm s},\\alpha_{\\rm cr})\\) intersection point in the left panel indicates that the \\(\\chi\\)SB quark dynamics can become critical for \\(T=130\\,{\\rm MeV}\\).

In addition, we find a critical number of quark flavors, \\(N_{\\rm f}^{\\rm cr}\\simeq 12.9\\), above which no chiral phase transition occurs. This result for \\(N_{\\rm f}^{\\rm cr}\\) agrees with other studies based on the 2-loop \\(\\beta\\) function [51]. However, the precise value of \\(N_{\\rm f}^{\\rm cr}\\) has to be taken with care: for instance, in a perturbative framework, \\(N_{\\rm f}^{\\rm cr}\\) is sensitive to the 3-loop coefficient which can bring \\(N_{\\rm f}^{\\rm cr}\\) down to \\(N_{\\rm f}^{\\rm cr}\\simeq 10\\) [50]. In our nonperturbative approach, the truncation error can induce similar uncertainties; in fact, it is reassuring that our prediction for \\(N_{\\rm f}^{\\rm cr}\\) lies in the same ballpark as the perturbative estimates, even though the details of the corresponding \\(\\beta_{g^{2}}\\) are very different. This suggests that our truncation error for \\(N_{\\rm f}^{\\rm cr}\\) is also of order \\({\\cal O}(1)\\). We expect that a more reliable estimate can be obtained even within our truncation by a regulator optimization [58, 15].

A remarkable feature of the \\((T,N_{\\rm f})\\) phase diagram of Fig. 7 is the shape of the phase boundary, in particular, the flattening near \\(N_{\\rm f}^{\\rm cr}\\). In fact, this shape can be understood analytically, revealing a direct connection between two universal quantities: the phase boundary and the IR critical exponent of the running coupling.

Figure 7: Chiral-phase-transition temperature \\(T_{\\rm cr}\\) versus the number of massless quark flavors \\(N_{\\rm f}\\) for \\(N_{\\rm f}\\geq 2\\). The flattening at \\(N_{\\rm f}\\gtrsim 10\\) is a consequence of the IR fixed-point structure. The dotted line depicts the analytic estimate near \\(N_{\\rm f}^{\\rm cr}\\) which follows from the fixed-point scenario (cf. Eq. (55) below). Squares and triangles correspond to calculations with a background field in the 8- and 3-direction of the Cartan, respectively. The theoretical uncertainty which is given by the difference between both is obviously negligible in full QCD.

Before we outline the argument in detail, let us start with an important caveat: varying \\(N_{\\rm f}\\), unlike varying \\(T\\), corresponds to an unphysical deformation of a physical system. Whereas the deformation itself is, of course, unambiguously defined, the comparison of the physical theory with the deformed theory (or between two deformed theories) is not unique. A meaningful comparison requires identifying one parameter or one scale in both theories. In our case, we always keep the running coupling at the \\(\\tau\\) mass scale fixed to \\(\\alpha(m_{\\tau})=0.322\\). Obviously, the couplings in the two theories are different on all other scales, as are generally all dimensionful quantities such as \\(\\Lambda_{\\rm QCD}\\). There is, of course, no generic choice for fixing the corresponding theories relative to each other.
Nevertheless, we believe that our choice is particularly useful, since the \\(\\tau\\) mass scale is close to the transition between perturbative and nonperturbative regimes. In this sense, a meaningful comparison between the theories can be made in both regimes, without being too much afflicted by the choice of the fixing condition.

Let us now study the shape of the phase boundary for small \\(N_{\\rm f}\\). Once the coupling is fixed to \\(\\alpha(m_{\\tau})=0.322\\), no free parameter is left. As a crude approximation, the mass scale of all dimensionful IR observables such as the critical temperature \\(T_{\\rm cr}\\) is set by the scale \\(k_{\\rm co}\\) where the running gauge coupling undergoes the crossover from small to nonperturbatively large couplings (for instance, one can define the crossover scale \\(k_{\\rm co}\\) from the inflection point of the running coupling in Fig. 3). As an even cruder estimate, let us approximate \\(k_{\\rm co}\\) by the position of the Landau pole of the perturbative one-loop running coupling.9 The latter can be derived from the one-loop relation Footnote 9: Actually, this is a reasonable estimate, since the \\(N_{\\rm f}\\) dependence of \\(k_{\\rm co}\\), which is all that matters in the following, is close to the perturbative behavior. \\[\\frac{1}{\\alpha(k)}=\\frac{1}{\\alpha(m_{\\tau})}+4\\pi b_{0}\\ln\\frac{k}{m_{\\tau}},\\quad b_{0}=\\frac{1}{8\\pi^{2}}\\left(\\frac{11}{3}N_{\\rm c}-\\frac{2}{3}N_{\\rm f}\\right). \\tag{50}\\] Defining \\(k_{\\rm co}\\) by the Landau-pole scale, \\(1/\\alpha(k_{\\rm co})=0\\), and estimating the order of the critical temperature by \\(T_{\\rm cr}\\sim k_{\\rm co}\\), we obtain \\[T_{\\rm cr}\\sim m_{\\tau}\\,{\\rm e}^{-\\frac{1}{4\\pi b_{0}\\alpha(m_{\\tau})}}\\simeq m_{\\tau}\\,{\\rm e}^{-\\frac{6\\pi}{11N_{\\rm c}\\alpha(m_{\\tau})}}\\left(1-\\epsilon N_{\\rm f}+{\\cal O}((\\epsilon N_{\\rm f})^{2})\\right), \\tag{51}\\] where \\(\\epsilon=\\frac{12\\pi}{121N_{\\rm c}^{2}\\alpha(m_{\\tau})}\\simeq 0.107\\) for \\(N_{\\rm c}=3\\). This simple estimate hence predicts a linear decrease of the phase boundary \\(T_{\\rm cr}(N_{\\rm f})\\) for small \\(N_{\\rm f}\\), as is confirmed by the full solution plotted in Fig. 7. Actually, this estimate is also quantitatively accurate, since it predicts a relative difference for \\(T_{\\rm cr}\\) for \\(N_{\\rm f}=2\\) and 3 flavors of \\(\\Delta\\simeq 0.146\\), which is in very good agreement with the full result given in Eq. (49). We conclude that the shape of the phase boundary for small \\(N_{\\rm f}\\) is basically dominated by fermionic screening.

For larger \\(N_{\\rm f}\\), the above estimate can no longer be used, because neither one-loop perturbation theory nor the \\(N_{\\rm f}\\) expansion is justified. For values of \\(N_{\\rm f}\\) close to the critical value \\(N_{\\rm f}^{\\rm cr}\\), a different analytic argument can be made: here the running coupling has to come close to its maximal value in order to be strong enough to trigger \\(\\chi\\)SB. The maximal value is, of course, close to the IR fixed-point value \\(\\alpha_{*}\\) attained for \\(T=0\\). Even though at finite \\(T\\) the coupling is eventually governed by the \\(3d\\) fixed point, implying a linear decrease with \\(k\\), the \\(\\chi\\)SB properties will still be dictated by the maximum coupling value, which roughly corresponds to the \\(T=0\\) fixed point.
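The numbers entering this estimate are easily reproduced. The snippet evaluates the one-loop Landau-pole scale from Eq. (50) for \\(N_{\\rm c}=3\\) and \\(\\alpha(m_{\\tau})=0.322\\), forms the relative difference \\(\\Delta\\) between \\(N_{\\rm f}=2\\) and \\(N_{\\rm f}=3\\), and checks \\(\\epsilon\\simeq 0.107\\) of Eq. (51).

```python
import numpy as np

# One-loop Landau-pole estimate of the N_f dependence of T_cr, cf. Eqs. (50)-(51).
m_tau, alpha_tau, Nc = 1.777, 0.322, 3            # GeV, alpha_s(m_tau), number of colors

def k_co(Nf):
    b0 = (11.0 * Nc / 3.0 - 2.0 * Nf / 3.0) / (8.0 * np.pi ** 2)
    return m_tau * np.exp(-1.0 / (4.0 * np.pi * b0 * alpha_tau))   # scale where 1/alpha = 0

delta = (k_co(2) - k_co(3)) / (0.5 * (k_co(2) + k_co(3)))
eps = 12.0 * np.pi / (121.0 * Nc ** 2 * alpha_tau)

print(f"k_co(Nf=2) = {k_co(2):.3f} GeV,  k_co(Nf=3) = {k_co(3):.3f} GeV")
print(f"Delta   = {delta:.3f}")   # ~0.149; the 0.146 in the text uses the linearized Eq. (51),
                                  # and the full flow gives 0.144, Eq. (49)
print(f"epsilon = {eps:.4f}")     # ~0.107, as quoted below Eq. (51)
```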
In the fixed-point regime, we can approximate the \\(\\beta_{g^{2}}\\) function by a linear expansion about the fixed-point value, \\[\\beta_{g^{2}}\\equiv\\partial_{t}g^{2}=-\\Theta\\left(g^{2}-g_{*}^{2}\\right)+{\\cal O }((g^{2}-g_{*}^{2})^{2}), \\tag{52}\\]where the universal \"critical exponent\" \\(\\Theta\\) denotes the (negative) first expansion coefficient. We know that \\(\\Theta<0\\), since the fixed point is IR attractive. For vanishing temperature, we find an approximate linear dependence of \\(\\Theta\\) on \\(N_{\\rm f}\\), cf. Tab. 1. The solution of Eq. (52) for the running coupling in the fixed-point regime reads \\[g^{2}(k)=g_{*}^{2}-\\left(\\frac{k}{k_{0}}\\right)^{-\\Theta}, \\tag{53}\\] where the scale \\(k_{0}\\) is implicitly defined by a suitable initial condition (to be set in the fixed-point regime) and is kept fixed in the following. It provides for all dimensionful scales in the following and is related to the initial \\(\\tau\\) mass scale by RG evolution. Our criterion for \\(\\chi\\)SB to occur is that \\(g^{2}(k)\\) should exceed \\(g_{\\rm cr}^{2}\\) for some value of \\(k=k_{\\rm cr}\\). We expect that this scale \\(k_{\\rm cr}\\) is generically somewhat larger than the temperature, since for \\(k\\) smaller than \\(T\\) the coupling decreases again owing to the \\(3d\\) fixed point.10 This allows us to ignore the \\(T\\) dependence of the running coupling \\(g^{2}\\) and of the critical coupling \\(g_{\\rm cr}\\) as a rough approximation, since the \\(T\\) dependence of the threshold functions is rather weak for \\(T\\lesssim k\\). From Eq. (53) and the condition \\(g^{2}(k_{\\rm cr})=g_{\\rm cr}^{2}\\), we derive the estimate Footnote 10: Indeed, this assumption is justified, since we find in the full calculation that \\(k_{\\rm cr}\\gg T\\) for large \\(N_{\\rm f}\\) and for temperatures in the vicinity of the critical temperature \\(T_{\\rm cr}\\). \\[k_{\\rm cr}\\simeq k_{0}\\,(g_{*}^{2}-g_{\\rm cr}^{2})^{-\\frac{1}{\\Theta}}. \\tag{54}\\] This scale \\(k_{\\rm cr}\\) plays the same role as the crossover scale \\(k_{\\rm co}\\) in the small-\\(N_{\\rm f}\\) argument given above: it sets the scale for \\(T_{\\rm cr}\\sim k_{\\rm cr}\\), with a proportionality coefficient provided by the solution of the full flow. To conclude the argument, we note that the IR fixed-point value \\(g_{*}^{2}\\) roughly depends linearly on \\(N_{\\rm f}\\), since the quark contribution to the coupling flow \\(\\eta^{\\rm q}\\) is linear in \\(N_{\\rm f}\\). From Eq. (54), we thus find the relation \\[T_{\\rm cr}\\sim k_{0}|N_{\\rm f}-N_{\\rm f}^{\\rm cr}|^{-\\frac{1}{\\Theta}}, \\tag{55}\\] which is expected to hold near \\(N_{\\rm f}^{\\rm cr}\\) for \\(N_{\\rm f}\\leq N_{\\rm f}^{\\rm cr}\\). Here, \\(\\Theta\\) should be evaluated at \\(N_{\\rm f}^{\\rm cr}\\).11 Relation (55) is an analytic prediction for the shape of the chiral phase boundary in the \\((T,N_{\\rm f})\\) plane of QCD. Remarkably, it relates two universal quantities with each other: the phase boundary and the IR critical exponent. Footnote 11: Accounting for the \\(N_{\\rm f}\\) dependence of \\(\\Theta\\) by an expansion around \\(N_{\\rm f}^{\\rm cr}\\) yields mild logarithmic corrections to Eq. (55). This relation can be checked with a fit of the full numerical result parametrized by the RHS of Eq. (55). 
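The structure of Eqs. (53)-(55) can be made concrete with a few lines of arithmetic. Assuming a linear \\(N_{\\rm f}\\) dependence of the fixed-point value, \\(g_{*}^{2}=g_{\\rm cr}^{2}\\,[1+c\\,(N_{\\rm f}^{\\rm cr}-N_{\\rm f})]\\) with an illustrative slope \\(c\\) and toy values for all other constants, the crossing scale (54) produces the powerlaw (55), and the exponent can be read back off by a log-log fit, mimicking the determination of \\(\\Theta_{\\rm fit}\\) described below.

```python
import numpy as np

# Illustration of Eqs. (54)-(55): with an assumed linear N_f dependence of the fixed-point
# value g_*^2, the crossing scale k_cr (and hence T_cr) follows a powerlaw in (Nf_cr - Nf).
# All numbers below are illustrative.
Theta = -0.60
Nf_cr = 12.9
g_cr2 = 10.0                                     # toy value of the critical coupling
c = 0.08                                         # assumed slope of g_*^2(N_f)
k0 = 1.0

Nf = np.linspace(9.0, 12.7, 20)
g_star2 = g_cr2 * (1.0 + c * (Nf_cr - Nf))       # g_*^2 > g_cr^2 for Nf < Nf_cr
k_cr = k0 * (g_star2 - g_cr2) ** (-1.0 / Theta)  # Eq. (54);  T_cr ~ k_cr

# read the exponent back off from a log-log fit, as done for Theta_fit in the text
slope, _ = np.polyfit(np.log(Nf_cr - Nf), np.log(k_cr), 1)
print(f"fitted exponent -1/Theta = {slope:.3f}  ->  Theta = {-1.0/slope:.2f}")   # recovers -0.60
```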
In fact, the fit result, \\(\\Theta_{\\rm fit}\\simeq-0.60\\), determined from the phase boundary agrees with the direct determination of the critical exponent from the zero-temperature \\(\\beta\\) function, \\(\\Theta(N_{\\rm f}^{\\rm cr}\\simeq 12.9)\\simeq-0.60\\), within a one-percent accuracy (cf. Table 1). The fit is depicted by the dashed line in Fig. 7. In particular, the fact that \\(|\\Theta|<1\\,{\\rm near}\\,\\,N_{\\rm f}^{\\rm cr}\\) explains the flattening of the phase boundary near the critical flavor number. Qualitatively, relation (55) is a consequence of the IR fixed-point scenario predicted by our truncated flow equation. We emphasize, however, that the quantitative results for universal quantities such as \\(\\Theta\\) are likely to be affected by truncation errors. These can be reduced by an optimization of the present flow; we expect from preliminary regulator studies that more reliable estimates of \\(\\Theta\\) yield smaller absolute values and, thus, a more pronounced flattening of the phase boundary. We are aware of the fact that the relation (55) is difficult to test, for instance, by lattice gauge theory: neither the fixed-point scenario in the deep IR nor large flavor numbers are easily accessible, even though there are promising investigations that have collected evidence for the IR fixed-point scenario in the Landau gauge [59, 60] (see also [61, 62, 63]) as well as the existence of a critical flavor number [64]. Given the conceptual simplicity of the fixed-point scenario in combination with \\(\\chi\\)SB, further lattice studies are certainly worthwhile. ## 6 Conclusions and outlook We have obtained new nonperturbative results for the chiral phase boundary of QCD in the plane spanned by temperature and quark flavor number. Our work is based on the functional RG which provides for a functional differential formulation of QCD in terms of a flow equation for the effective action. We have studied this effective action from first principles in a systematic and consistent operator expansion which is partly reminiscent to a gradient expansion. We consider the truncated expansion as a minimal approximation of the effective action that is capable to access the nonperturbative IR domain and address the phenomenon of chiral symmetry breaking. In the gluon sector, this truncation provides for a stable flow of the gauge coupling, running into a fixed point in the IR at zero temperature in agreement with the results of [13] for the pure glue sector. As a new result, we find that the \\(3d\\) analogue of this IR fixed point governs the flow of the gauge coupling at finite temperature for scales \\(k\\ll T\\). Our truncation in the quark sector facilitates a description of critical dynamics with a gluon-driven approach to \\(\\chi\\)SB. The resulting picture for \\(\\chi\\)SB is comparatively simple: \\(\\chi\\)SB requires the coupling to exceed a critical value \\(g_{\\rm cr}\\). Whether or not this critical coupling is reached depends on the RG flow of the gauge coupling. The IR fixed-point scenario generically puts an upper bound on the maximal coupling value which depends on the external parameters such as temperature and quark flavor number. Of course, the interplay between the gluon and quark sectors in general, and between gauge coupling and critical coupling in particular, is highly nonlinear, since both sectors back-react onto each other in a manner which is quantitatively captured by the flow equation. 
The resulting phase boundary in the \\((T,N_{\\rm f})\\) plane exhibits a characteristic shape which can analytically be understood in terms of simple physical mechanisms: for small \\(N_{\\rm f}\\), we observe a linear decrease of \\(T_{\\rm cr}\\) as a function of \\(N_{\\rm f}\\); this is a direct consequence of the charge-screening properties of light fermions. Also, this screening nature is ultimately responsible for the existence of a critical flavor number \\(N_{\\rm f}^{\\rm cr}\\) above which the system remains in the chirally symmetric phase even at zero temperature (even though the theory is still asymptotically free for \\(N_{\\rm f}\\) not too much larger than \\(N_{\\rm f}^{\\rm cr}\\)). The shape of the phase boundary near the critical flavor number, \\(N_{\\rm f}\\lesssim N_{\\rm f}^{\\rm cr}\\), is most interesting from our viewpoint. In this region, the critical temperature is very small, and thus the system is probed in the deep IR. As a main result of this paper, we have shown that this connection becomes most obvious in an intriguing relation between the shape of the phase boundary for \\(N_{\\rm f}\\lesssim N_{\\rm f}^{\\rm cr}\\) and the IR critical exponent \\(\\Theta\\) of the running coupling at zero temperature. In particular, the flattening of the phase boundary in this regime is a direct consequence of \\(|\\Theta|\\) being smaller than 1. Since both the shape of the phase boundary and the critical exponent are universal quantities, their relation is a generic prediction of our analysis. It can directly be tested by other nonperturbative methods, even though it may numerically be expensive, e.g., in lattice simulations. Let us now critically assess the reliability of our results. Truncating the effective action, at first sight, is an uncontrolled approximation which can a priori be justified only with some insight into the physical mechanisms. The truncation in the quark sector supporting potential critical dynamics is an obvious example for this. The approximation can become (more) controlled if the inclusion of higher-order operators does not lead to serious modifications of the results. In the quark sector, it can indeed easily be verified that the contribution of many higher-order operators such as \\((\\overline{\\psi}\\psi)^{4}\\) or mixed gluonic-fermionic operators is generically suppressed by the one-loop structure of the flow equation or the fixed-point argument given below Eq. (47). This holds at least in the symmetric regime, which is sufficient to trace out the phase boundary. By contrast, we are not aware of similar arguments for the gluonic sector; here, higher-order expansions involving, e.g., \\((F_{\\mu\ u}\\widetilde{F}^{\\mu\ u})^{2}\\) or operators with covariant derivatives or ghost fields eventually have to be used to verify the expansion scheme. At finite temperature, the difference between so-called electric and magnetic sectors can become important, as mediated by operators involving the heat-bath four-velocity \\(u_{\\mu}\\), e.g., \\((F_{\\mu\ u}u_{\ u})^{2}\\). In view of results obtained in the Landau gauge [39], the inclusion of ghost contributions in the gauge sector appears important if not mandatory for a description of color confinement. A posteriori, the truncation can be verified by a direct comparison with lattice results. In the present case, this cross-check shows satisfactory agreement. The stability of the present results can also be studied by varying the regulator. 
Since universal quantities are independent of the regulator in the exact theory, any such regulator dependence of the truncated system is a measure for the reliability of the truncation. As was already quantitatively verified at vanishing temperature in [50], the present quark sector shows surprisingly little dependence on the regulator which strongly supports the truncation. By contrast, we do not expect such a regulator independence to hold in the truncated gluonic sector. If so, it is advisable to improve results for universal quantities towards their physical values. This can indeed be done by using stability criteria for the flow equation which have lead to optimization schemes [58, 15, 65]. We expect that the use of such optimized regulators give better results for dimensionless quantities, e.g. Eq. (49) or the IR critical exponent \\(\\Theta\\). In any case, we have confirmed that, for instance, the linear regulator [58], which satisfies optimization criteria in various systems, leads to the same qualitative results as presented above. Further regulator studies are left to future work. Further generalizations of our work will aim at a quantitative study of the effect of finite quark masses; the formalism of which has largely been developed already in this work. Owing to the mechanism of fermionic decoupling, we expect that the largest modifications arise from a realistic strange quark mass which is of the order of the characteristic scales such as \\(T_{\\rm cr}\\) or the scale of \\(\\chi\\)SB. Let us finally stress that our whole quantitative analysis relies on only one physical input parameter, namely the value of the gauge coupling at a physical input scale. This clearly demonstrates the predictive power of the functional RG approach for full QCD, and serves as a promising starting point for further phenomenological applications. ## Acknowledgment The authors are grateful to J. Jaeckel, J.M. Pawlowski, and H.-J. Pirner for useful discussions. H.G. acknowledges support by the DFG under contract \\(\\,\\)Gi 328/1-3 (Emmy-Noether program). J.B. acknowledges support by the GSI Darmstadt. ## Appendix A Thermal moments and threshold functions ### Thermal moments Let us first define the auxiliary functions \\(f\\) which are first introduced in Eq. (22): \\[f_{T}^{A}(u,v) = 2\\sqrt{4\\pi}v\\sum_{q=-\\infty}^{\\infty}\\int_{0}^{\\infty}dx\\,{\\rm e }^{-(2\\pi vx)^{2}u}\\cos(2\\pi qx)\\,,\\] (A.1) \\[f_{T}^{\\psi}(u,v) = 2\\sqrt{4\\pi}v\\sum_{q=-\\infty}^{\\infty}(-1)^{q}\\int_{0}^{\\infty }dx\\,{\\rm e}^{-(2\\pi vx)^{2}u}\\cos(2\\pi qx)\\,,\\] (A.2) \\[f^{\\psi}(u) = \\frac{1}{2}\\frac{1}{u^{e_{d}}}u\\coth(u),\\] (A.3) \\[f_{1}^{A}(u) = \\frac{1}{u^{e_{d}}}\\Big{(}e_{d}\\,\\frac{u}{\\sinh u}+2u\\sinh u \\Big{)}\\,,\\] (A.4) \\[f_{2}^{A}(u) = \\frac{1}{2}\\frac{1}{u^{e_{d}}}\\frac{u}{\\sinh u}\\,,\\] (A.5) \\[f_{3}^{A}(u) = \\frac{1}{u^{e_{d}}}(1-v)\\,,\\] (A.6) \\[f_{4}^{A}(u,v) = 2\\sqrt{4\\pi}v\\sum_{q=-\\infty}^{\\infty}\\int_{0}^{\\infty}dx(2\\pi v x )^{d-1}\\Gamma\\big{(}-e_{d},(2\\pi vx)^{2}u\\big{)}\\cos(2\\pi qx).\\] (A.7)Here, the sum over \\(q\\) arises from the application of Poisson's Formula to the (usual) Matsubara sum. These functions are needed for the construction of the thermal moments \\(\\bar{h}_{j}^{\\psi}\\)\\(\\bar{h}_{j}^{A}\\), \\(\\bar{g}_{j}^{A}\\), \\(\\bar{H}_{j}^{A}\\) and \\(\\bar{G}_{j}^{A}\\) which are related to the regulator via Eqs. 
(6), (7) and (12),(13) by \\[\\bar{h}_{j}^{\\psi}:=\\bar{h}_{j}^{\\psi}\\big{(}\\tilde{m},v\\big{)}=\\int_{0}^{ \\infty}\\!ds\\,\\widetilde{h}^{\\psi}(s,\\tilde{m})s^{j}f_{T}^{\\psi}\\big{(}s,v\\big{)}\\,,\\] (A.8) \\[\\bar{h}_{j}^{A}:=\\bar{h}_{j}^{A}\\big{(}v\\big{)}=\\int_{0}^{\\infty}\\!ds\\, \\widetilde{h}(s)s^{j}f_{T}^{A}\\big{(}s,v\\big{)}\\,,\\] (A.9) \\[\\bar{g}_{j}^{A}:=\\bar{g}_{j}^{A}\\big{(}v\\big{)}=\\int_{0}^{\\infty}\\!ds\\, \\widetilde{g}(s)s^{j}f_{T}^{A}\\big{(}s,v\\big{)}\\,,\\] (A.10) \\[\\bar{H}_{j}^{A}:=\\bar{H}_{j}^{A}\\big{(}v\\big{)}=\\int_{0}^{\\infty}\\!ds\\, \\widetilde{h}(s)s^{j}f_{4}^{A}\\big{(}s,v\\big{)}\\,,\\] (A.11) \\[\\bar{G}_{j}^{A}:=\\bar{G}_{j}^{A}\\big{(}v\\big{)}=\\int_{0}^{\\infty}\\!ds\\, \\widetilde{g}(s)s^{j}f_{4}^{A}\\big{(}s,v\\big{)}\\,,\\] (A.12) where \\(\\tilde{m}\\) denotes a dimensionless quark mass parameter. It is more convenient to express the moments in terms of the regulator functions \\(h(y)\\) and \\(g(y)\\) in momentum space which are defined in Eqs. (6) and (7). In order to obtain the representations for \\(\\bar{h}_{j}\\) and \\(\\bar{g}_{j}\\), we introduce \\[\\frac{s^{b+1}}{\\Gamma(b+1)}\\int_{0}^{\\infty}du\\,u^{b}{\\rm e}^{-su}=1\\qquad(b> -1)\\] (A.13) in Eq. (A.9), (A.10) and (A.8) and use Eq. (A.1) and (A.2), respectively: \\[\\bar{h}_{j}^{\\psi}=\\frac{2}{\\Gamma(b+1)\\sqrt{\\pi}}\\sum_{q=-\\infty}^{\\infty}(- 1)^{q}\\int_{0}^{\\infty}\\!dx\\,\\cos\\Big{(}q\\frac{x}{v}\\Big{)}\\,\\left(-\\frac{d}{ dy}\\right)^{j+b+1}\\int_{0}^{\\infty}\\!du\\,u^{b}h^{\\psi}(y+u+x^{2},\\tilde{m}) \\bigg{|}_{y=0}\\,,\\] (A.14) \\[\\bar{h}_{j}^{A}=\\frac{2}{\\Gamma(b+1)\\sqrt{\\pi}}\\sum_{q=-\\infty}^{\\infty}\\int_ {0}^{\\infty}\\!dx\\,\\cos\\Big{(}q\\frac{x}{v}\\Big{)}\\,\\left(-\\frac{d}{dy}\\right)^{ j+b+1}\\int_{0}^{\\infty}\\!du\\,u^{b}h(y+u+x^{2})\\bigg{|}_{y=0}\\,,\\] (A.15) \\[\\bar{g}_{j}^{A}=\\frac{2}{\\Gamma(b+1)\\sqrt{\\pi}}\\sum_{q=-\\infty}^{\\infty}\\int_ {0}^{\\infty}\\!dx\\,\\cos\\Big{(}q\\frac{x}{v}\\Big{)}\\,\\left(-\\frac{d}{dy}\\right)^ {j+b+1}\\int_{0}^{\\infty}\\!du\\,u^{b}g(y+u+x^{2})\\bigg{|}_{y=0}\\,.\\] (A.16) Note that \\(b\\) is an arbitrary parameter which can, e.g., be used to avoid fractional derivatives. By applying Poisson's formula to the (usual) Matsubara sum, we have obtained the sum over \\(q\\) which converges fast for \\(k\\gtrsim T\\). Moreover, we need \\(\\bar{H}_{j}^{A}\\) and \\(\\bar{G}_{j}^{A}\\) for \\(j=0\\), which are used in App. B. Integrating Eq. (A.11) and (A.12) by parts and using Eq. (A.13) and (A.7), we obtain \\[\\bar{H}_{0}^{A}=\\frac{2}{\\Gamma(b+1)\\sqrt{\\pi}}\\sum_{q=-\\infty}^{\\infty}\\int_{0} ^{\\infty}\\!dx\\,\\cos\\left(q\\frac{x}{v}\\right)\\,\\,\\Big{(}-\\frac{d}{dy}\\Big{)}^{b -e_{d}}\\int_{0}^{\\infty}\\!du\\,\\frac{u^{b}}{u+x^{2}}h(y+u+x^{2})\\bigg{|}_{y=0}\\,\\] (A.17) \\[\\bar{G}_{0}^{A}=\\frac{2}{\\Gamma(b+1)\\sqrt{\\pi}}\\sum_{q=-\\infty}^{\\infty}\\int_{ 0}^{\\infty}\\!dx\\,\\cos\\left(q\\frac{x}{v}\\right)\\,\\,\\Big{(}-\\frac{d}{dy}\\Big{)}^ {b-e_{d}}\\int_{0}^{\\infty}\\!du\\,\\frac{u^{b}}{u+x^{2}}g(y+u+x^{2})\\bigg{|}_{y=0}\\.\\] (A.18) In this paper, we use the exponential regulator. 
For the gluon and ghost fields, this regulator is given by \\[R_{k}(\\Delta)=\\Delta\\,r\\big{(}\\tfrac{\\Delta}{k^{2}}\\big{)}\\quad\\text{with} \\quad r(y)=\\tfrac{1}{\\mathrm{e}^{y}-1}\\,,\\] (A.19) and the functions \\(h(y)\\) and \\(g(y)\\) read [13] \\[h(y)=\\frac{y}{\\mathrm{e}^{y}-1}\\quad\\text{and}\\quad g(y)=\\mathrm{e}^{-y}\\,.\\] (A.20) For the quark fields, the exponential regulator reads \\[R_{k}^{\\psi}(\\mathrm{i}\\bar{\ ot{D}})=\\mathrm{i}\\bar{\ ot{D}}\\,r_{\\psi}\\Big{(} \\tfrac{(\\mathrm{i}\\bar{\ ot{D}})^{2}}{k^{2}}\\Big{)}\\quad\\text{with}\\quad r_{ \\psi}(y)=\\frac{1}{\\sqrt{1-\\mathrm{e}^{-y}}}-1\\,,\\] (A.21) and the functions \\(h^{\\psi}(y,\\tfrac{m}{k})\\) and \\(g^{\\psi}(y,\\tfrac{m}{k})\\) are given by \\[h^{\\psi}(y,\\tilde{m})=\\frac{y^{2}}{(\\mathrm{e}^{y}-1)(y+\\tilde{m}^{2}(1- \\mathrm{e}^{-y}))}\\quad\\text{and}\\quad g^{\\psi}(y,\\tilde{m})=\\frac{y(1- \\mathrm{e}^{-y})(1-\\sqrt{1-\\mathrm{e}^{-y}})}{y+\\tilde{m}^{2}(1-\\mathrm{e}^{- y})}\\,.\\] (A.22) Inserting Eq. (A.20) and (A.22) into Eqs. (A.14)-(A.18) completely determines the desired thermal moments. ### Threshold functions In Sec. 4, the regulator dependence of the flow equations of the four-fermion interactions is controlled by threshold functions. The purely fermionic threshold functions are defined by \\[l_{n}^{(F)d}(t,w)=n\\frac{v_{d-1}}{v_{d}}\\,t\\sum_{n=-\\infty}^{\\infty}\\int_{0}^{ \\infty}\\,dyy^{\\frac{d-3}{2}}\\frac{p_{\\psi}(y_{\\psi})-y_{\\psi}\\dot{p}_{\\psi}(y_ {\\psi})}{[p_{\\psi}(y_{\\psi})+w]^{n+1}}\\,,\\] (A.23) where \\(t\\equiv T/k\\) and \\(w\\) are dimensionless quantities, the latter being associated with finite quark masses. Dots denote derivatives with respect to \\(y_{\\psi}\\). The dimensionless momentum \\(y_{\\psi}=\\tilde{\ u}_{n}^{2}+y\\) depends on the (dimensionless) fermionic Matsubara frequencies \\(\\tilde{\ u}_{n}=(2n+1)\\pi t\\). The function \\(p_{\\psi}(y_{\\psi})\\) is related to the regulator shape function \\(r_{\\psi}\\) by \\[r_{\\psi}(y_{\\psi})=\\sqrt{\\frac{p_{\\psi}(y_{\\psi})}{y_{\\psi}}}-1\\,.\\] (A.24) The factor \\(v_{d}^{-1}\\) is proportional to the volume of the \\(d\\) dimensional unit ball: \\[v_{d}^{-1}=2^{d+1}\\pi^{\\frac{d}{2}}\\Gamma\\left(\\frac{d}{2}\\right)\\,.\\] (A.25) In Sec. 4, we only need \\(l_{1}^{(F)}\\). Using the exponential regulator Eq. (A.21) and \\(w=0\\) for massless quarks, the fermionic threshold function \\(l_{1}^{(F)}(t,0)\\) reads, \\[l_{1}^{(F)}(t,0)=\\sum_{n=-\\infty}^{\\infty}(-1)^{n}\\mathrm{e}^{-\\frac{n}{2t}}\\,, \\qquad l_{1}^{(F)}(t\\to 0,0)\\longrightarrow 1\\,.\\] (A.26) The threshold functions \\(l_{n_{1},n_{2}}^{(FB)d}(t,w_{1},w_{2})\\) arise from Feynman graphs, incorporating fermionic and bosonic fields: \\[l_{n_{1},n_{2}}^{(FB)d}(t,w_{1},w_{2}) = \\frac{v_{d-1}}{v_{d}}\\,t\\sum_{n=-\\infty}^{\\infty}\\int_{0}^{\\infty }\\,dyy^{\\frac{d-3}{2}}\\frac{1}{[p_{\\psi}(y_{\\psi})+w_{1}]^{n_{1}}[p_{A}(y_{A}) +w_{2}]^{n_{2}}}\\] (A.27) \\[\\times\\left\\{\\frac{n_{1}[p_{\\psi}(y_{\\psi})-y_{\\psi}\\dot{p}_{\\psi }(y_{\\psi})]}{p_{\\psi}(y_{\\psi})+w_{1}}+\\frac{n_{2}[p_{A}(y_{A})-y_{A}\\dot{p}_{ A}(y_{A})]}{p_{A}(y_{A})+w_{2}}\\right\\}\\,.\\] Here, \\(w_{1}\\) and \\(w_{2}\\) are dimensionless arguments and dots denote derivatives with respect to \\(y_{\\psi}\\) and \\(y_{A}\\), respectively. 
In analogy to the fermionic case, the dimensionless bosonic momentum \\(y_{A}=\\tilde{\\omega}_{n}^{2}+y\\) depends on the (dimensionless) bosonic Matsubara frequencies \\(\\tilde{\\omega}_{n}^{2}=4\\pi^{2}n^{2}t^{2}\\). The (bosonic) regulator shape function \\(r\\) is connected with \\(p_{A}\\) by the relation \\[p_{A}(y_{A})=y_{A}[1+r(y_{A})]\\,.\\] (A.28) In Sect. 4, we need \\(l_{1,1}^{(FB)4}\\) and \\(l_{1,2}^{(FB)4}\\). Using the exponential regulator Eq. (A.19) and (A.21), we can calculate the integrals analytically in the limit \\(t\\to 0\\) and \\(w_{1}=w_{2}=0\\) for \\(d=4\\): \\[\\lim_{t\\to 0}l_{1,1}^{(FB)4}(t,0,0)=1\\qquad\\mbox{and}\\qquad\\lim_{t\\to 0}l_{1,2}^{(FB)4}(t,0,0)=3 \\ln(\\tfrac{4}{3})\\,.\\] (A.29) For \\(t\\to\\infty\\) or \\(w\\to\\infty\\), the threshold functions \\(l_{n}^{(F)d}\\) and \\(l_{n_{1},n_{2}}^{(FB)d}\\) approach zero. For finite \\(t\\) and \\(w\\), the threshold functions can easily be evaluated numerically. Resummation of the anomalous dimension Here, we present details for the resummation of the series expansion of the anomalous dimension \\(\\eta\\), \\[\\eta\\simeq\\sum_{i=0}^{\\infty}a_{m}^{\\rm l.g.}\\,G^{m}\\,.\\] (B.1) The leading growth (l.g.) coefficients \\(a_{m}^{\\rm l.g.}\\) read \\[a_{m}^{\\rm l.g.}=a_{m}^{A}+a_{m}^{\\rm q}=4(-2c_{1})^{m-1}\\frac{ \\Gamma(z_{d}+m)\\Gamma(m+1)}{\\Gamma(z_{d}+1)}\\Big{[}\\bar{h}_{2m-e_{d}}^{A}( \\tfrac{T}{k})(d-2)\\frac{2^{2m}-2}{(2m)!}\\tau_{m}^{A}B_{2m}\\\\ -\\frac{4}{\\Gamma(2m)}\\tau_{m}^{A}\\bar{h}_{2m-e_{d}}^{A}(\\tfrac{T} {k})+4^{m+1}\\frac{B_{2m}}{(2m)!}\\tau_{m}^{\\psi}\\sum_{i=1}^{N_{\\rm f}}\\bar{h}_{ 2m-e_{d}}^{\\psi}(\\tfrac{m_{i}}{k},\\tfrac{T}{k})\\Big{]}\\,,\\] (B.2) where \\(B_{2m}\\) are the Bernoulli numbers and \\(z_{d}\\) is defined as \\[z_{d}:=(d-1)(N_{\\rm c}^{2}-1)c_{2}\\,.\\] (B.3) The temperature and regulator-dependent functions \\(c_{1}\\) and \\(c_{2}\\) are given by \\[c_{1} = 2\\big{(}\\bar{H}_{0}^{A}\\big{(}\\tfrac{T}{k}\\big{)}-\\bar{G}_{0}^{A }\\big{(}\\tfrac{T}{k}\\big{)}\\big{)}\\,,\\] (B.4) \\[c_{2} = \\frac{\\bar{h}_{-e_{d}}^{A}\\big{(}\\tfrac{T}{k}\\big{)}-\\bar{g}_{-e _{d}}^{A}\\big{(}\\tfrac{T}{k}\\big{)}}{c_{1}}\\,.\\] (B.5) Note that \\(c_{1}>0\\) and \\(c_{2}>0\\) for \\(\\tfrac{T}{k}\\geq 0\\). In the limits \\(\\tfrac{T}{k}\\to 0\\) and \\(\\tfrac{T}{k}\\to\\infty\\), \\(c_{1}\\) and \\(c_{2}\\) are given by \\[\\lim_{\\tfrac{T}{k}\\to 0}c_{1} = c_{1}^{0}=\\frac{4}{d}\\Big{(}\\frac{d}{2}\\zeta\\Big{(}1+\\frac{d}{2 }\\Big{)}-1\\Big{)}\\,,\\] (B.6) \\[\\lim_{\\tfrac{T}{k}\\to 0}c_{2} = c_{2}^{0}=\\frac{d}{4}\\,,\\] (B.7) \\[\\lim_{\\tfrac{T}{k}\\to\\infty}\\tfrac{k}{T}c_{1} = c_{1}^{\\infty}=2\\sqrt{4\\pi}\\Big{(}\\zeta\\Big{(}1+e_{d}\\Big{)}- \\frac{2}{d-1}\\Big{)}\\,,\\] (B.8) \\[\\lim_{\\tfrac{T}{k}\\to\\infty}c_{2} = c_{2}^{\\infty}=\\frac{e_{d}\\zeta(1+e_{d})-1}{2(\\zeta(1+e_{d})- \\tfrac{2}{d-1})}\\,,\\] (B.9) where we have used Eq. (A.9)-(A.18) for the exponential regulator and \\(\\zeta(x)\\) denotes the Riemann Zeta function. Now, we perform the resummation of \\(\\eta\\) along the lines of [13]: We split the anomalous dimension Eq. (25) into three contributions, \\[\\eta=\\eta_{1}^{A}+\\eta_{2}^{A}+\\eta^{\\rm q}\\,,\\] (B.10)where \\(\\eta_{1}^{A}\\) corresponds to the resummation of the term \\(\\sim\\tau_{m}^{A}B_{2m}\\) in Eq. (B.2) and \\(\\eta_{2}^{A}\\) to the resummation of the term containing the Nielsen-Olesen unstable mode (\\(\\sim 1/\\Gamma(2m)\\)), representing the leading and subleading growth, respectively. 
The remaining contributions are contained in \\(\\eta^{\\rm q}\\). First, we confine ourselves to \\(SU(N_{\\rm c}=2)\\) for which the group theoretical factors are \\(\\tau_{m}^{A}\\ =\\ N_{\\rm c}\\) and \\(\\tau_{m}^{\\psi}\\ =\\ N_{\\rm c}\\,(1/4)^{m}\\ =\\ 2\\,(1/4)^{m}\\) (see Appendix C for details), but we artificially retain the \\(N_{\\rm c}\\) dependence in all terms in order to simplify the generalization to gauge groups of higher rank. We start with the resummation of \\(\\eta_{1}^{A}\\): For this purpose, we use the standard integral representation of the \\(\\Gamma\\) functions [66], \\[\\Gamma(z_{d}+m)\\Gamma(m+1)=\\int_{0}^{\\infty}\\!ds_{1}\\int_{0}^{\\infty}\\!ds_{2} \\,s_{1}s_{2}^{z_{d}}(s_{1}s_{2})^{m-1}{\\rm e}^{-(s_{1}+s_{2})}=\\int_{0}^{\\infty }\\!dp\\,\\tilde{K}_{z_{d}-1}(p)\\,p^{m-1},\\] (B.11) where we have introduced the modified Bessel function \\[\\tilde{K}_{z_{d}-1}(s)=2s^{\\frac{1}{2}(z_{d}+1)}K_{z_{d}-1}(2\\sqrt{s})\\,.\\] (B.12) Furthermore, we use the series representation of the Bernoulli numbers [66], \\[\\frac{B_{2m}}{(2m)!}=2\\frac{(-1)^{m-1}}{(2\\pi)^{2m}}\\sum_{l=1}^{\\infty}\\, \\frac{1}{l^{2m}}\\,.\\] (B.13) With the aid of Eq. (B.11) and (B.13), we rewrite \\(\\eta_{1}^{A}\\) as follows \\[\\eta_{1}^{A}=\\frac{4(d\\!-\\!2)N_{\\rm c}G}{\\pi^{2}\\Gamma(z_{d}\\!+\\!1)}\\,\\sum_{m= 1}^{\\infty}\\sum_{l=1}^{\\infty}\\frac{1}{l^{2}}\\int_{0}^{\\infty}\\!dp\\,\\tilde{K} _{z_{d}-1}(p)\\,\\bar{h}^{A}_{2m-e_{d}}(\\tfrac{T}{k})\\Big{[}2\\Big{(}\\frac{2Gpc_ {1}}{\\pi^{2}l^{2}}\\Big{)}^{m-1}\\!-\\Big{(}\\frac{Gpc_{1}}{2\\pi^{2}l^{2}}\\Big{)} ^{m-1}\\Big{]}\\,.\\] (B.14) In order to perform the summation over \\(m\\), we define \\[S_{b}^{A}(q,v)= \\sum_{l=1}^{\\infty}\\frac{1}{l^{2}}\\sum_{m=1}^{\\infty}\\Big{(} \\frac{q}{l^{2}}\\Big{)}^{m-1}\\bar{h}^{A}_{2m-e_{d}}(v)\\] \\[= \\frac{2}{\\sqrt{\\pi}}\\sum_{l=1}^{\\infty}\\sum_{m=0}^{\\infty}\\sum_{ n=-\\infty}^{\\infty}\\int_{0}^{\\infty}\\!dx\\cos\\Big{(}\\frac{nx}{v}\\Big{)}\\int_{0}^{ \\infty}\\!dt\\,\\frac{{\\rm e}^{-t}}{l^{2}}\\int_{0}^{\\infty}\\!ds\\,\\tilde{h}(s) \\frac{s^{2-e_{d}}}{(2m)!}\\Big{(}\\frac{st\\sqrt{q}}{l}\\Big{)}^{2m}{\\rm e}^{-sx^{2}}\\] \\[= \\frac{1}{\\Gamma(b\\!+\\!1)\\sqrt{q\\pi}}\\sum_{n=-\\infty}^{\\infty}\\int _{0}^{\\infty}\\!dx\\,\\cos\\Big{(}\\frac{nx}{v}\\Big{)}\\int_{0}^{\\infty}\\!dt\\,{\\rm Li }_{1}\\Big{(}{\\rm e}^{-\\frac{t}{\\sqrt{q}}}\\Big{)}\\,\\sigma_{b}^{A}(x^{2},t)\\,,\\] (B.15) where we have used Eqs. (A.9), (A.1) and (A.13). The auxiliary function \\(\\sigma_{b}^{A}\\) is defined as \\[\\sigma_{b}^{A}(x,t)=\\Big{(}-\\frac{d}{dy}\\Big{)}^{b+3-e_{d}}\\,\\int_{0}^{\\infty} \\!du\\,u^{b}\\Big{[}h(y\\!+\\!u\\!+\\!x^{2}\\!-\\!t)+h(y\\!+\\!u\\!+\\!x^{2}\\!+\\!t)\\Big{]} \\bigg{|}_{y=0}\\,\\,.\\] (B.16)Using Eq. (B.15), we obtain the final expression for \\(\\eta_{1}^{A}\\), \\[\\eta_{1}^{A}=\\frac{4(d\\!-\\!2)N_{\\rm c}G}{\\pi^{2}\\Gamma(z_{d}\\!+\\!1)}\\,\\int_{0}^{ \\infty}\\!dp\\,\\tilde{K}_{z_{d}-1}(p)\\,\\Big{[}2S_{b}^{A}\\Big{(}\\frac{2Gpc_{1}}{ \\pi^{2}},\\frac{T}{k}\\Big{)}\\!-\\!S_{b}^{A}\\Big{(}\\frac{Gpc_{1}}{2\\pi^{2}},\\frac {T}{k}\\Big{)}\\Big{]}\\,,\\] (B.17) which can straightforwardly be evaluated numerically. Now we turn to the calculation of \\(\\eta_{2}^{A}\\), the subleading-growth part of \\(\\eta\\). Here, a careful treatment of the zeroth Matsubara frequency which contains the Nielsen-Olesen mode, is necessary. More specifically, we transform the modified moments \\(\\bar{h}_{j}^{A}\\) in Eq. 
(A.15) into a sum over Matsubara frequencies and insert a regulator function \\({\\cal P}\\big{(}\\frac{T}{k}\\big{)}\\) for the unstable mode, \\[\\bar{h}_{j}^{A,reg}(v)=\\sqrt{4\\pi}v\\sum_{n=-\\infty}^{\\infty}\\int_{0}^{\\infty} ds\\,\\widetilde{h}(s)s^{j}{\\rm e}^{-s\\widetilde{\\cal P}_{n}(v)}\\,.\\] (B.18) Here, we have introduced \\[\\widetilde{\\cal P}_{n}(v)=\\left\\{\\begin{array}{ll}(2\\pi nv)^{2}&(n\ eq 0) \\\\ {\\cal P}(v)&(n=0)\\end{array}\\right.\\,.\\] (B.19) The function \\({\\cal P}(v)\\) specifies the regularization of the Nielsen-Olesen mode and is defined in Eq. (D.3); the other modes with \\(n\ eq 0\\) remain unmodified. We rewrite \\(\\eta_{2}^{A}\\) by means of Eq. (B.11), \\[\\eta_{2}^{A}=-\\frac{16N_{\\rm c}G}{\\Gamma(z_{d}\\!+\\!1)}\\,\\sum_{m=1}^{\\infty} \\frac{1}{\\Gamma(2m)}\\int_{0}^{\\infty}\\!dp\\,\\tilde{K}_{z_{d}-1}(p)\\,\\bar{h}_{ 2m-e_{d}}^{A,reg}(\\frac{T}{k})\\Big{(}-2Gpc_{1}\\Big{)}^{m-1}\\,.\\] (B.20) Now it is convenient to introduce an auxiliary function \\(T^{A}(q)\\) which is defined as \\[T_{b}^{A}(q,v)= \\sum_{m=1}^{\\infty}\\frac{1}{\\Gamma(2m)}\\Big{(}-q\\Big{)}^{m-1} \\bar{h}_{2m-e_{d}}^{A,reg}(v)\\] \\[= \\frac{\\sqrt{\\pi}v}{\\Gamma(b+1)}\\sum_{n=-\\infty}^{\\infty}\\int_{0} ^{1}\\!dt\\int_{0}^{\\infty}\\!du\\,u^{b}\\int_{0}^{\\infty}\\!ds\\,\\tilde{h}(s)s^{b+3 -e_{d}}{\\rm e}^{-s(u+\\widetilde{\\cal P}_{n}(v))}\\Big{[}{\\rm e}^{-st\\sqrt{-q} }+{\\rm e}^{st\\sqrt{-q}}\\Big{]}\\] \\[= \\frac{\\sqrt{\\pi}v}{\\Gamma(b\\!+\\!1)}\\sum_{n=-\\infty}^{\\infty}\\, \\vartheta_{b}^{A}(\\widetilde{\\cal P}_{n}(v),q)\\,,\\] (B.21) Here, we have used Eqs. (B.18) and (A.13). Furthermore, we have defined the function \\(\\vartheta_{b}^{A}\\): \\[\\vartheta_{b}^{A}(x,q)=\\left.\\Big{(}-\\frac{d}{dy}\\Big{)}^{b+3-e_{d}}\\int_{0}^ {1}\\!dt\\int_{0}^{\\infty}\\!du\\,u^{b}\\Big{[}h(y\\!+\\!u\\!+\\!x\\!-\\!t\\sqrt{-q})+h(y\\! +\\!u\\!+\\!x\\!+\\!t\\sqrt{-q})\\Big{]}\\right|_{y=0}\\,.\\] (B.22)Applying Eq. (B.21) to Eq. (B.20), we obtain \\[\\eta_{2}^{A}=-\\frac{16N_{\\rm c}G}{\\Gamma(z_{d}{+}1)}\\,\\int_{0}^{\\infty}\\!dp\\, \\tilde{K}_{z_{d}-1}(p)\\,T_{b}^{A}(2Gpc_{1},\\tfrac{T}{k})\\,,\\] (B.23) which can straightforwardly be evaluated numerically. Finally, we have to calculate the contribution of the quarks to the gluon anomalous dimension. 
Performing analogous steps along the lines of the calculation of \\(\\eta_{1}^{A}\\), we obtain \\[\\eta^{q}=\\frac{8N_{\\rm c}G}{\\pi^{2}\\Gamma(z_{d}{+}1)}\\sum_{i=1}^{N_{\\rm f}} \\int_{0}^{\\infty}\\!dp\\,\\tilde{K}_{z_{d}-1}(p)\\,S_{b}^{\\psi}\\Big{(}\\frac{pGc_{ 1}}{2\\pi^{2}},\\frac{T}{k},\\frac{m_{i}}{k}\\Big{)}\\,.\\] (B.24) The auxiliary function \\(S_{b}^{\\psi}(q,\\tilde{m})\\) is defined as \\[S_{b}^{\\psi}(q,v,\\tilde{m})=\\frac{1}{\\Gamma(b{+}1)\\sqrt{4\\pi q}}\\sum_{n=- \\infty}^{\\infty}(-1)^{n}\\int_{0}^{\\infty}\\!dx\\,\\cos\\Big{(}\\frac{nx}{v}\\Big{)} \\int_{0}^{\\infty}\\!dt\\,{\\rm Li}_{1}\\Big{(}{\\rm e}^{-\\frac{t}{\\sqrt{q}}}\\Big{)} \\,\\sigma_{b}^{\\psi}(u,x^{2},t,\\tilde{m})\\,,\\] (B.25) where \\(\\sigma_{b}^{\\psi}(u,x,t,\\tilde{m})\\) is given by \\[\\sigma_{b}^{\\psi}(u,x,t,\\tilde{m}) = \\Big{(}-\\frac{d}{dy}\\Big{)}^{b+3-e_{d}}\\int_{0}^{\\infty}\\!du\\,u^ {b}\\Big{[}h_{s}^{\\psi}\\Big{(}\\sqrt{y{+}u{+}x{-}t},\\tilde{m}\\Big{)}+h_{s}^{\\psi }\\Big{(}\\sqrt{y{+}u{+}x{+}t},\\tilde{m}\\Big{)}\\] (B.26) \\[\\qquad\\qquad+h_{s}^{\\psi}\\Big{(}{-}\\sqrt{y{+}u{+}x{-}t},\\tilde{ m}\\Big{)}+h_{s}^{\\psi}\\Big{(}{-}\\sqrt{y{+}u{+}x{+}t},\\tilde{m}\\Big{)}\\Big{]} \\Big{|}_{y=0}\\,.\\] The regulator function occurs in the function \\(h_{s}^{\\psi}(\\sqrt{y},\\tilde{m})\\) which is related to \\(h^{\\psi}(y,\\tilde{m})\\) by \\[h_{s}^{\\psi}(\\sqrt{y},\\tilde{m})\\equiv h^{\\psi}(y,\\tilde{m})\\,.\\] (B.27) There is one essential difference between the resummation of \\(\\eta_{1/2}^{A}\\) and that of \\(\\eta^{q}\\): the regulator shape function \\(r(y)\\) can be expanded in powers of \\(y\\), while the corresponding function \\(r_{\\psi}(y)\\) for the quark fields should have a power series in \\(\\sqrt{y}\\) which is a consequence of chiral symmetry [67]; this explains the notation \\(h_{s}^{\\psi}(\\sqrt{y},\\tilde{m})\\). We stress that all integral representations in Eqs. (B.17), (B.23) and (B.24) are finite and can be evaluated numerically. For \\(d=4\\) and in the limit \\(T\\to 0\\), the results agree with those of Ref. [13]. The remainder of this section deals with a generalization to higher gauge groups. Since we do not have the explicit representation of the color factors \\(\\tau_{m}^{A/\\psi}\\) for gauge groups with \\(N_{\\rm c}\\geq 3\\) at hand, we have to scan the Cartan subalgebra for the extremal values of \\(\\tau_{m}^{A}\\) and \\(\\tau_{m}^{\\psi}\\). However, as discussed in App. C, these extremal values of \\(\\tau_{m}^{A}\\) and \\(\\tau_{m}^{\\psi}\\) can be calculated straightforwardly. Their insertion into Eq. (B.2) allows to display the anomalous dimension for \\(SU(3)\\) in terms of the already calculated formulas for \\(SU(2)\\): \\[\\eta_{3}^{\\rm SU(3)} = \\frac{2}{3}\\Big{[}\\eta_{1}^{A}+\\eta_{2}^{A}\\Big{]}_{N_{\\rm c} \\to 3}+\\frac{1}{3}\\Big{[}\\eta_{1}^{A}+\\eta_{2}^{A}\\Big{]}_{N_{\\rm c} \\to 3,c_{1}\\to c_{1}/4}+\\frac{2}{3}\\eta^{\\psi}\\Big{|}_{N_{\\rm c} \\to 3}\\,,\\] (B.28) \\[\\eta_{8}^{\\rm SU(3)} = \\Big{[}\\eta_{1}^{A}+\\eta_{2}^{A}\\Big{]}_{N_{\\rm c}\\to 3,c_{1} \\to 3c_{1}/4}+\\frac{2}{9}\\eta^{\\psi}\\Big{|}_{N_{\\rm c}\\to 3,c_{1}\\to c_{1}/3}+\\frac{4}{9}\\eta^{ \\psi}\\Big{|}_{N_{\\rm c}\\to 3,c_{1}\\to 4c_{1}/3}\\,.\\] (B.29)The notation here serves as a recipe for replacing \\(N_{\\rm c}\\) and \\(c_{1}\\), defined in Eq. (B.4), which appear on the right-hand sides of Eqs. (B.17), (B.23) and (B.24). Note that the replacement of \\(N_{\\rm c}\\) results also in a modification of \\(z_{d}\\), defined in Eq. 
(B.3). However, \\(c_{2}\\), which appears in the definition of \\(z_{d}\\), remains unchanged for all gauge groups and depends only on the dimension \\(d\\). ## Appendix C Color factors In the following, we discuss the color factors \\(\\tau_{i}^{A}\\) and \\(\\tau_{i}^{\\psi}\\) which carry the information of the underlying \\(SU(N_{\\rm c})\\) gauge group. First, we summarize the discussion of Ref. [18, 13, 33] for the \"gluonic \" factors \\(\\tau_{i}^{A}\\) appearing in the flow equation: Gauge group information enters the flow of the coupling via color traces over products of field strength tensors and gauge potentials. For our calculation, it suffices to consider a pseudo-abelian background field \\(\\bar{A}\\) which points into a constant color direction \\(n^{a}\\). Therefore, the color traces reduce to \\[n^{a_{1}}n^{a_{2}}\\ldots n^{a_{2i}}\\,{\\rm tr}_{\\rm c}[T^{(a_{1}}T^{a_{2}} \\ldots T^{a_{2i})}]\\,,\\] (C.1) where the parentheses at the color indices denote symmetrization. These factors are not independent of the direction of \\(n^{a}\\), but the left-hand side of the flow equation is, since it is a function of the \\(n^{a}\\)-independent quantity \\(\\frac{1}{4}F_{\\mu\ u}^{a}F_{\\mu\ u}^{a}\\). For this reason, we only need that part of the symmetric invariant tensor \\({\\rm tr}_{\\rm c}[T^{(a_{1}}\\ldots T^{a_{2i})}]\\) which is proportional to the trivial one, \\[{\\rm tr}_{\\rm c}[T^{(a_{1}}T^{a_{2}}\\ldots T^{a_{2i})}]=\\tau_{i}\\,\\delta_{(a_ {1}a_{2}}\\ldots\\delta_{a_{2i-1}a_{2i})}+\\ldots\\,.\\] (C.2) Here, we have neglected further nontrivial symmetric invariant tensors, since they do not contribute to the flow of \\({\\cal W}_{k}(\\theta)\\), but to that of other operators which do not belong to our truncation. For the gauge group SU(2), there are no further symmetric invariant tensors in Eq. (C.2), implying \\[\\tau_{i}^{{\\rm SU(2)}}=2,\\quad i=1,2,\\ldots\\;.\\] (C.3) However, for higher gauge groups, the above mentioned complications arise. Therefore, we do not evaluate the \\(\\tau_{i}^{A}\\)'s from Eq. (C.2) directly; instead, we use the fact that the color unit vector \\(n^{a}\\) can always be rotated into the Cartan sub-algebra. Here, we choose the two color vectors \\(n^{a}\\) which give the extremal values for the whole trace of Eq. (C.1). For SU(3), these extremal choices are given by vectors \\(n^{a}\\) pointing into the 3- and 8-direction in color space, respectively: \\[\\tau_{i,3}^{A,{\\rm SU(3)}}=2+\\frac{1}{4^{i-1}},\\quad\\tau_{i,8}^{A,{\\rm SU(3)}} =3\\,\\left(\\!\\frac{3}{4}\\!\\right)^{i-1}\\,.\\] (C.4) Finally, we turn to the color factors \\(\\tau_{j}^{\\psi}\\) of the quark sector. The above considerations also hold for the contributions of the flow equation which arise from the fermionic part of our truncation Eq. (15) and (17). Taking into account that quarks live in the fundamental representation and choosing a color vector \\(n^{a}\\) pointing into the 3- or 8-direction, we obtain \\[\\tau_{i,3}^{\\psi,\\mathrm{SU}\\left(3\\right)}=2\\,\\left(\\frac{1}{4}\\right)^{i},\\quad \\tau_{i,8}^{\\psi,\\mathrm{SU}\\left(3\\right)}=2\\,\\left(\\frac{1}{12}\\right)^{i}+ \\left(\\frac{1}{3}\\right)^{i}\\quad i=1,2,\\ldots\\.\\] (C.5) Again, all complications are absent for SU(2) and we find \\(\\tau_{i}^{\\psi,\\mathrm{SU}\\left(2\\right)}=\\tau_{i,3}^{\\psi,\\mathrm{SU}\\left(3 \\right)}\\). 
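The quoted color factors can be cross-checked numerically. The sketch below evaluates \\({\\rm tr}[(n^{a}T^{a})^{2i}]\\) for \\(n^{a}\\) in the 3- and 8-direction, in the fundamental representation and in the adjoint representation (whose Cartan eigenvalues are the root projections, i.e. differences of fundamental weights); identifying these traces with the factors in Eqs. (C.4) and (C.5) is an assumption made solely for the purpose of this check.

```python
import numpy as np
from itertools import permutations

# Cross-check of Eqs. (C.4) and (C.5) for SU(3): evaluate tr[(n.T)^(2i)]
# along the 3- and 8-direction of the Cartan subalgebra.

# fundamental weights (diagonal entries) of T^3 and T^8
T3 = np.array([0.5, -0.5, 0.0])
T8 = np.array([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))

def tr_fund(h, i):
    # trace of H^(2i) in the fundamental representation
    return np.sum(h ** (2 * i))

def tr_adj(h, i):
    # adjoint eigenvalues of a Cartan element H are the roots alpha(H),
    # i.e. differences of fundamental weights (the Cartan directions give zeros)
    diffs = np.array([h[a] - h[b] for a, b in permutations(range(len(h)), 2)])
    return np.sum(diffs ** (2 * i))

for i in range(1, 5):
    print(f"i={i}:",
          "adj(3):", tr_adj(T3, i), "vs", 2 + 4.0 ** (-(i - 1)),
          "| adj(8):", tr_adj(T8, i), "vs", 3 * (3.0 / 4.0) ** (i - 1),
          "| fund(3):", tr_fund(T3, i), "vs", 2 * 0.25 ** i,
          "| fund(8):", tr_fund(T8, i), "vs", 2 * (1.0 / 12.0) ** i + (1.0 / 3.0) ** i)
```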
The uncertainty introduced by the artificial \\(n^{a}\\) dependence of the color factors is the reason for the uncertainties of our results for the critical temperature and the fixed point values in three and four dimensions. ## Appendix D Regulator dependence from the unstable mode In this section, we discuss the regulator dependence of the critical temperature \\(T_{\\sigma}\\), arising from the details of projecting out the unstable Nielsen-Olesen mode. As already explained in the main text, removing the tachyonic part of the unstable mode corresponds to an exact operation on the space of admissible stable background fields. In the present context, it even suffices to remove only the thermal excitations of the tachyonic part of the mode, since the imaginary part arising from quantum fluctuations can easily be identified and dropped. In the following, we take a less strict viewpoint and allow for a smeared regularization of this mode in a whole class of regulators. Since the true physical result will not depend on this part of the regularization, we can identify the optimal (truncated) result with a stationary point in the space of regulators, using the \"principle of minimum sensitivity\", cf. [37]. In order to inhibit the thermal population of the Nielsen-Olesen mode \\(E^{\\mathrm{NO}}\\) at finite temperature, it suffices to regularize Figure 8: Dependence of the critical temperature \\(T_{c}\\) on the smeared regularization of the Nielsen-Olesen mode with \\(m\\) labeling the regulator. The left and the right panel show the results for \\(N_{\\mathrm{c}}\\) with \\(N_{\\mathrm{f}}=3\\) and \\(N_{\\mathrm{f}}=11\\) massless quark flavors, respectively. The limit \\(m\\to\\infty\\) can be identified with the stationary point, and thus optimal regulator, in the class of considered regulators. This justifies constructively the procedure used in the main text which was derived from general considerations. only the soft part (zero Matsubara frequency) of this mode as follows: \\[\\frac{\\mathrm{E}_{\\mathrm{soft}}^{\\mathrm{NO}}+R_{k}}{k^{2}}\\quad \\longrightarrow\\quad\\mathcal{P}(\\tfrac{T}{k})+\\frac{\\mathrm{E}_{\\mathrm{soft}}^{ \\mathrm{NO}}+R_{k}}{k^{2}}\\,.\\] (D.1) The function \\(\\mathcal{P}(\\tfrac{T}{k})\\) has to satisfy the following constraints: \\[\\lim_{T/k\\to 0}\\mathcal{P}(\\tfrac{T}{k})=0\\qquad\\text{and}\\qquad\\lim_{T/k \\to\\infty}\\mathcal{P}(\\tfrac{T}{k})\\to\\infty\\,.\\] (D.2) In the following, we choose \\[\\mathcal{P}(\\tfrac{T}{k})\\equiv\\mathcal{P}_{m}(\\tfrac{T}{k})=(\\tfrac{T}{k})^{m }\\qquad\\text{with}\\qquad m\\,>\\,0\\] (D.3) as a convenient example. As a regulator optimization condition, we demand that \\(T_{\\mathrm{cr}}\\) should be stationary with respect to a variation of the optimal regulator function. Calculating \\(T_{\\mathrm{cr}}\\) as a function of the parameter \\(m\\), the optimization condition for the regulator function translates into \\[\\frac{\\partial T_{cr}}{\\partial m}\\Bigg{|}_{m=\\bar{m}}\\stackrel{{! }}{{=}}0\\,.\\] (D.4) The solution \\(m=\\bar{m}\\) defines the desired optimized regulator. As an example, we show \\(T_{cr}(m)/T_{cr}(\\infty)\\) as a function of \\(m\\) for \\(N_{\\mathrm{c}}=3\\) with \\(N_{\\mathrm{f}}=3\\) and with \\(N_{\\mathrm{f}}=11\\) quark flavors in Fig. 8. We find that the optimized regulator is given by \\(m\\to\\infty\\) for all \\(N_{\\mathrm{c}}\\) and \\(N_{\\mathrm{f}}\\). 
This represents an independent and constructive justification of the regularization used in the main text, corresponding to the choice \\(m\\to\\infty\\). ## References * [1] F. Karsch and E. Laermann, arXiv:hep-lat/0305025; D. H. Rischke, Prog. Part. Nucl. Phys. **52**, 197 (2004). P. Braun-Munzinger, D. Magestro, K. Redlich and J. Stachel, Phys. Lett. B **518** (2001) 41; P. Braun-Munzinger, K. Redlich and J. Stachel, arXiv:nucl-th/0304013; P. Braun-Munzinger, J. Stachel and C. Wetterich, Phys. Lett. B **596** (2004) 61; U. W. Heinz, AIP Conf. Proc. **739**, 163 (2005). * [2] R. D. Pisarski and F. Wilczek, Phys. Rev. D **29**, 338 (1984). * [3] E. Shuryak, Prog. Part. Nucl. Phys. **53**, 273 (2004); M. Gyulassy and L. McLerran, Nucl. Phys. A **750**, 30 (2005). * [4] E. V. Shuryak, Sov. Phys. JETP **47**, 212 (1978) [Zh. Eksp. Teor. Fiz. **74**, 408 (1978)]; S. A. Chin, Phys. Lett. B **78**, 552 (1978). * [5] J. I. K apusta, Nucl. Phys. B **148**, 461 (1979). * [6] P. Arnold and C. X. Zhai, Phys. Rev. D **50**, 7603 (1994); C. x. Zhai and B. M. K astening, Phys. Rev. D **52**, 7232 (1995). * [7] A. D. Linde, Phys. Lett. B **96**, 289 (1980); D. J. Gross, R. D. Pisarski and L. G. Yaffe, Rev. Mod. Phys. **53**, 43 (1981). * [8] E. Braaten and R. D. Pisarski, Phys. Rev. D **45**, 1827 (1992). E. Braaten and A. Nieto, Phys. Rev. D **53**, 3421 (1996); K. Kajantie, M. Laine, K. Rummukainen and M. E. Shaposhnikov, Nucl. Phys. B **458**, 90 (1996); Nucl. Phys. B **503**, 357 (1997); K. Kajantie, M. Laine, K. Rummukainen and Y. Schroder, Phys. Rev. D **67**, 105008 (2003);* [9] M. Laine, arXiv:hep-ph/0301011; C. P. Korthals Altes, arXiv:hep-ph/0308229; A. Hart, M. Laine and O. Philipsen, Nucl. Phys. B **586**, 443 (2000); P. Giovannangeli and C. P. Korthals Altes, Nucl. Phys. B **721**, 25 (2005); M. Laine and Y. Schroder, JHEP **0503**, 067 (2005). * [10] F. Wegner, A. Houghton, Phys. Rev. **A 8** (1973) 401; K. G. Wilson and J. B. Kogut, Phys. Rept. **12** (1974) 75; J. Polchinski, Nucl. Phys. **B231** (1984) 269. * [11] C. Wetterich, Phys. Lett. B **301** (1993) 90. * [12] M. Bonini, M. D'Attanasio and G. Marchesini, Nucl. Phys. B **409** (1993) 441; U. Ellwanger, Z. Phys. C **62** (1994) 503; T. R. Morris, Int. J. Mod. Phys. A **9** (1994) 2411. * [13] H. Gies, Phys. Rev. D **66**, 025006 (2002). * [14] J. Braun and H. Gies, arXiv:hep-ph/0512085. * [15] J. M. Pawlowski, arXiv:hep-th/0512261. * [16] L. F. Abbott, Nucl. Phys. B **185**, 189 (1981). * [17] M. Reuter and C. Wetterich, Nucl. Phys. B **417**, 181 (1994); * [18] M. Reuter and C. Wetterich, Phys. Rev. D **56**, 7893 (1997). * [19] F. Freire, D. F. Litim and J. M. Pawlowski, Phys. Lett. B **495**, 256 (2000). * [20] U. Ellwanger, Phys. Lett. B **335** (1994) 364; M. Bonini, M. D'Attanasio and G. Marchesini, Nucl. Phys. B **437**, 163 (1995). * [21] J. M. Pawlowski, Int. J. Mod. Phys. A **16**, 2105 (2001). * [22] D. F. Litim and J. M. Pawlowski, Phys. Rev. D **66**, 025030 (2002). * [23] T. R. Morris, JHEP **0012**, 012 (2000); S. Arnone, T. R. Morris and O. J. Rosten, arXiv:hep-th/0507154. O. J. Rosten, arXiv:hep-th/0602229. * [24] M. D'Attanasio and M. Pietroni, Nucl. Phys. B **498**, 443 (1997); D. F. Litim and J. M. Pawlowski, arXiv:hep-th/9901063. * [25] D. F. Litim and J. M. Pawlowski, Phys. Lett. B **546**, 279 (2002). * [26] H. Gies and J. Jaeckel, Phys. Rev. Lett. **93**, 110405 (2004). * [27] U. Ellwanger and C. Wetterich, Nucl. Phys. B **423**, 137 (1994). * [28] K. Aoki, Int. J. Mod. Phys. B **14**, 1249 (2000); J. 
Berges, N. Tetradis and C. Wetterich, Phys. Rept. **363**, 223 (2002); J. Polonyi, Central Eur. J. Phys. **1**, 1 (2003). * [29] S. B. Liao, Phys. Rev. D **53**, 2020 (1996); R. Floreanini and R. Percacci, Phys. Lett. B **356**, 205 (1995). * [30] B. J. Schaefer and H. J. Pirner, Nucl. Phys. A **660**, 439 (1999); B. J. Schaefer and J. Wambach, Nucl. Phys. A **757**, 479 (2005); J. Braun, B. Klein and H. J. Pirner, Phys. Rev. D **71**, 014032 (2005); Phys. Rev. D **72**, 034017 (2005); J. Braun, B. Klein, H. J. Pirner and A. H. Rezaeian, arXiv:hep-ph/0512274. * [31] U. Ellwanger, M. Hirsch and A. Weber, Z. Phys. C **69**, 687 (1996). * [32] D. F. Litim and J. M. Pawlowski, Phys. Lett. B **435**, 181 (1998). * [33] H. Gies, Phys. Rev. D **68**, 085015 (2003). * [34] G. V. Dunne and T. M. Hall, Phys. Rev. D **60** (1999) 065002; G. V. Dunne and C. Schubert, JHEP **0206**, 042 (2002). * [35] N. K. Nielsen and P. Olesen, Nucl. Phys. B **144**, 376 (1978). * [36] W. Dittrich and V. Schanbacher, Phys. Lett. B **100**, 415 (1981); B. Muller and J. Rafelski, Phys. Lett. B **101**, 111 (1981); A. O. Starinets, A. S. Vshivtsev and V. C. Zhukovsky, Phys. Lett. B **322**, 403 (1994); P. N. Meisinger and M. C. Ogilvie, Phys. Lett. B **407**, 297 (1997); H. Gies, Ph.D. Thesis, Tubingen U. (1999). * [37] P. M. Stevenson, Phys. Rev. D **23** (1981) 2916. * [38] S. Bethke, Nucl. Phys. Proc. Suppl. **135** (2004) 345. * [39] L. von Smekal, R. Alkofer and A. Hauck, Phys. Rev. Lett. **79**, 3591 (1997); D. Atkinson and J. C. Bloch, Mod. Phys. Lett. A **13**, 1055 (1998); C. Lerche and L. von Smekal, Phys. Rev. D **65**, 125006 (2002); C. S. Fischer and R. Alkofer, Phys. Lett. **B536**, 177 (2002); J. M. Pawlowski, D. F. Litim, S. Nedelko and L. von Smekal, Phys. Rev. Lett. **93**, 152002 (2004); C. S. Fischer and H. Gies, JHEP **0410**, 048 (2004). * [40] R. Alkofer, C. S. Fischer and F. J. Llanes-Estrada, Phys. Lett. B **611**, 279 (2005). * [41] T. Kugo and I. Ojima, Prog. Theor. Phys. Suppl. **66** (1979) 1; V. N. Gribov, Nucl. Phys. B **139**, 1 (1978); D. Zwanziger, Phys. Rev. D **69** (2004) 016002. * [42] J. C. Taylor, Nucl. Phys. B **33** (1971) 436. * [43] Y. L. Dokshitzer, A. Lucenti, G. Marchesini and G. P. Salam, JHEP **9805**, 003 (1998); Y. L. Dokshitzer, arXiv:hep-ph/9812252. * [44] E. Eichten _et al._,Phys. Rev. Lett. **34**, 369 (1975) [Erratum-ibid. **36**, 1276 (1975)]; T. Barnes, F. E. Close and S. Monaghan, Nucl. Phys. B **198**, 380 (1982); S. Godfrey and N. Isgur, Phys. Rev. D **32**, 189 (1985); A. C. Mattingly and P. M. Stevenson, Phys. Rev. Lett. **69**, 1320 (1992). * [45] G. Grunberg, Phys. Rev. D **65**, 021701 (2002); E. Gardi and G. Grunberg, JHEP **9903**, 024 (1999). * [46] D. V. Shirkov and I. L. Solovtsov, Phys. Rev. Lett. **79**, 1209 (1997); Theor. Math. Phys. **120**, 1220 (1999) [Teor. Mat. Fiz. **120**, 482 (1999)]; N. G. Stefanis, W. Schroers and H. C. Kim, Eur. Phys. J. C **18**, 137 (2000); D. V. Shirkov, Eur. Phys. J. C **22**, 331 (2001). * [47] S. J. Brodsky, S. Menke, C. Merino and J. Rathsman, Phys. Rev. D **67** (2003) 055008; S. J. Brodsky, E. Gardi, G. Grunberg and J. Rathsman, Phys. Rev. D **63** (2001) 094017. * [48] A. Deur, V. Burkert, J. P. Chen and W. Korsch, arXiv:hep-ph/0509113. * [49] A. Maas, J. Wambach and R. Alkofer, Eur. Phys. J. C **42**, 93 (2005); A. Maas, J. Wambach, B. Gruter and R. Alkofer, Eur. Phys. J. C **37**, 335 (2004). * [50] H. Gies and J. Jaeckel, arXiv:hep-ph/0507171. * [51] T. Banks and A. Zaks, Nucl. Phys. 
B **196**, 189 (1982); V. A. Miransky and K. Yamawaki, Phys. Rev. D **55**, 5051 (1997); T. Appelquist, J. Terning and L. C. R. Wijewardhana, Phys. Rev. Lett. **77**, 1214 (1996). * [52] H. Gies, J. Jaeckel and C. Wetterich, Phys. Rev. D **69** (2004) 105008. * [53] H. Gies and C. Wetterich, Phys. Rev. D **69**, 025001 (2004). * [54] K. I. Aoki, K. Morikawa, J. I. Sumi, H. Terao and M. Tomoyose, Phys. Rev. D **61**, 045008 (2000); K. I. Aoki, K. Takagi, H. Terao and M. Tomoyose, Prog. Theor. Phys. **103**, 81 5 (2000). * [55] V. A. Miransky, Nuovo Cim. A **90**, 149 (1985); C. D. Roberts and S. M. Schmidt, Prog. Part. Nucl. Phys. **45**, S1 (2000); R. Alkofer and L. von Smekal, Phys. Rept. **353**, 281 (2001); [arXiv:hep-ph/0007355]. * [56] H. Gies and C. Wetterich, Phys. Rev. D **65** (2002) 065001; J. Jaeckel, hep-ph/0309090. * [57] F. Karsch, E. Laermann and A. Peikert, Nucl. Phys. B **605** (2001) 579. * [58] D. F. Litim, Phys. Lett. B **486** (2000) 92; Phys. Rev. D **64** (2001) 105007. * [59] P. J. Silva and O. Oliveira, arXiv:hep-lat/0511043. * [60] E. M. Ilgenfritz, M. Muller-Preussker, A. Sternbeck and A. Schiller, arXiv:hep-lat/0601027. * [61] A. Cucchieri, Phys. Lett. B **422** (1998) 233; K. Langfeld, H. Reinhardt and J. Gattnar, Nucl. Phys. B **621** (2002) 131; J. Gattnar, K. Langfeld and H. Reinhardt, arXiv:hep-lat/0403011. * [62] F. D. R. Bonnet, P. O. Bowman, D. B. Leinweber, A. G. Williams and J. M. Zanotti, Phys. Rev. D **64** (2001) 034501; P. O. Bowman, U. M. Heller, D. B. Leinweber, M. B. Parappilly and A. G. Williams, arXiv:hep-lat/0402032. * [63] I. L. Bogolubsky, G. Burgio, M. Muller-Preussker and V. K. Mitrjushkin, arXiv:hep-lat/0511056. * [64] Y. Iwasaki, K. Kanaya, S. Sakai and T. Yoshie, Phys. Rev. Lett. **69** (1992) 21; Y. Iwasaki, K. Kanaya, S. Kaya, S. Sakai and T. Yoshie, Phys. Rev. D **69**, 014507 (2004). * [65] D. F. Litim, JHEP **0507** (2005) 005. * [66] I.S. Gradshteyn and I.M. Ryzhik, \"Table of integrals, series, and products\", 6th ed., Jeffrey, Alan (ed.), Academic Press, San Diego (2000). * [67] D. U. Jungnickel and C. Wetterich, Phys. Rev. D **53** (1996) 5142.
We analyze the approach to chiral symmetry breaking in QCD at finite temperature, using the functional renormalization group. We compute the running gauge coupling in QCD for all temperatures and scales within a simple truncated renormalization flow. At finite temperature, the coupling is governed by a fixed point of the 3-dimensional theory for scales smaller than the corresponding temperature. Chiral symmetry breaking is approached if the running coupling drives the quark sector to criticality. We quantitatively determine the phase boundary in the plane of temperature and number of flavors and find good agreement with lattice results. As a generic and testable prediction, we observe that our underlying IR fixed-point scenario leaves its imprint in the shape of the phase boundary near the critical flavor number: here, the scaling of the critical temperature is determined by the zero-temperature IR critical exponent of the running coupling. **Chiral phase boundary of QCD at finite temperature** Jens Braun and Holger Gies _Institut fur Theoretische Physik, Philosophenweg 16 and 19, 69120 Heidelberg, Germany_ _E-mail: [email protected], [email protected]_
# Universe Evolution in a \\(5D\\) Ricci-Flat Cosmology CHENGWU ZHANG, HONGYA LIU1 and LIXIN XU Department of Physics, Dalian University of Technology, Dalian 116024, P.R. China PAUL S. WESSON Department of Physics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada Footnote 1: [email protected] ## 1 Introduction Observations of Cosmic Microwave Background (CMB) anisotropies indicate that the universe is flat and the total energy density is very close to the critical one, with \\(\\Omega_{total}\\simeq 1\\)[1]. Meanwhile, observations of high-redshift type Ia supernovae[2] reveal the accelerating expansion of our universe, and surveys of clusters of galaxies show that the density of matter is very much less than the critical density[3]. These three tests nicely complement each other and indicate that an exotic component with negative pressure, dubbed dark energy, dominates the present universe. Various dark energy models have been proposed; among them, the most promising ones are probably those with a scalar field, such as quintessence[4], phantom[5], K-essence[6] and so on. For this kind of model, one can design many kinds of potentials[7] and then study the EOS of the dark energy. Another way is to use a parameterization of the EOS to fit the observational data, and then to reconstruct the potential and/or the evolution of the universe[8]. The latter has the advantage that it does not depend on a specific model of the dark energy and, therefore, is also called a model-independent method[16]. Both the classical Kaluza-Klein theories and the modern string/brane theories require the existence of extra dimensions. If the universe has more than four dimensions, general relativity should be extended from \\(4D\\) to higher dimensions. One such extension is the \\(5D\\) Space-Time-Matter (STM) theory[9, 10], in which our universe is a \\(4D\\) hypersurface floating in a \\(5D\\) Ricci-flat manifold. This theory is supported by Campbell's theorem, which states that any analytical solution of the \\(ND\\) Einstein equations can be embedded in a \\((N+1)D\\) Ricci-flat manifold[11]. A class of cosmological solutions of the STM theory was given by Liu, Mashhoon and Wesson[12, 13]. It was shown that dark energy models, similar to the 4D quintessence and phantom ones, can also be constructed in this \\(5D\\) cosmological solution, in which the scalar field is induced from the \\(5D\\) vacuum[14, 15]. The purpose of this paper is to use a model-independent method to study the EOS of the dark energy and the evolution of the \\(5D\\) universe. Various parameterizations of the EOS of dark energy have been presented and investigated[16, 17, 18, 19, 20]. In this paper we will study the one first presented by Wetterich[21], in which a bending parameter \\(b\\) describes the deviation of the EOS from a constant \\(w_{0}\\). The paper is organized as follows. In Section 2, we introduce the \\(5D\\) Ricci-flat cosmological solution and derive the densities for the three major components of the universe. In Section 3, we reconstruct the evolution of the model with different values of the bending parameter \\(b\\). Section 4 is a short discussion.
## 2 Density Parameters in the \\(5D\\) Model The \\(5D\\) cosmological solutions read[12] \\[dS^{2}=B^{2}dt^{2}-A^{2}\\left(\\frac{dr^{2}}{1-kr^{2}}+r^{2}d\\Omega^{2}\\right)-dy^{2}, \\tag{1}\\] with \\(d\\Omega^{2}\\equiv\\left(d\\theta^{2}+\\sin^{2}\\theta d\\phi^{2}\\right)\\) and \\[A^{2}=\\left(\\mu^{2}+k\\right)y^{2}+2\\nu y+\\frac{\\nu^{2}+K}{\\mu^{2}+k},\\qquad B=\\frac{1}{\\mu}\\frac{\\partial A}{\\partial t}\\equiv\\frac{\\dot{A}}{\\mu}, \\tag{2}\\] where \\(\\mu=\\mu(t)\\) and \\(\\nu=\\nu(t)\\) are two arbitrary functions of \\(t\\), \\(k\\) is the \\(3D\\) curvature index (\\(k=\\pm 1,0\\)), and \\(K\\) is a constant. This solution satisfies the \\(5D\\) vacuum equations \\(R_{AB}=0\\). So we have \\[I_{1}\\equiv R=0,\\quad I_{2}\\equiv R^{AB}R_{AB}=0,\\quad I_{3}=R_{ABCD}R^{ABCD}=\\frac{72K^{2}}{A^{8}}, \\tag{3}\\] which shows that \\(K\\) determines the curvature of the \\(5D\\) manifold. The Hubble and deceleration parameters are[22] \\[H\\equiv\\frac{\\dot{A}}{AB}=\\frac{\\mu}{A}, \\tag{4}\\] \\[q\\left(t,y\\right)\\equiv-A\\frac{d^{2}A}{d\\tau^{2}}\\bigg/\\left(\\frac{dA}{d\\tau}\\right)^{2}=-\\frac{A\\dot{\\mu}}{\\mu\\dot{A}}. \\tag{5}\\] Using the \\(4D\\) part of the \\(5D\\) metric (1) to calculate the \\(4D\\) Einstein tensor, one obtains \\[{}^{\\left(4\\right)}G_{0}^{0}=\\frac{3\\left(\\mu^{2}+k\\right)}{A^{2}},\\qquad{}^{\\left(4\\right)}G_{1}^{1}={}^{\\left(4\\right)}G_{2}^{2}={}^{\\left(4\\right)}G_{3}^{3}=\\frac{2\\mu\\dot{\\mu}}{A\\dot{A}}+\\frac{\\mu^{2}+k}{A^{2}}. \\tag{6}\\] So the \\(4D\\) induced energy-momentum tensor can be defined as \\(T^{\\alpha\\beta}={}^{\\left(4\\right)}G^{\\alpha\\beta}\\). In this paper we consider the case where the \\(4D\\) induced matter \\(T^{\\alpha\\beta}\\) is composed of three components: matter \\(\\rho_{m}\\), radiation \\(\\rho_{r}\\) and dark energy \\(\\rho_{x}\\), which are minimally coupled to each other. So we have \\[\\frac{3\\left(\\mu^{2}+k\\right)}{A^{2}}=\\rho_{m}+\\rho_{r}+\\rho_{x},\\qquad\\frac{2\\mu\\dot{\\mu}}{A\\dot{A}}+\\frac{\\mu^{2}+k}{A^{2}}=-p_{m}-p_{r}-p_{x}, \\tag{7}\\] with \\[\\rho_{m}=\\rho_{m0}A_{0}^{3}A^{-3},\\quad p_{m}=0,\\quad\\rho_{r}=\\rho_{r0}A_{0}^{4}A^{-4}=3p_{r}, \\tag{8}\\] \\[p_{x}=w_{x}\\rho_{x}. \\tag{9}\\] From Eqs. (7) - (9) and for \\(k=0\\), we obtain the EOS of the dark energy \\[w_{x}=\\frac{p_{x}}{\\rho_{x}}=-\\frac{2\\mu\\dot{\\mu}/(A\\dot{A})+\\mu^{2}/A^{2}+\\rho_{r0}A_{0}^{4}A^{-4}/3}{3\\mu^{2}/A^{2}-\\rho_{m0}A_{0}^{3}A^{-3}-\\rho_{r0}A_{0}^{4}A^{-4}}, \\tag{10}\\] and the dimensionless density parameters \\[\\Omega_{m}=\\frac{\\rho_{m}}{\\rho_{m}+\\rho_{r}+\\rho_{x}}=\\frac{\\rho_{m0}A_{0}^{3}}{3\\mu^{2}A}, \\tag{11}\\] \\[\\Omega_{r}=\\frac{\\rho_{r}}{\\rho_{m}+\\rho_{r}+\\rho_{x}}=\\frac{\\rho_{r0}A_{0}^{4}}{3\\mu^{2}A^{2}}, \\tag{12}\\] \\[\\Omega_{x}=1-\\Omega_{m}-\\Omega_{r}, \\tag{13}\\] where \\(\\rho_{m0}\\) and \\(\\rho_{r0}\\) are the current values of the matter and radiation densities, respectively. Now we use the relation between the scale factor and the redshift, \\[\\frac{A_{0}}{A}=1+z, \\tag{14}\\] and define \\(\\mu_{0}^{2}/\\mu^{2}=f(z)\\) (with \\(f(0)\\equiv 1\\)), and then we find that equations (10)-(13) and (5) can be expressed in terms of the redshift \\(z\\) as \\[w_{x}=-\\frac{1+\\Omega_{r}+(1+z)\\,d\\ln f(z)/dz}{3(1-\\Omega_{m}-\\Omega_{r})}, \\tag{15}\\] \\[\\Omega_{m}=\\Omega_{m_{0}}(1+z)f(z), \\tag{16}\\] \\[\\Omega_{r}=\\Omega_{r_{0}}(1+z)^{2}f(z), \\tag{17}\\] \\[\\Omega_{x}=1-\\Omega_{m}-\\Omega_{r}, \\tag{18}\\] \\[q=-\\frac{1+z}{2}\\,d\\ln f(z)/dz. \\tag{19}\\] Now we conclude that if the function \\(f(z)\\) is given, the evolutions of all the cosmic observable parameters in (15) - (19) could be determined uniquely.
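As a simple illustration of how Eqs. (15)-(19) are used, the sketch below evaluates the density parameters, the EOS and the deceleration parameter for a given \\(f(z)\\). For definiteness it employs the constant-EOS limit of \\(f(z)\\) derived in the next section (Eq. (22)); the present-day values \\(\\Omega_{m0}\\), \\(\\Omega_{r0}\\) and \\(w_{0}\\) used here are placeholders for illustration only.

```python
import numpy as np

# Sketch of Eqs. (15)-(19): once f(z) = mu_0^2/mu^2 is specified, Omega_m,
# Omega_r, Omega_x, w_x and q follow. For illustration we use the b -> 0
# limit of f(z), Eq. (22) of the next section (constant EOS w_0).
Om0, Or0, w0 = 0.27, 8.0e-5, -1.1      # placeholder present-day values

def f(z):
    # Eq. (22): f(z,0) for a constant dark-energy EOS w_0
    Ox0 = 1.0 - Om0 - Or0
    return 1.0 / ((1.0 + z) * ((1.0 + z)**(3.0 * w0) * Ox0 + Om0 + (1.0 + z) * Or0))

def dlnf_dz(z, eps=1e-5):
    # numerical derivative of ln f(z)
    return (np.log(f(z + eps)) - np.log(f(z - eps))) / (2.0 * eps)

def observables(z):
    Om = Om0 * (1.0 + z) * f(z)            # Eq. (16)
    Or = Or0 * (1.0 + z)**2 * f(z)         # Eq. (17)
    Ox = 1.0 - Om - Or                     # Eq. (18)
    wx = -(1.0 + Or + (1.0 + z) * dlnf_dz(z)) / (3.0 * Ox)   # Eq. (15)
    q = -0.5 * (1.0 + z) * dlnf_dz(z)      # Eq. (19)
    return Om, Or, Ox, wx, q

for z in [0.0, 0.5, 1.0, 3.0]:
    Om, Or, Ox, wx, q = observables(z)
    print(f"z={z}: Om={Om:.3f} Or={Or:.2e} Ox={Ox:.3f} w_x={wx:.3f} q={q:.3f}")
```

At \\(z=0\\) this reproduces \\(w_{x}=w_{0}\\) and \\(f(0)=1\\), which serves as a consistency check of the formulas.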
## 3 The Function \\(f(z)\\) and the Evolutions of Cosmic Parameters The parameterization of the EOS of the dark energy given by Wetterich has been extensively studied[24, 25]. It is of the form[21] \\[w_{x}(z,b)=\\frac{w_{0}}{1+b\\ln(1+z)}, \\tag{20}\\] where \\(w_{x}(z,b)\\) is the EOS parameter with its current value \\(w_{0}\\), and \\(b\\) is a bending parameter describing the deviation of \\(w_{x}\\) from \\(w_{0}\\) as \\(z\\) increases. Setting \\(w_{0}=-1.1\\), we plot the function (20) in Fig. 1, where \\(b\\) takes the values 0, 1/2, 1, 2 and 4, respectively. From this figure we see that \\(w_{x}\\) varies with \\(z\\) sensitively at low redshift, while at high redshift it is nearly constant. By properly choosing the two parameters \\(w_{0}\\) and \\(b\\), the transition from \\(w_{x}<-1\\) to \\(w_{x}>-1\\) and the last scattering point at \\(z\\approx 1100\\) can be adjusted easily to fit cosmic observations. Figure 1: EOS \\(w_{x}\\) of the dark energy as a function of the redshift \\(z\\), with its current value \\(w_{0}=-1.1\\) and the bending parameter \\(b=0\\), 1/2, 1, 2, 4, respectively. Consider equation (15) now. With use of (16), (17) and (20), we find that (15) is actually a nonlinear first-order differential equation. This equation can be integrated analytically, giving the solution \\[f(z,b) = (1+z)^{-1}[(1+b\\ln(1+z))^{3w_{0}/b}+\\Omega_{m0}-(1+b\\ln(1+z))^{3w_{0}/b}\\Omega_{m0}+ \\tag{21}\\] \\[+\\Omega_{r0}+z\\Omega_{r0}-(1+b\\ln(1+z))^{3w_{0}/b}\\Omega_{r0}]^{-1}.\\] For \\(b=0\\) we have \\(w_{x}(z,0)=w_{0}\\). For \\(b\\to 0\\), we find \\(f(z,b)\\to f(z,0)\\) with \\[f(z,0) = (1+z)^{-1}[(1+z)^{3w_{0}}+\\Omega_{m0}-(1+z)^{3w_{0}}\\Omega_{m0}+\\Omega_{r0}+z\\Omega_{r0}- \\tag{22}\\] \\[-(1+z)^{3w_{0}}\\Omega_{r0}]^{-1}.\\] Furthermore, for \\(z=0\\) we have \\(f(0,0)=1\\), as it should be by its definition. The function \\(f(z,b)\\) shown in (21), including the bending parameter \\(b\\) given in (20), could, in principle, be determined by observational data. Recall that \\(f\\equiv\\mu_{0}^{2}/\\mu^{2}\\), so we arrive at the conclusion that the arbitrary function \\(\\mu\\) contained in the \\(5D\\) solution (1) - (2) could be determined, in terms of the redshift \\(z\\), by observational data. As for the other arbitrary function \\(\\nu(z)\\), it can be expressed via \\(\\mu(z)\\) and \\(A(z)\\) by solving (2) itself. Therefore, \\(\\nu(z)\\) is no longer arbitrary but is also determined by observational data. In this way, the whole \\(5D\\) solution could be determined in principle. Let us now return to (16) - (19). If \\(f(z,b)\\) is known, all the three densities \\(\\Omega_{m}\\), \\(\\Omega_{r}\\), \\(\\Omega_{x}\\) and the deceleration parameter \\(q\\) are also known. The evolutions of these three densities and the deceleration parameter are plotted in Figs. 2-6. From Fig. 2 to Fig. 4 we see that as the redshift \\(z\\) increases, both \\(\\Omega_{m}\\) and \\(\\Omega_{r}\\) increase while \\(\\Omega_{x}\\) decreases, and \\(\\Omega_{r}\\) increases almost linearly at low redshift. The three densities depend sensitively on the bending parameter \\(b\\), and this dependence becomes much stronger at high redshift. Clearly, there is a transition at \\(z=z_{e}\\) at which \\(\\Omega_{m}=\\Omega_{x}\\). When \\(z<z_{e}\\), dark energy governs the universe; when \\(z>z_{e}\\), matter becomes the dominant part of the universe. For several different values of \\(b\\) we plot the transition \\(z_{e}\\) in Fig. 5.
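The transition redshift \\(z_{e}\\) defined by \\(\\Omega_{m}=\\Omega_{x}\\) can be located numerically from Eqs. (16)-(18) and (21), as in the following sketch; the present-day parameter values are again placeholders chosen for illustration, not fitted values.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: locate the transition redshift z_e where Omega_m = Omega_x,
# using f(z,b) from Eq. (21). Om0, Or0, w0 are illustrative placeholders.
Om0, Or0, w0 = 0.27, 8.0e-5, -1.1

def f(z, b):
    # Eq. (21); the b -> 0 limit reduces to Eq. (22)
    x = (1.0 + b * np.log(1.0 + z)) ** (3.0 * w0 / b) if b > 0 else (1.0 + z) ** (3.0 * w0)
    Ox0 = 1.0 - Om0 - Or0
    return 1.0 / ((1.0 + z) * (x * Ox0 + Om0 + (1.0 + z) * Or0))

def Omega_m(z, b):
    return Om0 * (1.0 + z) * f(z, b)          # Eq. (16)

def Omega_x(z, b):
    Or = Or0 * (1.0 + z) ** 2 * f(z, b)       # Eq. (17)
    return 1.0 - Omega_m(z, b) - Or           # Eq. (18)

for b in [0.0, 0.5, 1.0, 2.0, 4.0]:
    z_e = brentq(lambda z: Omega_m(z, b) - Omega_x(z, b), 0.0, 10.0)
    print(f"b = {b}:  z_e = {z_e:.3f}")
```

With these placeholder values, \\(z_{e}\\) increases with \\(b\\), in line with the trend described for Fig. 5.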
If we traced back much earlier (around \\(z=5000\\)), there must be another transition \\(z=z_{r}\\) at which \\(\\Omega_{r}=\\)\\(\\Omega_{m}\\) and before which (at \\(z>z_{r}\\)) the density \\(\\Omega_{r}\\) dominates the universe. From Fig. 6 we can find how the bending parameter \\(b\\) affect the deceleration parameter \\(q\\). The transition from deceleration to acceleration can easily be seen from this figure. ## 4 Conclusions The \\(5D\\) Ricci-flat cosmological solution (1) - (2) contains two arbitrary functions \\(\\mu(t)\\) and \\(\ u(t)\\), and usually it is not easy to be determined for a real universe model. In this paper we have considered the case where the universe is composed of three major components: matter, radiation, and dark energy, and we have supposed the equation of state of dark energy is of Wetterich's parameterization form. Then we show that with use of the relation between the scale factor \\(A\\) and the redshift \\(z\\), \\(A=A_{0}/(1+z)\\), one can easily change the arbitrary function \\(\\mu(t)\\) to another arbitrary function \\(f(z)\\). Furthermore, we show that this \\(f(z)\\) could be integrated out analytically. Thus, if the current values of the three density parameters \\(\\Omega_{m0}\\), \\(\\Omega_{r0}\\), \\(\\Omega_{x0}\\), the EOS \\(w_{0}\\), and the bending parameter \\(b\\) contained in the EOS are all known, this \\(f(z)\\) could be determined uniquely, and then both \\(\\mu(z)\\) and \\(\ u(z)\\) could be determined too. In this way, the whole \\(5D\\) solution could be constructed and this \\(5D\\) solution could, in principle, provide with us a global cosmological model to simulate our real universe. We have also studied the evolutions of the mass density \\(\\Omega_{m}\\), the radiation density \\(\\Omega_{r}\\), the dark energy density \\(\\Omega_{x}\\), and the deceleration parameter \\(q\\), and we find that they all are sensitively dependent on the values of the bending parameter \\(b\\). Thus we expect that more accurate observational constraints, Figure 5: This figure shows how the bending parameter \\(b\\) affect the transition point \\(z_{e}\\). The bigger the bending parameter is, the earlier the transition from mater dominated to dark energy dominated happend. such as that on the last-scattering surface and those about the transition points from \\(\\Omega_{r}\\) dominated to \\(\\Omega_{m}\\) dominated, from \\(\\Omega_{m}\\) dominated to \\(\\Omega_{x}\\) dominated, and from decelerating expansion to accelerating expansion of \\(q\\) could help greatly to determine the bending parameter \\(b\\) and then to determine the global evolution of the universe. ## Acknowledgments This work was supported by NSF (10573003), NBRP (2003CB716300) of P. R. China and NSERC of Canada. ## References * [1] D.N. Spergel, et. al, _Astrophys. J._ Supp. **148** 175(2003), astro-ph/0302209. * [2] A. G. Riessel, et. al, _Astron. J._**116** 1009(1998), astro-ph/9805201. * [3] A. C. Pope, et. al, _Astrophys.J._**607** 655(2004),astro-ph/0401249. * [4] I. Zlatev, L. Wang, and P.J. Steinhardt, _Phys. Rev. Lett._**82** 896(1999), astro-ph/9807002. * [5] R.R. Caldwell, M. Kamionkowski, N.N. Weinberg, _Phys. Rev. Lett._**91** 071301(2003), astro-ph/0302506. * [6] Armendariz-Picon, T. Damour, V. Mukhanov, _Phys. Lett._**B458** 209(1999). * [7] V. Sahni, _Chaos. Soli. Frac._**16** 527(2003). * [8] Z.K. Guo, N. Ohtab and Y.Z. Zhang, astro-ph/0505253. * [9] P.S. Wesson, _Space-Time-Matter_ (Singapore: World Scientific) 1999. * [10] J.M. Overduin and P.S. 
Wesson, _Phys. Rept._**283**, 303(1997). * [11] J. E. Campbell, _A Course of Differential Geometry_, (Clarendon, 1926). * [12] H.Y. Liu and P.S. Wesson, _Astrophys. J._**562** (2001), gr-qc/0107093. * [13] H.Y. Liu and B. Mashhoon, _Ann. Phys._**4** 565(1995). * [14] B.R. Chang, H. Liu and L. Xu, _Mod. Phys. Lett._**A20**, 923(2005), astro-ph/0405084. * [15] H.Y. Liu, et. al, _Mod. Phys. Lett._**A20**, 1973(2005), gr-qc/0504021. * [16] P.S. Corasaniti and E.J. Copeland, _Phys. Rev._**D67**, 063521 (2003), astro-ph/0205544. * [17] E.V. Linder, _Phys. Rev. Lett._**90**, 091301 (2003), astro-ph/0208512. * [18] A. Upadhye, M. Ishak and P.J. Steinhardt, astro-ph/0411803. * [19] Y. Wang and M. Tegmark, _Phys. Rev. Lett._**92**, 241302 (2004), astro-ph/0403292. * [20] M. Chevallier, D. Polarski, _Int. J. Mod. Phys._**D10**, 213 (2001), gr-qc/0009008. * [21] C. Wetterich, _Phys. Lett._**B594** 17 (2004), astro-ph/0403289. * [22] H.Y. Liu, _Phys. Lett._**B560** 149(2003), hep-th/0206198. * [23] L. Xu, H.Y. Liu and C.W. Zhang, to be appeared in _Int. J. Mod. Phys. D_, astro-ph/0510673. * [24] Y. Gong, _Class. Quant. Grav._**22** 2121(2005), astro-ph/0405446. * [25] M. Bartelmann, M. Doran and C. Wetterich, astro-ph/0507257.
We apply Wetterich's parameterization of the equation of state (EOS) of dark energy to a \(5D\) Ricci-flat cosmological solution and suppose that the universe contains three major components: matter, radiation, and dark energy. By using the relation between the scale factor and the redshift \(z\), we show that the two arbitrary functions contained in the \(5D\) solution can be solved for analytically in terms of the variable \(z\). Thus the whole \(5D\) solution can be constructed uniquely if the current values of the three density parameters \(\Omega_{m0}\), \(\Omega_{r0}\), \(\Omega_{x0}\), the EOS parameter \(w_{0}\), and the bending parameter \(b\) contained in the EOS are all known. Furthermore, we find that the evolutions of the mass density \(\Omega_{m}\), the radiation density \(\Omega_{r}\), the dark energy density \(\Omega_{x}\), and the deceleration parameter \(q\) all depend sensitively on the bending parameter \(b\). Observational constraints on the bending parameter \(b\) therefore deserve further study. Keywords: cosmology; dark energy; bending parameter.
# Critical Phenomena in Head-on Collisions of Neutron Stars Ke-Jian Jin\\({}^{(1)}\\) Wai-Mo Suen\\({}^{(1,2)}\\) \\({}^{(1)}\\)McDonnell Center for the Space Sciences, Department of Physics, Washington University, St. Louis, Missouri 63130 \\({}^{(2)}\\)Physics Departement, University of Hong Kong, Hong Kong November 4, 2021 ## Sec.1 Introduction and Motivation. In previous studies [1; 2] we found that head-on collisions of two neutron stars (NSs) could lead to prompt gravitational collapses, despite a conjecture to the contrary [3]. Upon collision, two 1.4 \\(M_{\\odot}\\) neutron stars with a polytropic equation of state (EOS) could promptly collapse to form a black hole within a dynamical timescale, while NSs with smaller masses may merge to form a single NS [1]. Further, in [2], we found that prompt collapses could occur even when the sum of the masses of the two colliding NSs is _less_ than the maximum mass of a single stable equilibrium star, in the case of NSs with the Lattimer-Swesty EOS. However, we were not able to determine the exact dividing line between the collapse and non-collapse cases, as a reasonably accurate determination would require numerical simulations with a resolution higher than what our 3D numerical code (GR-Astro [4]) could achieve on computers we have access to. In this work we investigate this dividing line. For the investigation, we have constructed an axisymmetric version of GR-Astro. GR-Astro-2D enables high resolution simulations of axisymmetric systems. Details of GR-Astro-2D and the numerical setup will be given in Sec. 2 below. In Sec. 3, we report on the type I critical phenomena found at the dividing line for NSs with a polytropic EOS of \\(\\Gamma=2\\). (Critical phenomena in gravitational collapse was first discovered by Choptuik [5]; for review see e.g. [6; 7]). In the super-critical regime, the merged object collapses promptly to form a black hole, even though its mass could be _less_ than the maximum stable mass of one single NS in equilibrium with the same EOS. In the sub-critical regime, an oscillating NS is formed. Near the dividing line, the near critical solutions oscillate for a long time, before collapsing or becoming an oscillating NS. The exact critical solution presumably will oscillate forever, while neighboring configurations would eventually deviate from the critical solution. In Sec. 4, we determine the critical index \\(\\gamma\\) as the time scale of growth of the unstable mode bringing a near critical solution away from the critical solution. We found \\(\\gamma=10.78(\\pm 0.6)M_{\\odot}\\). This corresponds to a growth time of the unstable mode of about 0.054ms. In Sec. 5, we investigate the universality of the phenomena with different critical parameter choices. Particularly interesting is the case in which the parameter is taken to be the polytropic index \\(\\Gamma\\), instead of a parameter associated with the initial configurations. The same critical index is found. In Sec. 6, we study gravitational waves emitted by the near critical solutions. We find that the gravitational radiations damp rapidly, suggesting that the unstable mode is not radiating. The determination of the possible existence of a non-spherical unstable mode/bifurcation point ([8]) requires a resolution beyond the computational resources available to us. The existence of critical collapses in compact objects formed in collisions of NSs modeled by a commonly used EOS is by itself interesting. 
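Two small numerical points implicit in these numbers are the conversion of the critical index from geometric units (\(M_{\odot}\)) to milliseconds and the way \(\gamma\) is read off from the departure times discussed in Sec. 4. The sketch below illustrates both; the departure-time data in it are synthetic stand-ins (an assumption), not the simulation measurements.

```python
import numpy as np

# Geometric-to-SI time conversion: one M_sun of time equals G*M_sun/c^3 seconds.
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
t_geom = G * Msun / c**3                    # ~4.93e-6 s per M_sun

gamma_geom = 10.78                          # critical index quoted above, in M_sun
print(f"growth time ~ {gamma_geom * t_geom * 1e3:.3f} ms")   # ~0.053 ms, cf. ~0.054 ms quoted in the text

# gamma is the slope of the departure time T against ln|p - p_*|.
# Synthetic data (assumed, for illustration only), not the simulation measurements:
rng = np.random.default_rng(1)
dp = np.logspace(-10, -5, 12)                                 # |p - p_*|
T = 400.0 - gamma_geom * np.log(dp) + rng.normal(0.0, 2.0, dp.size)

slope, _ = np.polyfit(np.log(dp), T, 1)
print(f"recovered gamma ~ {-slope:.2f} M_sun")
```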
Further, the facts that (1) the growth timescale of the unstable mode is short (\\(0.054ms\\)), (2) it can appear in systems which deviate significantly from spherical, and (3) most importantly, a critical collapse could result through the softening of the EOS, suggest that critical collapses may actually occur in nature, e.g., a proto-NS initially supported by thermal pressure gradually cools down through neutrino emissions. When the EOS gradually changes in the timescale of seconds, the proto-NS evolves from the sub-critical regime toward the critical regime. The unstable mode, which grows in a timescale shorter than the evolution timescale of the EOS, may then be triggered, leading to a critical collapse. ## Sec. 2. GR-Astro-2D and the Numerical Setup. Simulations of NS collisions are carried out using the GR-Astro code, which solves the coupled system of the Einstein equations and the general relativistic hydrodynamic (GR-Hydro) equations as described in [9; 10; 11]. The choice of numerical schemes as well as conventions used in this paper are given in [11]. A module GR-Astro-2D specially adapted to axisymmetric systems is constructed to provide the high resolution needed for this investigation. GR-Astro-2D uses the same spacetime and GRHydro evolution routines as GR-Astro but applies symmetry conditions to restrict the simulation to a thin layer of spacetime (5 grid points across) containing the symmetry axis [12]. Initial data sets for head-on collisions of NSs are constructed by solving the Hamiltonian and momentum constraints (HC and MCs) representing two NSs with equal mass separated by a distance along the \\(z\\)-axis and boosted towards one another at a prescribed speed. The numerical evolutions in this paper are carried out with the \\(\\Gamma\\) freezing shift (or no shift) and the \"\\(1+\\log\\)\" lapse (for details of the shift and lapse conditions, see [11]). Extensive convergence tests have been performed including tests showing that the simulations are converging throughout the evolution. In particular the critical index extracted (see below) has been shown to converge in a \\(1^{st}\\) order manner, which is the designed rate of convergence due to our use of high resolution shock capturing treatment. NSs in this paper are described by a polytropic EOS: \\(P=(\\Gamma-1)\\rho\\epsilon\\) with \\(\\Gamma=2\\) (and cases close to 2). Here \\(\\rho\\) is the proper rest mass density and \\(\\epsilon\\) is the proper specific internal energy density. Notice that the \"kinetic-energy-dominated\" assumption has not been made, unlike earlier investigations of the critical collapses of perfect fluid systems (for review, see [6; 7; 13]). Initial data sets are constructed with \\(P=k\\rho^{\\Gamma}\\), where \\(k=0.0298c^{2}/\\rho_{n}\\) (\\(\\rho_{n}\\) is the nuclear density, approximately \\(2.3\\times 10^{14}\\;\\mathrm{g/cm}^{3}\\)). For this EOS, the maximum stable NS configuration has an Arnowitt-Deser-Misner (ADM) mass of \\(1.46M_{\\odot}\\) and a baryonic mass of \\(1.61M_{\\odot}\\). **Sec. 3. Existence of the Critical Phenomena in Head-on Collisions of NSs.** In the first set of simulations, the two NSs are initially at a fixed distance (the maximum density points of the two NSs are separated by \\(3R\\), where \\(R\\sim 9.1M_{\\odot}\\) is the coordinate radius of the NSs). The initial velocities of the NSs are that of freely falling from infinity, determined by the Newtonian formula plus the 1PN correction. For the configurations investigated in Fig. 
1 where the baryonic mass of each of the NS ranges from \\(0.786M_{\\odot}\\) to \\(0.793M_{\\odot}\\), the initial speed ranges from \\(0.15537\\) to \\(0.15584\\) (in units of \\(c=1\\)). The computational grid has \\(323\\times 5\\times 323\\) points, covering a computational domain of (\\(\\pi r^{2}\\times\\)height ) = \\((\\pi\\times 38.5^{2}\\times 77.0){M_{\\odot}}^{3}\\). Each NS radius is resolved with 76 grid points, taking advantage of the octane- and axi-symmetry of the problem. Fig. 1 shows the evolution of the lapse function \\(\\alpha\\) at the center of the collision as a function of the coordinate time, for systems with slightly different masses (all other parameters, including physical and numerical parameters, are the same). The line labeled 1 in Fig. 1 (which dips to 0 near \\(t\\sim 150M_{\\odot}\\)) represents the case of \\(0.793M_{\\odot}\\). We see that after the collision, \\(\\alpha\\) promptly \"collapses\" to zero, signaling the formation of a black hole. Note that the total baryonic mass of the merged object \\(1.59M_{\\odot}\\) is _less_ than the maximum stable mass of a TOV solution of the same EOS in equilibrium. The prompt gravitational collapse of the merged object with such a mass indicates that it is in a state that is very different from being stationary [1; 2]. The line labeled 41 in Fig. 1 (which rises at \\(t\\sim 120M_{\\odot}\\)) represents the case where each of the NSs has the baryonic mass \\(0.786M_{\\odot}\\). The lapse at the collision center dips as the two stars merge, then rebounds. The merged object does not collapse to a black hole but instead form a stable NS in axisymmetric oscillations. The lapse at the center of the merged object oscillates around a value of \\(0.71\\), with a period of about \\(160M_{\\odot}\\). For configurations with masses between the bottom line(1) and top line (41), the lapse \\(\\alpha\\) would rebound, dip etc., before eventually dipping to zero (a black hole is formed) or going back up (a NS is formed). The critical solution is found by fine tuning \\(\\rho_{c}\\), the proper mass density as measured by an observer at rest with the fluid at the center of the star at the initial time. For the numerical setup used in the study, at around \\(\\rho_{c}=6.128202618199\\times 10^{-4}\\) (mass of each NS \\(=0.79070949026M_{\\odot}\\)), a change of the \\(\\rho_{c}\\) by the \\(10^{th}\\) significant digit changes the dynamics from collapse to no collapse. In Fig. 1 we see that for these near critical configurations \\(\\alpha\\) oscillates at about \\(0.255\\) with a period of \\(\\sim 40M_{\\odot}\\). As the lapse is given by the determinant of the 3 metric, this represents an oscillation of the 3 geometry. For a more invariant measure, in Fig. 2 we plot as dot ted and long dashed lines the 4-D scalar curvature \\(R\\) at the collision center for two of the near critical solutions (lines 20 and 21 in Fig. 1; they are the last ones to move away from the exact critical solution at \\(t\\sim 300M_{\\odot}\\)). We see that \\(R\\) oscillates with the same period as the determinant of the 3 metric (the lapse). As \\(\\alpha\\) collapses to zero, \\(R\\) blows up and in each such case we find an apparent horizon, indicating the formation of a black hole. Similar oscillatory behavior has been seen in other critical collapse studies [6; 7; 8; 14]. We note that at late time \\(R\\) of the sub-critical case (line 21) tends to a small negative value as a static TOV star should. We note that while in Fig. 
1 a change in the \\(10^{th}\\) significant digit of the total mass of the system can change the dynamics from that of sub-critical to supercritical, this does not imply that we have determined the critical point to the \\(10^{th}\\) digit of accuracy. The exact value of the critical point is affected by the resolution of the numerical grid as well as the size of the computational domain. We have performed high resolution simulations with 76 grid points per \\(R\\), (with computational domain covering \\(8.5R\\)), and large computational domain simulations covering \\(34R\\) (with resolution 38 grid point per \\(R\\)). Convergence tests in both the directions of resolution and size of the computational domain suggest that the total mass of the critical solution in the headon collision case with the EOS given is at \\(1.58\\pm 0.05M_{\\odot}\\), with the error bound representing the truncation errors. **Sec. 4. Determination of the Critical Index.** The critical index \\(\\gamma\\) is determined through the relation \\(T=\\gamma\\log(p-p_{*})\\), where \\(T\\) is the length of the coordinate time (which is asymptotically Minkowski) that a near critical solution with a parameter value \\(p\\) stays near the exact critical solution with \\(p_{*}\\). In Sec. 3 above, \\(p\\) is taken as the central density \\(\\rho_{c}\\) of the initial NSs. In Fig. 3, we plot \\((\\alpha-\\alpha_{*})/\\alpha_{*}\\) at the center of collision against the coordinate time, where \\(\\alpha_{*}\\) is the lapse of the critical solution to the best we can determine. Only the last part of the evolution is shown. We see explicitly the growth of the unstable mode driving the near critical solution away from the critical solution. We defined the \"departure time\" \\(T_{0.05}\\) as the coordinate time that a line in this figure reaches \\(\\pm 0.05=\\pm 5\\%\\). Likewise we define \\(T_{0.1}\\), \\(T_{0.15}\\) and \\(T_{0.2}\\). In Fig. 4, the departure times \\(T_{0.05}\\) and \\(T_{0.2}\\) are plotted against the log difference of \\(p\\) (taken to be \\(\\rho_{c}\\) as in Fig. 1) between the near critical and the critical solutions. With this, \\(\\gamma_{\\,0.05}\\) defined as \\(T_{0.05}/\\log(p-p_{*})\\) is found to be \\(10.87\\), whereas \\(\\gamma_{\\,0.10}\\), \\(\\gamma_{\\,0.15}\\) and \\(\\gamma_{\\,0.2}\\) are found to be \\(10.92,10.93\\) and \\(10.92\\) respectively. We see that the value of the critical index does not depend sensitively on the definition of the departure point. **Sec. 5. Universality.** The above study uses the total mass/central density of the initial NSs as the critical parameter \\(p\\). Next we fix the central density \\(\\rho_{c}\\) of the initial NSs at \\(6.12820305495\\times 10^{-4}{M_{\\odot}}^{-1}\\). The initial coordinate separation between the center of the two NSs is fixed to be \\(D=27.5M_{\\odot}\\). The initial velocity \\(v\\) is taken to be the parameter \\(p\\). For each choice of \\(v\\), the HC and MC equations are solved. Convergence with respect to spatial resolutions and outer boundary location has been verified. We find the same critical phenomena. The critical index is extracted in the same manner and found to be \\(10.78M_{\\odot}\\).. Other choices of parameter \\(p\\) have also been studied, including:(i) \\(p=D\\), while fixing \\(\\rho_{c}\\) and \\(v\\), and (ii) \\(p=\\rho_{c}\\) while fixing \\(v\\) and \\(D\\). Note that the latter case is different from the case discussed in Secs. 
3 and 4 above, where the initial velocity is determined by the free fall velocity up to the first PN correction. In all cases studied, we see the same critical phenomena with consistent values of the critical index \\(\\gamma\\). Next we ask: Is critical collapse possible only through fine tuning the initial data? If true, we would not expect to see critical collapse phenomena in nature. We investigate the possibility of taking \\(p=\\Gamma\\), the adiabatic index, as slow changes of the EOS could occur in many astrophysical situations, e.g., accreting NSs and during cooling of proto-NSs generated in supernovae. We fix \\(D\\), \\(\\rho_{c}\\), \\(v\\) and vary \\(\\Gamma\\) away from 2. The evolution of the lapse at the center of collision is shown in Fig. 5. We see behavior similar to that of Fig. 1. The critical index \\(\\gamma\\) is found to be again \\(10.78M_{\\odot}\\), consistent with the values found by fine tuning the initial configurations. **Sec. 6. Gravitational Wave Signals From Near Critical Collapses.** Can near critical collapses have a signature in their gravitational radiations that one can identify in observation, in view of the possibility that near critical collapse of a neutron star like object may occur in nature? In Fig. 6, we show the dominant gravitational wave component (even parity \\(L=2,m=0\\) component of the Moncrief gauge invariant \\(Q\\)[15], \\(z\\)-axis is the symmetry axis) for four cases shown in Fig. 1. For the line 41, the merger leads to a NS oscillating in a non-spherical fashion and emits gravitational waves. For the line 1 the merged object moves away from the critical solution at around \\(t\\sim 130\\) and collapses to a black hole. The asymmetric collapse generates gravitational wave. The simulation was terminated at \\(t\\sim 330\\) as the metric functions can no longer be adequately resolved with the formation of the black hole. In contrast, the radiations from the near critical solutions represented by lines 20 and 21 (the dotted-dashed line and the solid line respectively, nearly coinciding with one another all the way) decrease in amplitude. At around \\(t\\sim 270\\), the unstable mode sets in and the two near critical solutions 20 and 21 move away from the critical solution (as shown in Fig. 1). However, there is only a negligible amount of wave emitted, independent of whether the unstable mode leads to a black hole (line 20) or an oscillating NS (line 21). This suggests that the unstable mode may be a spherical mode that does not radiate. The waveforms reported here suffer from that the computational domain (covering only \\(\\sim 1\\) wavelength) cannot be extended further out, limited by the computational resources available to us. Further investigations of the waveforms will be reported elsewhere. **Sec. 7. Conclusion.** We found that the dividing line between prompt and delayed collapses of a compact object formed in a head-on collision of NSs with a polytropic EOS occurs at a mass less than the maximum stable mass of a single equilibrium star. The EOS is one that is often used in modeling NSs. There exists a type I critical collapse phenomena at the dividing line. Universality of the phenomena with respect to different choices of the critical parameter is confirmed and the critical index extracted. The growth time of the unstable mode which brings a near critical solution away from the critical one is found to be \\(\\sim 0.054ms\\), for the EOS studied. 
The study suggests that, despite the highly asymmetric initial data, the final critical collapse may not generate gravitational radiation. We found that critical collapses could happen without fine tuning of initial data, but instead, through a gradual change of the EOS. This opens the intriguing possibility that, e.g., when a proto NS cools and loses thermal support on a timescale longer than \\(0.054ms\\), the collapse could exhibit critical behavior. This and related questions will be investigated elsewhere. GR-Astro is written and supported by Ed Evans, Mark Miller, Jian Tao, Randy Wolfmeyer, Hui-Min Zhang and others. GR-Astro-2D is developed and supported by K. J. Jin. GR-Astro makes use of the Cactus Toolkit developed by Tom Goodale and the Cactus support group. We thank Sai Iyer, Jian Tao, Malcolm Tobias, Randy Wolfmeyer, Hui-Min Zhang and other members of the WUGRAV group for discussions and support. The research is supported by the NSF Grant Phy 99-79985 (the KDI Astrophysics Simulation Collaboratory Project), NSF NRAC MCA93S025, and the McDonnell Center for Space Sciences at the Washington University. ## References * (1) M. Miller, W.-M. Suen, and M. Tobias, Phys. Rev. D. Rapid Comm. **63**, 121501(R) (2001). * (2) E. Evans _et al._, Phys. Rev. D **67**, 104001 (2003). * (3) S. Shapiro, Phys. Rev. D **58**, 103002 (1998). * (4)[http://wugrav.wustl.edu/research/projects/nsgc.html](http://wugrav.wustl.edu/research/projects/nsgc.html). * (5) M. Choptuik, Phys. Rev. Lett. **70**, 9 (1993). * (6) G. Gundlach, Physics Reports **376**, 339 (2003). * (7) A. Z. Wang, Braz.J. Phys. **31**, 188 (2001). * (8) M. Choptuik, Phys. Rev. D **68**, 044007 (2003). * (9) J. A. Font, M. Miller, W. M. Suen, and M. Tobias, Phys. Rev. D **61**, 044011 (2000). * (10) J. A. Font _et al._, Phys. Rev. D **65**, 084024 (2002). * (11) M. Miller, P. Gressman, and W. M. Suen, Phys. Rev. D **69**, 064026 (2004). * (12) M. Alcubierre _et al._, Int. J. Mod. Phys. D **10**, 273 (2001). * (13) D. W. Neilsen and M. W. Choptuik, Class.Quant.Grav. 17 (2000) 761-782. * (14) A. M. Abrahams, G. B. Cook, S. L. Shapiro, and S. A. Teukolsky, Phys. Rev. D **49**, 5153 (1994). * (15) V. Moncrief, Annals of Physics **88**, 323 (1974).
We found type I critical collapses of compact objects modeled by a polytropic equation of state (EOS) with polytropic index \\(\\Gamma=2\\) without the ultra-relativistic assumption. The object is formed in head-on collisions of neutron stars. Further we showed that the critical collapse can occur due to a change of the EOS, without fine tuning of initial data. This opens the possibility that a neutron star like compact object, not just those formed in a collision, may undergo a critical collapse in processes which slowly change the EOS, such as cooling. pacs: 95.30.Sf, 04.40.Dg, 04.30.Db, 97.60.Jd
# Filamental Instability of Partially Coherent Femtosecond Optical Pulses in Air M. Marklund and P. K. Shukla Centre for Nonlinear Physics, Department of Physics, Umea University, SE-901 87 Umea, Sweden Institut fur Theoretische Physik IV and Centre for Plasma Science and Astrophysics, Ruhr-Universitat Bochum, D-44780 Bochum, Germany Revised 21 March 2006, accepted for publication in Opt. Lett.) ###### 030.1640 (Coherence), 190.7110 (Ultrafast nonlinear optics) Recently, there has been a great deal of interest[1; 2; 3; 4; 5; 6; 7; 8] in investigating the nonlinear propagation of optical pulses in air. In order for the pulse propagation over a long distance, it is necessary to avoid filamentational instabilities that grow in space. Filamentation instabilities of optical pulses occur in nonlinear dispersive media, where the medium index of refraction depends on the pulse intensity. This happens in nonlinear optics (viz. a nonlinear Kerr medium) where a small modulation of the optical pulse amplitudes can grow in space due to the filamentation instability arising from the interplay between the medium nonlinearity and the pulse dispersion/diffraction. The filamentational instability is responsible for the break up of pulses into light pipes. It is, therefore, quite important to look for mechanisms that contribute to the nonlinear stability of optical pulses in nonlinear dispersive media. One possibility would be to use optical pulses that have finite spectral bandwidth, since the latter can significantly reduce the growth rate of the filamentation instability. Physically, this happens because of the distribution of the optical pulse intensity over a broad spectrum, which is unable to drive the filamentation instability with fuller efficiency, contrary to a coherent pulse which has a delta-function spectrum. In this Letter, we present for the first time a theoretical study of the filamentation instability of partially coherent optical pulses in air. We show that the spatial amplification rate of the filamentation instability is significantly reduced by means of spatial spectral broadening of optical pulses. The present results could be of significance in applications using ultra-short pulses for remote sensing of the near Earth atmosphere. The dynamics of coherent femtosecond optical pulses with a weak group velocity dispersion in air is governed by the modified nonlinear Schrodinger equation[4; 5; 9; 10; 11] \\[i\\partial_{z}\\psi+\ abla_{\\perp}^{2}\\psi+f(|\\psi|^{2})\\psi+i\ u|\\psi|^{2K-2} \\psi=0, \\tag{1}\\] where \\(\\psi(z,{\\bf r}_{\\perp})\\) is the spatial wave envelope, \\({\\bf r}_{\\perp}=(x,y)\\), and \\(f(|\\psi^{2}|)=\\alpha|\\psi|^{2}-\\varepsilon|\\psi|^{4}-\\gamma|\\psi|^{2K}\\). Here \\(\\alpha=0.466\\), \\(\\varepsilon=7.3\\times 10^{-7}\\,{\\rm cm}^{2}/w_{0}^{2}\\), \\(\\gamma=8.4\\times 10^{-40}\\,{\\rm cm}^{2(K-1)}/w_{0}^{2(K-1)}\\), and \\(\ u=1.2\\times 10^{-35}\\,{\\rm cm}^{2(K-2)}/w_{0}^{2(K-2)}\\) for a pulse duration of 250 fs, and \\(w_{0}\\) (in units of cm) is the beam waist[10] (for a discussion of the approximations leading to Eq. (1), we refer to 9). We note that Eq. (1) has been used in Ref. [11] to analyze the multi-filamentation of optical beams. Following Ref. [12], we can derive a wave kinetic equation that governs the nonlinear propagation intense optical pulses which have a spectral broadening in space. Accordingly, we apply the Wigner-Moyal transform method[13; 14; 15; 16]. 
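Before turning to the statistical description, we note that Eq. (1) itself can be integrated with a standard split-step Fourier scheme. The sketch below shows the structure of such a solver in one transverse dimension with dimensionless, illustrative coefficients; it is not the code used in the cited simulations, and the initial beam profile is an assumption.

```python
import numpy as np

# Dimensionless, illustrative coefficients (assumed); the physical values quoted
# above depend on the beam waist w0 and are not reproduced here.
alpha, eps, gam, nu, K = 1.0, 1e-3, 1e-4, 1e-3, 3

def f(I):
    """Nonlinear response f(|psi|^2) = alpha*I - eps*I^2 - gam*I^K from Eq. (1)."""
    return alpha * I - eps * I**2 - gam * I**K

def split_step(psi, x, dz, nsteps):
    """Symmetric split-step Fourier integration of Eq. (1), one transverse dimension."""
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    half_linear = np.exp(-1j * k**2 * dz / 2.0)      # diffraction half-step
    for _ in range(nsteps):
        psi = np.fft.ifft(half_linear * np.fft.fft(psi))
        I = np.abs(psi)**2
        # Kerr-type phase plus multiphoton absorption, with I frozen over dz
        psi = psi * np.exp((1j * f(I) - nu * I**(K - 1)) * dz)
        psi = np.fft.ifft(half_linear * np.fft.fft(psi))
    return psi

x = np.linspace(-20.0, 20.0, 512, endpoint=False)
psi0 = 1.2 * np.exp(-x**2)            # assumed smooth initial beam profile
psi = split_step(psi0, x, dz=1e-3, nsteps=2000)
print("peak intensity after propagation:", np.abs(psi).max()**2)
```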
The multi-dimensional Wigner-Moyal transform, including the Klimontovich statistical average, is defined as \\[\\rho(z,{\\bf r}_{\\perp},{\\bf p})=\\frac{1}{(2\\pi)^{2}}\\int d^{2}\\xi\\,e^{i{\\bf p }\\cdot\\xi}\\langle\\psi^{*}(z,{\\bf r}_{\\perp}+\\mathbf{\\xi}/2)\\psi(z,{\\bf r}_{\\perp} -\\mathbf{\\xi}/2)\\rangle, \\tag{2}\\]where \\({\\bf p}=(p_{x},p_{y})\\) represents the momenta of the quasiparticles and the angular bracket denotes the ensemble average [17]. The pulse intensity \\(\\langle|\\psi|^{2}\\rangle\\equiv I\\) satisfies \\[I=\\int d^{2}p\\,\\rho(z,{\\bf r}_{\\perp},{\\bf p}). \\tag{3}\\] Applying the transformation (2) on Eq. (2), we obtain the Wigner-Moyal kinetic equation [14; 15; 16; 18] for the evolution of the Wigner distribution function, \\[\\partial_{z}\\rho+2{\\bf p}\\cdot\ abla_{\\perp}\\rho+2f(I)\\sin\\left(\\tfrac{1}{2} \\stackrel{{\\leftarrow}}{{\ abla}}_{\\perp}\\cdot\\stackrel{{ \\rightarrow}}{{\ abla}}_{p}\\right)\\rho+2\ u I^{K-1}\\cos\\left(\\tfrac{1}{2} \\stackrel{{\\leftarrow}}{{\ abla}}_{\\perp}\\cdot\\stackrel{{ \\rightarrow}}{{\ abla}}_{p}\\right)\\rho=0. \\tag{4}\\] Seeking the solution \\(\\bar{\\rho}=\\bar{\\rho}(z,{\\bf p})\\) to Eq. (4), we may write \\(\\bar{\\rho}(z,{\\bf p})=\\rho_{0}({\\bf p})\\bar{I}(z)\\), where \\(\\rho_{0}\\) is an arbitrary function of \\({\\bf p}\\) satisfying \\(\\int d^{2}p\\,\\rho_{0}=1\\), and \\(\\bar{I}(z)=I_{0}(2K-2)/[2\ u I_{0}^{2K-2}z+(2K-2)^{2K-2}]^{1/(2K-2)}\\), with \\(I_{0}=\\bar{I}(0)\\). Thus, the effect of a small but non-zero \\(\ u\\) is to introduce a slow fall-off in the intensity along the \\(z\\)-direction when \\(K\\geq 1\\). Moreover, as \\(\ u\\to 0\\) this solution reduces to \\(\\bar{I}=I_{0}\\). We now consider spatial filamentation of a well defined optical pulses against small perturbations having the parallel wavenumber \\(k_{\\parallel}\\) and the perpendicular wavevector \\({\\bf k}_{\\perp}\\), by assuming that \\(\ u\\) is small so that \\(k_{\\parallel}\\gg|\\partial_{z}|\\) for the background distribution. We let \\(\\rho=\\bar{\\rho}(z,{\\bf p})+\\rho_{1}({\\bf p})\\exp(ik_{\\parallel}z+i{\\bf k}_{ \\perp}\\cdot{\\bf r}_{\\perp})+\\text{c.c.}\\) and \\(I=\\bar{I}(z)+I_{1}\\exp(ik_{\\parallel}z+i{\\bf k}_{\\perp}\\cdot{\\bf r}_{\\perp})+ \\text{c.c.}\\), where \\(|\\rho_{1}|\\ll\\bar{\\rho}\\), \\(|I_{1}|\\ll\\bar{I}\\), and \\(c.c.\\) stands for the complex conjugate. We linearize (4) with respect to the perturbation variables and readily obtain the nonlinear dispersion equation \\[1=\\int d^{2}p\\,\\frac{[f^{\\prime}(\\bar{I})+i\ u(K-1)\\bar{I}^{K-2}]\\bar{\\rho}(z, {\\bf p}-{\\bf k}_{\\perp}/2)-[f^{\\prime}(\\bar{I})-i\ u(K-1)\\bar{I}^{K-2}]\\bar{ \\rho}(z,{\\bf p}+{\\bf k}_{\\perp}/2)}{k_{\\parallel}+2{\\bf k}_{\\perp}\\cdot{\\bf p }-2i\ u\\bar{I}^{K-1}}, \\tag{5}\\] which is valid for partially coherent femtosecond pulses in air. Here the prime denotes differentiation with respect to the background intensity \\(\\bar{I}\\). We simplify the analysis by assuming that the perpendicular dependence in essence is one-dimensional. In the coherent case, i.e. \\(\\bar{\\rho}(z,p)=\\bar{I}(z)\\delta(p-p_{0})\\), Eq. (5) yields \\[k_{\\parallel}=-2kp_{0}+i\ u(K+1)\\bar{I}^{K-1}\\pm\\sqrt{k^{2}[k^{2}-2f^{\\prime} (\\bar{I})\\bar{I}]-\ u^{2}(K-1)^{2}\\bar{I}^{2K-2}}, \\tag{6}\\] where \\(k\\) represents the perpendicular wavenumber in the one-dimensional case. 
Letting \\(k_{\\parallel}=-2kp_{0}-i\\Gamma\\) in (6), where \\(\\Gamma\\) is the filamentation instability growth rate, we thus obtain [19] \\[\\Gamma=-\ u(K+1)\\bar{I}^{K-1}+\\sqrt{k^{2}[2f^{\\prime}(\\bar{I})\\bar{I}-k^{2}]+ \ u^{2}(K-1)^{2}\\bar{I}^{2K-2}}, \\tag{7}\\]which reduces to the well known filamentation instability growth rate in a Kerr medium (i.e. \\(\ u=0\\) and \\(f(I)=\\alpha I\\)). We note that a nonzero \\(\ u\\) gives rise to an overall reduction of the growth rate. In Fig. 1 we have plotted a number of different curves for the growth rate in the coherent case. In the partially coherent case, we investigate the effects of spatial spectral broadening using the Lorentz distribution \\[\\bar{\\rho}(z,p)=\\frac{\\bar{I}(z)}{\\pi}\\frac{\\Delta}{(p-p_{0})^{2}+\\Delta^{2}}, \\tag{8}\\] where \\(\\Delta\\) denotes the width of the distribution around the quasiparticle momenta \\(p_{0}\\). Inserting (8) into (5) and carrying out the integration in a straightforward manner, we obtain \\[k_{\\parallel}=-2kp_{0}+i\ u(K+1)\\bar{I}^{K-1}+2ik\\Delta\\pm\\sqrt{k^{2}[k^{2}-2f ^{\\prime}(\\bar{I})\\bar{I}]-\ u^{2}(K-1)^{2}\\bar{I}^{2K-2}}. \\tag{9}\\] With \\(k_{\\parallel}=-2kp_{0}-i\\Gamma\\) the filamentation instability growth rate is \\[\\Gamma=-\ u(K+1)\\bar{I}^{K-1}-2k\\Delta+\\sqrt{k^{2}[2f^{\\prime}(\\bar{I})\\bar{I} -k^{2}]+\ u^{2}(K-1)^{2}\\bar{I}^{2K-2}}. \\tag{10}\\] In the limit \\(\\Delta\\to 0\\), Eq. (9) reduces to the dispersion relation (6), while for \\(\ u=0\\) the dispersion relation (9) reduces to the standard expression for the filamentation instability growth rate \\[\\Gamma=-2k\\Delta+k\\sqrt{2I_{0}f^{\\prime}(I_{0})-k^{2}}. \\tag{11}\\] In Fig. 2 we have displayed the filamentation instability growth rate (10). The effect of the finite width \\(\\Delta\\) of the quasiparticle distribution can clearly be seen. In particular, multi-photon absorption (here chosen to be a modest \\(K=3\\)), determined by the coefficient \\(\ u\\), as well as multi-photon ionization, represented by the coefficient \\(\\gamma\\), combined with finite spectral width of the optical pulse give rise to a significant reduction of the filamentation instability growth rate. This is evident from Fig. 2, where the plotted normalized growth rate is reduced by as much as a factor of six, compared to the case of full coherence. In practice, optical smoothing techniques, such as the use of random phase plates [20] or other random phase techniques well suited for the results in the present Letter, have been used in inertial confinement fusion studies for quite some time (see, e.g. Ref. [21]). Such spatial partial coherence controls are reproducible and can be tailored as to give a suitable broadband spectrum (as in, e.g. [22], where optical vortices were generated). Thus, in the case of ultra-short pulse propagation in air, such random phase techniques can be used to experimentally prepare an ultra-short optical pulse for a long-distance propagation, and a large spatial bandwidth of optical pulses, in conjunction with multi-photon ionization and absorption, may drastically reduce (down to less than 20 % of the coherent value in the present study) the filamentation instability growth rate. This will lead to a greater long range stability, since the onset of strong optical pulse filamentation is delayed, resulting in several times longer stable propagation. 
A rough estimate based on the numbers found in the present Letter shows that an optical beam could propagate a distance as much as six times longer with proper random phasing. To summarize, we have investigated the filamentation instability of partially coherent femtosecond optical pulses in air. For this purpose, we introduced the Wigner-Moyal representation on the modified nonlinear Schrodinger equation and obtained a kinetic wave equation for optical pulses that have a spectral bandwidth in wavevector space. A perturbation analysis of the kinetic wave equation gives a nonlinear dispersion relation, which describes the filamentation instability (spatial amplification) of broadband optical pulses. Our results reveal that the latter would not be subjected to filamentation due to spectral pulse broadening. Hence, using partial spatial coherence effects for controlling the filamentational instability, femtosecond optical pulse propagation in air can be improved significantly. The result presented here is also indicative that optical smoothing techniques, as used in inertial confinement studies, could be very useful for ultra-short pulse propagation in air. This can help to optimize current applications of ultra-short laser pulses for atmospheric remote sensing over a long distance. ## Acknowledgments The authors thank one of the referees for helpful suggestions and comments on a previous version, as well as providing valuable references. This research was partially supported by the Swedish Research Council. ## References * (1) A. Braun, G. Korn, X. Liu, D. Du, J. Squier, and G. Mourou, Opt. Lett. **20**, 73 (1995). * (2) E. T. J. Nibbering, P. F. Curley, G. Grillon, B. S. Prade, M. A. Franco, F. Salin, and A. Mysyrowicz, Opt. Lett. **21**, 62 (1996). * (3) H. R. Lange, G. Grillon, J.-F. Ripoche, M. A. Franco, B. Lamouroux, B. S. Prade, A. Mysyrowicz, E. T. J. Nibbering, and A. Chiron, Opt. Lett. **23**, 120 (1998). * (4) M. Mlejnek, E. M. Wright, and J. V. Moloney, Opt. Lett. **23**, 382 (1998). * (5) M. Mlejnek, M. Kolesik, J. V. Moloney, and E. M. Wright, Phys. Rev. Lett **83**, 2938 (1999). * (6) A. Couairon and L. Berge, Phys. Rev. Lett. **88**, 135003 (2002). * (7) V. Skarka, N. B. Aleksic, and V. I. Berezhiani, Phys. Lett. A **319**, 317 (2003). T. T. Xi, X. Lu, and J. Zhang, Phys. Rev. Lett. **96**, 025003 (2006). * (9) L. Berge, S. Skupin, F. Lederer, G. Mejean, J. Yu, J. Kasparian, E. Salmon, J. P. Wolf, M. Rodriguez, L. Woste, R. Bourayou, and R. Sauerbrey, Phys. Rev. Lett. **92**, 225002 (2004). * (10) A. Vincotte and L. Berge, Phys. Rev. Lett. **95**, 193901 (2005). * (11) S. Skupin, L. Berge, U. Peschel, F. Lederer, G. Mejean, J. Yu, J. Kasparian, E. Salmon, J. P. Wolf, M. Rodriguez, L. Woste, R. Bourayou, and R. Sauerbrey, Phys. Rev. E **70**, 046602 (2004). * (12) B. Hall, M. Lisak, D. Anderson, R. Fedele, and V. E. Semenov, Phys. Rev. E **65**, 035602(R) (2002). * (13) E. P. Wigner, Phys. Rev. **40**, 749 (1932). * (14) J. E. Moyal, Proc. Cambridge Philos. Soc. **45**, 99 (1949). * (15) E. I. Ivleva, V. V. Korobkin, and V. N. Sazonov, Sov. J. Quant. Electronics **13**, 754 (1983). * (16) V. V. Korobkin and V. N. Sazonov, Sov. Phys. JETP **54**, 636 (1981). * (17) Yu. L. Klimontovich, _The statistical Theory of Non-Equilibrium Processes in a Plasma_ (Pergamon Press, Oxford, 1967). * (18) J. T. Mendonca, _Theory of Photon Acceleration_ (Institute of Physics Publishing, Bristol, 2001). * (19) A. Couairon and L. Berge, Phys. Plasmas **7**, 193 (2000). * (20) Y. Kato, K. Mima, N. 
Miyanaga, S. Arinaga, Y. Kitagawa, M. Nakatsuka, and C. Yamanaka, Phys. Rev. Lett. **53**, 1057 (1984). * (21) M. Koenig, B. Faral, J. M. Boudenne, D. Batani, A. Benuzzi, and S. Bossi, Phys. Rev. E **50**, R3314 (1994). * (22) K. J. Moh, X.-C. Yuan, D. Y. Tang, W. C. Cheong, L. S. Zhang, D. K. Y. Low, X. Peng, H. B. Niu, and Z. Y. Lin, Appl. Phys. Lett. **88**, 091103 (2006). Fig. 1. The coherent filamentation instability growth rate, given by (7), plotted for different parameter values; all curves with \\(I_{0}=0.5\\), \\(\\alpha=1\\), and \\(K=3\\). The full thick line represents the standard filamentation instability growth rate for a nonlinear Schrodinger equation, i.e. \\(\ u=\\epsilon=\\gamma=0\\); the thin dashed curve has \\(\ u=\\gamma=0\\), while \\(\\epsilon=0.5\\); the thin dotted curve has \\(\ u=\\epsilon=0\\) and \\(\\gamma=0.5\\); the thin dashed-dotted curve has \\(\ u=0\\) and \\(\\epsilon=\\gamma=0.5\\); the thick dashed curve has \\(\ u=0.1\\) and \\(\\epsilon=\\gamma=0\\); finally, the thick dashed-dotted curve has \\(\ u=0.1\\) and \\(\\epsilon=\\gamma=1/2\\). Fig. 2. The partially coherent filamentation instability growth rate, given by (10), plotted for different parameter values; all curves with \\(I_{0}=0.75\\), \\(\\alpha=1\\), and \\(K=3\\). The full thick line again represents the standard filamentation instability growth rate for a nonlinear Schrodinger equation, i.e. \\(\\Delta=\ u=\\epsilon=\\gamma=0\\); the thin full curve has \\(\ u=\\epsilon=\\gamma=0\\), while \\(\\Delta=0.1\\); the thin dashed curve has \\(\\epsilon=\\gamma=0\\) while \\(\ u=0.05\\) and \\(\\Delta=0.1\\); the thin dotted curve has \\(\ u=0.05\\) and \\(\\gamma=0.1\\) while \\(\\epsilon=0\\). The effects finite width of the background intensity distribution of the optical pulse, as well as the influence of the higher order nonlinearity and losses are clearly seen here. Fig. 1. Fig. 2.
The filamentational instability of spatially broadband femtosecond optical pulses in air is investigated by means of a kinetic wave equation for spatially incoherent photons. An explicit expression for the spatial amplification rate is derived and analyzed. It is found that the spatial spectral broadening of the pulse can lead to stabilization of the filamentation instability. Thus, optical smoothing techniques could optimize current applications of ultra-short laser pulses, such as atmospheric remote sensing.
Gravitational Instabilities in Gaseous Protoplanetary Disks and Implications for Giant Planet Formation Richard H. Durisen Indiana University Alan P. Boss Carnegie Institution of Washington Lucio Mayer Eidgenossische Technische Hochschule Zurich Andrew F. Nelson Los Alamos National Laboratory Thomas Quinn University of Washington W. K. M. Rice University of Edinburgh ## 1 Introduction Gravitational instabilities (GI's) can occur in any region of a gas disk that becomes sufficiently cool or develops a high enough surface density. In the nonlinear regime, GI's can produce local and global spiral waves, self-gravitating turbulence, mass and angular momentum transport, and disk fragmentation into dense clumps and substructure. The particular emphasis of this review article is the possibility (_Kuiper_, 1951; _Cameron_, 1978), recently revived by _Boss_ (1997, 1998a), that the dense clumps in a disk fragmented by GI's may become self-gravitating precursors to gas giant planets. This particular idea for gas giant planet formation has come to be known as the disk instability theory. We provide here a thorough review of the physics of GI's as currently understood through a wide variety of techniques and offer tutorials on key issues of physics and methodology. The authors assembled for this paper were deliberately chosen to represent the full range of views on the subject. Although we disagree about some aspects of GI's and about some interpretations of available results, we have labored hard to present a fair and balanced picture. Other recent reviews of this subject include _Boss_ (2002c), _Durisen_ (2003), and _Durisen_ (2006). ## 2 Physics of GI's ### Linear Regime The parameter that determines whether GI's occur in thin gas disks is \\[Q=c_{s}\\kappa/\\pi G\\Sigma, \\tag{1}\\] where \\(c_{s}\\) is the sound speed, \\(\\kappa\\) is the epicyclic frequency at which a fluid element oscillates when perturbed from circular motion, \\(G\\) is the gravitational constant, and \\(\\Sigma\\) is the surface density. In a nearly Keplerian disk, \\(\\kappa\\approx\\) the rotational angular speed \\(\\Omega\\). For axisymmetric (ring-like) disturbances, disks are stable when \\(Q>1\\) (_Toomre_, 1964). At high \\(Q\\)-values, pressure, represented by \\(c_{s}\\) in (1), stabilizes short wavelengths, and rotation, represented by \\(\\kappa\\), stabilizes long wavelengths. The most unstable wavelength when \\(Q<1\\) is given by \\(\\lambda_{m}\\approx 2\\pi^{2}G\\Sigma/\\kappa^{2}\\). Modern numerical simulations, beginning with _Papaloizou and Savonije_ (1991), show that nonaxisymmetric disturbances, which grow as multi-armed spirals, become unstable for \\(Q\\lesssim\\) 1.5. Because the instability is both linear and dynamic, small perturbations grow exponentially on the time scale of a rotation period \\(P_{rot}=2\\pi/\\Omega\\). The multi-arm spiral waves that grow have a predominantly trailing pattern, and several modes can appear simultaneously (_Boss_, 1998a; _Laughlin et al._, 1998; _Nelson et al._, 1998; _Pickett et al._, 1998). Although the star does become displaced from the system center of mass (_Rice et al._, 2003a) and one-armed structures can occur (see Fig. 1 of _Cai et al._, 2006), one-armed modes do not play the dominant role predicted by _Adams et al._ (1989) and _Shu et al._ (1990). **2.2 Nonlinear Regime** Numerical simulations (see also Sections 3 and 4) show that, as GI's emerge from the linear regime, they may either saturate at nonlinear amplitude or fragment the disk. 
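For orientation, the \(Q\) criterion of Section 2.1 is simple to evaluate for a model disk. The sketch below computes \(Q(r)\) for an assumed power-law surface density and temperature profile around a solar-mass star; the profiles and normalizations are illustrative choices, not values taken from the studies cited in this chapter.

```python
import numpy as np

G, Msun, AU = 6.674e-8, 1.989e33, 1.496e13          # cgs
k_B, m_H, mu_mol = 1.381e-16, 1.673e-24, 2.33

def toomre_Q(r_AU, Mstar=1.0 * Msun, Sigma0=3000.0, p=1.2, T0=150.0, s=0.5):
    """Q = c_s * kappa / (pi * G * Sigma), with kappa ~ Omega for a Keplerian disk.

    Assumed profiles: Sigma = Sigma0 (r/AU)^-p [g cm^-2] and T = T0 (r/AU)^-s [K].
    """
    r = r_AU * AU
    Sigma = Sigma0 * r_AU**(-p)
    T = T0 * r_AU**(-s)
    c_s = np.sqrt(k_B * T / (mu_mol * m_H))
    Omega = np.sqrt(G * Mstar / r**3)
    return c_s * Omega / (np.pi * G * Sigma)

for rA in (1, 5, 10, 30, 100):
    print(f"r = {rA:3d} AU   Q = {toomre_Q(rA):6.2f}")
```

With these assumed profiles, \(Q\) falls toward the instability threshold only in the cold outer disk, which is the usual expectation for GI-active regions.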
Two major effects control or limit the outcome - disk thermodynamics and nonlinear mode coupling. At this point, the disks also develop large surface distortions. Disk Thermodynamics. As the spiral waves grow, they can steepen into shocks that produce strong localized heating (_Pickett et al._, 1998, 2000a; _Nelson et al._, 2000). Gas is also heated by compression and through net mass transport due to gravitational torques. The ultimate source of GI heating is work done by gravity. What happens next depends on whether a balance can be reached between heating and the loss of disk thermal energy by radiative or convective cooling. The notion of a balance of heating and cooling in the nonlinear regime was described as early as 1965 by _Goldreich and Lynden-Bell_ and has been used as a basis for proposing \\(\\alpha\\)-treatments for GI-active disks (_Paczynski_, 1978; _Lin and Pringle_, 1987). For slow to moderate cooling rates, numerical experiments, such as in Fig. 1, verify that thermal self-regulation of GI's can be achieved (_Tomley et al._, 1991; _Pickett et al._, 1998, 2000a, 2003; _Nelson et al._, 2000; _Gammie_, 2001; _Boss_, 2003; _Rice et al._, 2003b; _Lodato and Rice_, 2004, 2005; _Mejia et al._ 2005; _Cai et al._, 2006). \\(Q\\) then hovers near the instability limit, and the nonlinear amplitude is controlled by the cooling rate. Nonlinear Mode Coupling. Using second and third-order governing equations for spiral modes and comparing their results with a full nonlinear hydrodynamics treatment, _Laughlin et al._ (1997, 1998) studied nonlinear mode coupling in the most detail. Even if only a single mode initially emerges from the linear regime, power is quickly distributed over modes with a wide variety of wavelengths and number of arms, resulting in a self-gravitating turbulence that permeates the disk. In this gravitoturbulence, gravitational torques and even Reynold's stresses may be important over a wide range of scales (_Nelson et al._, 1998; _Gammie_, 2001; _Lodato and Rice_, 2004; _Mejia et al._, 2005). Surface Distortions. As emphasized by _Pickett et al._ (1998, 2000, 2003), the vertical structure of the disk plays a crucial role, both for cooling and for essential aspects of the dynamics. There appears to be a relationship between GI spiral modes and the surface or f-modes of stratified disks (_Pickett et al._, 1996; _Lubow and Ogilvie_, 1998). As a result, except for isothermal disks, GI's tend to have large amplitudes at the surface of the disk. Shock heating in the GI spirals can also disrupt vertical hydrostatic equilibrium, leading to rapid vertical expansions that resemble hydraulic jumps (_Boley et al._, 2005; _Boley and Durisen_, 2006). The resulting spiral corrugations can produce observable effects (e.g., masers, _Durisen et al._, 2001). **2.3 Heating and Cooling** Protoplanetary disks are expected to be moderately thin, with \\(H/r\\sim 0.05-0.1\\), where \\(H\\) is the vertical scale height and \\(r\\) is the distance from the star. For hydrostatic equilibrium in the vertical direction, \\(H\\approx c_{s}/\\Omega\\). The ratio of disk internal energy to disk binding energy \\({\\sim{c_{s}}^{2}/(r\\Omega)^{2}\\sim(H/r)^{2}}\\) is then \\(\\lesssim\\) 1%. As growing modes become nonlinear, they tap the enormous store of gravitational energy in the disk. 
Simulation of the disk energy budget must be done accurately and include all relevant effects, because it is the disk temperature, through \\(c_{s}\\) in equation 1, that determines whether the disk becomes or remains unstable, once the central mass, which governs most of \\(\\kappa\\), and Figure 1: Greyscale of effective temperature \\(T_{eff}\\) in degrees Kelvin for a face-on GI-active disk in an asymptotic state of thermal self-regulation. This figure is for the _Mejia et al._ (2005) evolution of a 0.07 \\(M_{\\odot}\\) disk around a 0.5 \\(M_{\\odot}\\) star with \\(t_{cool}=1\\) outer rotation period at 4,500 yr. The frame is 120 AU on a side. the disk mass distribution \\(\\Sigma\\) have been specified. _2.3.1 Cooling_ There have been three approaches to cooling - make simple assumptions about the equation of state (EOS), include idealized cooling characterized by a cooling time, or treat radiative cooling using realistic opacities. EOS. This approach has been used to study mode coupling (e.g., _Laughlin et al._. 1998) and to examine disk fragmentation in the limits of isentropic and isothermal behavior (e.g., _Boss_, 1998a, 2000; _Nelson et al._, 1998; _Pickett et al._, 1998, 2003; _Mayer et al._, 2004). Isothermal evolution of a disk, where the disk temperature distribution is held fixed in space or when following fluid elements, effectively assumes rapid loss of energy produced by shocks and \\(PdV\\) work. Isentropic evolution, where specific entropy is held fixed instead of temperature, is a more moderate assumption but is still lossy because it ignores entropy generation in shocks (_Pickett et al._, 1998, 2000a). Due to the energy loss, we do not refer to such calculations as adiabatic. Here, we restrict adiabatic evolution to mean cases where the fluid is treated as an ideal gas with shock heating included via an artificial viscosity term in the internal energy equation but no radiative cooling. Such calculations are adiabatic in the sense that there is no energy loss by the system. Examples include a simulation in _Pickett et al._ (1998) and simulations in _Mayer et al._ (2002, 2004). _Mayer et al._ use adiabatic evolution throughout some simulations, but, in others that are started with a locally isothermal EOS, they switch to adiabatic evolution as the disk approaches fragmentation. Simple Cooling Laws. Better experimental control over energy loss is obtained by adopting simple cooling rates per unit volume \\(\\Lambda=\\epsilon/t_{cool}\\), where \\(\\epsilon\\) is the internal energy per unit volume. The \\(t_{cool}\\) is specified either as a fixed fraction of the local disk rotation period \\(P_{rot}\\), usually by setting \\(t_{cool}\\Omega=\\) constant (_Gammie_, 2001; _Rice et al._, 2003b; _Mayer et al._, 2004b, 2005) or \\(t_{cool}=\\) constant everywhere (_Pickett et al._, 2003; _Mejia et al._, 2005). In the _Mayer et al._ work, the cooling is turned off in dense regions to simulate high optical depth. Regardless of \\(t_{cool}\\) prescription, the amplitude of the GI's in the asymptotic state (see Fig. 1), achieved when heating and cooling are balanced, increases as \\(t_{cool}\\) decreases. In addition to elucidating the general physics of GI's, such studies address whether GI's are intrinsically a local or global phenomenon (_Laughlin and Rozyczka_ 1996; _Balbus and Papaloizou_, 1999) and whether they can be properly modeled by a simple \\(\\alpha\\) prescription. 
When \\(t_{cool}\\) is globally constant, the transport induced by GI's is global with high mass inflow rates (_Mejia et al._, 2005; _Michael et al._, in preparation); when \\(t_{cool}\\Omega\\) is constant, transport is local, except for thick or very massive disks, and the inflow rates are well characterized by a constant \\(\\alpha\\) (_Gammie_, 2001; _Lodato and Rice_, 2004, 2005). Radiative Cooling. The published literature on this so far comes from only three research groups (_Nelson et al._, 2000; _Boss_, 2001, 2002b, 2004a; _Mejia_, 2004; _Cai et al._, 2006), but work by others is in progress. Because Solar System-sized disks encompass significant volumes with small and large optical depth, this becomes a difficult 3D radiative hydrodynamics problem. Techniques will be discussed in Section 3.2. For a disk spanning the conventional planet-forming region, the opacity is due primarily to dust. Complications which have to be considered include the dust size distribution, its composition, grain growth and settling, and the occurrence of fast cooling due to convection. In general, the radiative cooling time is dependent on temperature \\(T\\) and metallicity \\(Z\\). Let \\(\\kappa_{r}\\sim ZT^{\\beta_{r}}\\) and \\(\\kappa_{p}\\sim ZT^{\\beta_{p}}\\) be the Rosseland and Planck mean opacities, respectively, and let \\(\\tau\\sim\\kappa_{r}H\\) be the vertical optical depth to the midplane. For large \\(\\tau\\), \\[t_{cool}\\sim T/{T_{eff}}^{4}\\sim T^{-3}\\tau\\sim T^{-2.5+\\beta_{r}}Z; \\tag{2}\\] for small \\(\\tau\\), \\[t_{cool}\\sim T/\\kappa_{p}T^{4}\\sim T^{-3-\\beta_{p}}/Z. \\tag{3}\\] For most temperatures regimes, we expect \\(-3<\\beta<2.5\\), so that \\(t_{cool}\\) increases as \\(T\\) decreases. As \\(Z\\) increases, \\(t_{cool}\\) increases in optically thick regions, but decreases in optically thin ones. _2.3.2 Heating_ In addition to the internal heating caused by GI's through shocks, compression, and mass transport, there can be heating due to turbulent dissipation (_Nelson et al._, 2000) and other sources of shocks. In addition, a disk may be exposed to one or more external radiation fields due to a nearby OB star (e.g., _Johnstone et al._, 1998), an infalling envelope (e.g., _D'Alessio et al._, 1997), or the central star (e.g., _Chiang and Goldreich_, 1997). These forms of heat input can be comparable to or larger than internal sources of heating and can influence \\(Q\\) and the surface boundary conditions. Only crude treatments have been done so far for envelope irradiation (_Boss_ 2001, 2002b; _Cai et al._, 2006) and for stellar irradiation (_Mejia_, 2004). **2.4 Fragmentation** As shown first by _Gammie_ (2001) for local thin-disk calculations and later confirmed by _Rice et al._ (2003b) and _Mejia et al._ (2005) in full 3D hydro simulations, disks with a fixed \\(t_{cool}\\) fragment for sufficiently fast cooling, specifically when \\(t_{cool}\\Omega\\lesssim 3\\), or, equivalently, \\(t_{cool}\\lesssim P_{rot}/2\\). Finite thickness has a slight stabilizing influence (_Rice et al._, 2003b; _Mayer et al._, 2004a). When dealing with realistic radiative cooling, one cannot apply this simple fragmentation criterion to arbitrary initial disk models. One has to apply it to the asymptotic phase after nonlinear behavior is well-developed (_Johnson and Gammie_, 2003). Cooling times can be much longer in the asymptotic state than they are initially (_Cai et al._, 2006, _Mejia et al._, in preparation). 
For disks evolved under isothermal conditions, where a simple cooling time cannot be defined, local thin-disk calculations show fragmentation when \\(Q\\lesssim 1.4\\) (_Johnson and Gammie_, 2003). This is roughly consistent with results from global simulations (e.g., _Boss_, 2000; _Nelson et al._, 1998; _Pickett etal._, 2000a, 2003; _Mayer et al._, 2002, 2004a). Fig. 2 shows a classic example of a fragmenting disk. Although there is agreement on conditions for fragmentation, two important questions remain. Do real disks ever cool fast enough for fragmentation to occur, and do the fragments last long enough to contract into permanent protoplanets before being disrupted by tidal stresses, shear stresses, physical collisions, and shocks? ## 3 Numerical Methods A full understanding of disk evolution and the planet formation process cannot easily be obtained using a purely analytic approach. Although numerical methods are powerful, they have flaws and limitations that must be taken into account when interpreting results. Here we describe some commonly used numerical techniques and their limitations. ### Hydrodynamics Numerical models have been implemented using one or the other of two broad classes of techniques to solve the hydrodynamic equations. Each class discretizes the system in fundamentally different ways. On one hand, there are particle-based simulations using Smoothed Particle Hydrodynamics (SPH) (_Benz_, 1990; _Monaghan_, 1992), and, on the other, grid-based techniques (e.g., _Tohline_, 1980; _Fryxell et al._, 1991; _Stone and Norman_, 1992; _Boss and Myhill_, 1992; _Pickett_, 1995). SPH uses a collection of particles distributed in space to represent the fluid. Each particle is free to move in response to forces acting on it, so that the particle distribution changes with the system as it evolves. The particles are collisionless, meaning that they do not represent actual physical entities, but rather points at which the underlying distributions of mass, momentum, and energy are sampled. In order to calculate hydrodynamic quantities such as mass density or pressure forces, contributions from other particles within a specified distance, the smoothing length, are weighted according to a smoothing kernel and summed in pairwise fashion. Mutual gravitational forces are calculated by organizing particles into a tree, where close particles are treated more accurately than aggregates on distant branches. Grid-based methods use a grid of points, usually fixed in space, on which fluid quantities are defined. In the class of finite difference schemes, fluxes of mass, momentum, and energy between adjacent cells are calculated by taking finite differences of the fluid quantities in space. Although not commonly used in simulations of GI's, the Piecewise Parabolic Method (PPM) of _Collela and Woodward_ (1984) represents an example of the class of finite volume schemes. For our purposes, an important distinguishing factor is that while finite difference and SPH methods may require artificial viscosity terms to be added to the equations to ensure numerical stability and produce correct dissipation in shocks, PPM does not. ### Radiative Physics In Section 2.3, we describe a number of processes by which disks may heat and cool. In this section, we discuss various code implementations and their limitations. Fixed EOS evolution is computationally efficient because it removes the need to solve an equation for the energy balance. 
On the other hand, the gas instantly radiates away all heating due to shocks and, for the isothermal case, due to compressional heating as well. As a consequence, the gas may compress to much higher densities than are realistic, biasing a simulation towards GI growth and fragmentation even when a physically appropriate temperature or entropy scale is used. Although fixed \\(t_{cool}\\)'s represent a clear advance over fixed EOS's, equations 2 and 3 show that increasing the temperature, which makes the disk more stable, also decreases \\(t_{cool}\\). So it is incorrect to view short global cooling times as necessarily equivalent to more rapid GI growth and fragmentation. In order for fragmentation to occur, one needs both a short \\(t_{cool}\\) and a disk that is cool enough to be unstable (e.g., _Rafikov_, 2005).

The most physically inclusive simulations to date employ radiative transport schemes that allow \\(t_{cool}\\) to be determined by disk opacity. Current implementations (Section 2.3) employ variants of a radiative diffusion approximation in regions of medium to high optical depth \\(\\tau\\), integrated from infinity toward the disk midplane. On the other hand, radiative losses actually occur from regions where \\(\\tau\\lesssim 1\\), and so the treatment of the interface between optically thick and thin regions strongly influences cooling. Three groups have implemented different approaches. _Nelson et al._ (2000) assume that the vertical structure of the disk can be defined at each point as an atmosphere in thermal equilibrium. In this limit, the interface can be defined by the location of the disk photosphere, where \\(\\tau=2/3\\) (see, e.g., _Mihalas_, 1977). Cooling at each point is then defined as that due to a blackbody with the temperature of the photosphere. _Boss_ (2001, 2002b, 2004a, 2005) performs a 3D flux-limited radiative diffusion treatment for the optically thick disk interior (_Bodenheimer et al._, 1990), coupled to an outer boundary condition where the temperature is set to a constant for \\(\\tau<10\\), \\(\\tau\\) being measured along the radial direction. _Mejia_ (2004) and _Cai et al._ (2006) use the same radiative diffusion treatment as _Boss_ in their disk interior, but they define the interface using \\(\\tau=2/3\\), measured vertically, above which an optically thin atmosphere model is self-consistently grafted onto the outward flux from the interior. As discussed in Section 4.2, results for the three groups differ markedly, indicating that better understanding of radiative cooling at the disk surface will be required to determine the fate of GI's.

Figure 2: Midplane density contours for the isothermal evolution of a 0.09 \\(M_{\\odot}\\) disk around a 1 \\(M_{\\odot}\\) star. A multi-Jupiter mass clump forms near 12 o'clock by 374 years. The frame in the figure is 40 AU on a side. The figure is adapted from _Boss_ (2000).

### Numerical Issues

The most important limitations facing numerical simulations are finite computational resources. Simulations have a limited duration with a finite number of particles or cells, and they must have boundary conditions to describe behavior outside the region being computed. A simulation must distribute grid cells or particles over the interesting parts of the system to resolve the relevant physics and avoid errors associated with incorrect treatment of the boundaries. Here we describe a number of requirements for valid simulations and pitfalls to be avoided.
For growth of GI's, simulations must be able to resolve the wavelengths of the instabilities underlying the fragmentation. _Bate and Burkert_ (1997) and _Truelove et al._ (1997) each define criteria based on the collapse of a Jeans unstable cloud that links a minimum number of grid zones or particles to either the physical wavelength or mass associated with Jeans collapse. _Nelson_ (2006) notes that a Jeans analysis may be less relevant for disk systems because they are flattened and rotating rather than homogeneous and instead proposes a criterion based on the Toomre wavelength in disks. Generally, grid-based simulations must resolve the appropriate local instability wavelength with a minimum of 4 to 5 grid zones in each direction, while SPH simulations must resolve the local Jeans or Toomre mass with a minimum of a few hundred particles. Resolution of instability wavelengths will be insufficient to ensure validity if either the hydrodynamics or gravitational forces are in error. For example, errors in the hydrodynamics may develop in SPH and finite difference methods because a viscous heating term must be added artificially to model shock dissipation and, in some cases, to ensure numerical stability. In practice, the magnitude of dissipation depends in part on cell dimensions rather than just on physical properties. Discontinuities may be smeared over as many as \\(\\sim 10\\) or more cells, depending on the method. Further, _Mayer et al._ (2004a) have argued that because it takes the form of an additional pressure, artificial viscosity may by itself reduce or eliminate fragmentation. On the other hand, artificial viscosity can promote the longevity of clumps (see Fig. 3 of _Durisen_, 2006). Gravitational force errors develop in grid simulations from at least two sources. First, when _Pickett et al._ (2003) place a small blob within their grid, errors occur in the self-gravitation force of the blob that depend on whether the cells containing it have the same spacing in each coordinate dimension. Ideally, grid zones would have comparable spacing in all directions, but disks are both thin and radially extended. Use of spherical and cylindrical grids tends to introduce disparity in grid spacing. Second, _Boss_ (2000) shows that maximum densities inside clumps are enhanced by orders of magnitude as additional terms in his Poisson solver, based on a \\(Y_{lm}\\) decomposition, are included. SPH simulations encounter a different source of error because gravitational forces must be softened in order to preserve the collisionless nature of the particles. _Bate and Burkert_ (1997) and _Nelson_ (2006) each show that large imbalances between the gravitational and pressure forces can develop if the length scales for each are not identical, possibly inducing fragmentation in simulations. On the other hand, spatially and temporally variable softening implies a violation of energy conservation. Quantifying errors from sources such as insufficiently resolved shock dissipation or gravitational forces cannot be reliably addressed except by experimentation. Results of otherwise identical simulations performed at several resolutions must be compared, and identical models must be realized with more than one numerical method (as in Section 4.4), so that deficiencies in one method can be checked against strengths in another. The disks relevant for GI growth extend over several orders of magnitude in radial range, while GI's may develop large amplitudes only over some fraction of that range. 
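Before turning to boundary conditions, a minimal sketch of the resolution bookkeeping described at the start of this subsection is given below. The choice of the most-unstable wavelength \\(\\lambda\\approx 2c_{s}^{2}/G\\Sigma\\) for a \\(Q\\approx 1\\) disk, the mass estimate \\(M_{\\lambda}\\approx\\Sigma\\lambda^{2}\\), and all disk numbers are illustrative assumptions rather than the exact criteria of _Truelove et al._ (1997) or _Nelson_ (2006).

```python
import numpy as np

G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13     # cgs units

def resolution_check(sigma, c_s, dx, m_particle,
                     zones_required=4, particles_required=300):
    """Crude resolution check in the spirit of the criteria quoted above.
    sigma [g cm^-2], c_s [cm s^-1], dx = local grid spacing [cm],
    m_particle = SPH particle mass [g]."""
    lam = 2.0 * c_s**2 / (G * sigma)            # most-unstable wavelength, Q ~ 1
    m_lam = sigma * lam**2                      # rough mass within one wavelength
    grid_ok = lam / dx >= zones_required
    sph_ok = m_lam / m_particle >= particles_required
    return lam, grid_ok, sph_ok

# Illustrative numbers: Sigma = 1000 g/cm^2 and c_s = 5e4 cm/s (Q ~ 1.5 near
# 10 AU), 0.25 AU cells, and a 0.1 M_sun disk realized with 1e6 particles.
lam, grid_ok, sph_ok = resolution_check(1000.0, 5.0e4, 0.25 * AU, 0.1 * M_sun / 1.0e6)
print(f"lambda = {lam / AU:.1f} AU, grid resolved: {grid_ok}, SPH resolved: {sph_ok}")
```

Failing either check is a signal that fragmentation, or its absence, in that region of a simulation cannot be trusted.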
Because the disks of interest span such a large radial range, computationally affordable simulations require both inner and outer radial boundaries, even though the disk may spread radially and spiral waves propagate up to or beyond those boundaries. In grid-based simulations, _Pickett et al._ (2000b) demonstrate that numerically induced fragmentation can occur with incorrect treatment of the boundary. Studies of disk evolution must ensure that treatment of the boundaries does not produce artificial effects. In particle simulations, where there is no requirement that a grid be fixed at the beginning of the simulation, boundaries are no less a problem. The smoothing in SPH requires that the neighbors over which the smoothing occurs be distributed relatively evenly in a sphere around each particle for the hydrodynamic quantities to be well defined. At currently affordable resolutions (\\(\\sim 10^{5}-10^{6}\\) particles), the smoothing kernel extends over a large fraction of a disk scale height, so meeting this requirement is especially challenging. The impact on the outcomes of simulations has not yet been quantified.

## 4 Key Issues

### Triggers for GI's

When disks become unstable, they may either fragment or enter a self-regulated phase depending on the cooling time. It is therefore important to know how and when GI's may arise in real disks and the physical state of the disk at that time. Various mechanisms for triggering GI's are conceivable, but only a few have yet been studied in any detail. Possibilities include - the formation of a massive disk from the collapse of a protostellar cloud (e.g., _Laughlin and Bodenheimer_, 1994; _Yorke and Bodenheimer_, 1999), clumpy infall onto a disk (_Boss_, 1997, 1998a), cooling of a disk from a stable to an unstable state, slow accretion of mass, accumulation of mass in a magnetically dead zone, perturbations by a binary companion, and close encounters with other star/disk systems (_Boffin et al._, 1998; _Lin et al._, 1998). A few of these will be discussed further, with an emphasis on some new results on effects of binarity.

Several authors start their disks with stable or marginally stable \\(Q\\)-values and evolve them to instability either by slow idealized cooling (e.g., _Gammie_, 2001; _Pickett et al._, 2003; _Mejia et al._, 2005) or by more realistic radiative cooling (e.g., _Johnson and Gammie_, 2003; _Boss_, 2005, 2006; _Cai et al._, 2006). To the extent tested, fragmentation in idealized cooling cases is consistent with the Gammie criterion (Section 2.4). With radiative cooling, as first pointed out by _Johnson and Gammie_ (2003), it is difficult to judge whether a disk will fragment when it reaches instability based on its initial \\(t_{cool}\\). When _Mayer et al._ (2004a) grow the mass of a disk while keeping its temperature constant, dense clumps form in a manner similar to clump formation starting from an unstable disk. A similar treatment of accretion needs to be done using realistic radiative cooling. Simulations like these suggest that, in the absence of a strong additional source of heating, GI's are unavoidable in protoplanetary disks with sufficient mass (\\(\\sim 0.1M_{\\odot}\\) for a \\(\\sim 1M_{\\odot}\\) star). A disk evolving primarily due to magnetorotational instabilities (MRI's) may produce rings of cool gas in the disk midplane where the ionization fraction drops sufficiently to quell MRI's (_Gammie_, 1996; _Fleming and Stone_, 2003).
Dense rings associated with these magnetically dead zones should become gravitationally unstable and may well trigger a localized onset of GI's. This process might lead to disk outbursts related to FU Orionis events (_Armitage et al._, 2001) and induce chondrule-forming episodes (_Boley et al._, 2005). A phase of GI's robust enough to lead to gas giant protoplanet formation might be achieved through external triggers, like a binary star companion or a close encounter with another protostar and its disk.

A few studies have explored the effects of binary companions on GI's. _Nelson_ (2000) follows the evolution of disks in an equal-mass binary system with a semimajor axis of 50 AU and an eccentricity of 0.3 and finds that the disks are heated by internal shocks and viscous processes to such an extent as to become too hot for gas giant planet formation either by disk GI's or by core accretion, because volatile ices and organics are vaporized. In a comparison of the radiated emission calculated from his simulation to that from the L1551 IRS5 system, _Nelson_ (2000) finds that the calculated emission falls well below that observed and therefore that the temperatures in the simulation are underestimates. He therefore concludes that "planet formation is unlikely in equal-mass binary systems with \\(a\\sim\\) 50 AU." Currently, over two dozen binary or triple star systems have known extrasolar planets, with binary separations ranging from \\(\\sim 10\\) AU to \\(\\sim 10^{3}\\) AU, so some means must be found for giant planet formation in binary star systems with relatively small semimajor axes.

Using idealized cooling, _Mayer et al._ (2005) find that the effect of binary companions depends on the mass of the disks involved and on the disk cooling rate. For a pair of massive disks (\\(M\\sim 0.1M_{\\odot}\\)), formation of permanent clumps can be suppressed as a result of intense heating from spiral shocks excited by the tidal perturbation (Fig. 3 left panel). Clumps do not form in such disks for binary orbits having a semimajor axis of \\(\\sim 60\\) AU even when \\(t_{cool}<P_{rot}\\). The temperatures reached in these disks are \\(>200\\) K and would vaporize water ice, hampering core accretion, as argued by _Nelson_ (2000). On the other hand, pairs of less massive disks (\\(M\\sim 0.05M_{\\odot}\\)), which would not fragment in isolation since they start with \\(Q\\sim 2\\), can produce permanent clumps provided that \\(t_{cool}\\lesssim P_{rot}\\). This is because the tidal perturbation is weaker in this case (each perturber is less massive) and the resulting shock heating is thus diminished. Finally, the behavior of such binary systems approaches that seen in simulations of isolated disks once the semimajor axis grows beyond \\(100\\) AU (Fig. 3 right panel).

Figure 3: Face-on density maps for two simulations of interacting \\(M=0.1M_{\\odot}\\) protoplanetary disks in binaries with \\(t_{cool}=0.5P_{rot}\\). The binary in the left panel has a nearly circular orbit with an initial separation of 60 AU and is shown after first pericentric passage at 150 yrs (left) and then at 450 yrs (right). Large tidally induced spiral arms are visible at 150 yrs. The right panel shows a snapshot at 160 yrs from a simulation starting from an initial orbital separation that is twice as large. In this case, fragmentation into permanent clumps occurs after a few disk orbital times. Figures adapted from _Mayer et al._ (2005).
Calculations by _Boss_ (2006) of the evolution of initially marginally gravitationally stable disks show that the presence of a binary star companion could help to trigger the formation of dense clumps. The most likely explanation for the difference in outcomes between the models of _Nelson_ (2000) and _Boss_ (2006) is the relatively short cooling times in the latter models (\\(\\sim 1\\) to 2 \\(P_{rot}\\), see _Boss_, 2004a) compared to the effective cooling time in _Nelson_ (2000) of \\(\\sim 15P_{rot}\\) at 5 AU, dropping to \\(\\sim P_{rot}\\) at 15 AU. Similarly, some differences in outcomes between the results of _Boss_ (2006) and _Mayer et al._ (2005) can be expected based on different choices of the binary semimajor axes and eccentricities and differences in the thermodynamics. For example, _Mayer et al._ (2005) turn off cooling in regions with densities higher than \\(10^{-10}\\) g cm\\({}^{-3}\\) to account for high optical depths. Overall, the three different calculations agree that excitation or suppression of fragmentation by a binary companion depends sensitively on the balance between compressional/shock heating and cooling. This balance appears to depend on the mass of the disks involved. Interestingly, lighter disks are more likely to fragment in binary systems according to both _Mayer et al._ (2005) and _Boss_ (2006).

### Disk Thermodynamics

As discussed in Sections 2.2 and 4.1, heating and cooling are perhaps the most important processes affecting the growth and fate of GI's. Thermal regulation in the nonlinear regime leads naturally to systems near their stability limit where temporary imbalances in one heating or cooling term lead to a proportionate increase in a balancing term. For fragmentation to occur, a disk must cool quickly enough, or fail to be heated for long enough, to upset this self-regulation. A complete model of the energy balance that includes all relevant processes in a time-dependent manner is beyond the capabilities of the current generation of models. It requires knowledge of all the following - external radiation sources and their influence on the disk at each location, the energy loss rate of the disk due to radiative cooling, dynamical processes that generate thermal energy through viscosity or shocks, and a detailed equation of state to determine how much heating any of those dynamical processes generate. Recent progress towards understanding disk evolution has focused on the more limited goals of quantifying the sensitivity of results to various processes in isolation.

In a thin, steady state \\(\\alpha\\)-disk, the heating and cooling times are the same and take a value (_Pringle_, 1981; _Gammie_, 2001): \\[t_{cool}=\\frac{4}{9}\\left[\\gamma(\\gamma-1)\\alpha\\Omega\\right]^{-1}. \\tag{4}\\] For \\(\\alpha\\sim 10^{-2}\\) and \\(\\gamma=1.4\\), equation 4 gives \\(\\sim 12P_{rot}\\) (see the short numerical sketch below). This is a crude upper limit on the actual time scale required to change the disk thermodynamic state. External radiative heating from the star and any remaining circumstellar material can contribute a large fraction of the total heating (_D'Alessio et al._, 1998; _Nelson et al._, 2000), as will any internal heating due to globally generated dynamical instabilities that produce shocks. Each of these processes actually makes the disk more stable by heating it, but, as a consequence, dynamical evolution slows until the disk gains enough mass to become unstable again.
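To make the equation 4 estimate concrete, the following minimal sketch evaluates it numerically. It assumes a Keplerian \\(\\Omega\\) around a 1 \\(M_{\\odot}\\) star, and the particular \\(\\alpha\\) and \\(\\gamma\\) values are the illustrative choices quoted in the text rather than fits to any specific disk model.

```python
import numpy as np

G, M_sun, AU, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7   # cgs units

def alpha_disk_cooling_time(r_AU, alpha=1.0e-2, gamma=1.4, M_star=1.0):
    """Equation 4: the thermal (heating = cooling) time of a thin,
    steady-state alpha-disk, assuming Keplerian rotation."""
    omega = np.sqrt(G * M_star * M_sun / (r_AU * AU)**3)
    t_cool = (4.0 / 9.0) / (gamma * (gamma - 1.0) * alpha * omega)
    return t_cool, 2.0 * np.pi / omega          # seconds, and the rotation period

for r in (1.0, 10.0):
    t_cool, P_rot = alpha_disk_cooling_time(r)
    print(f"r = {r:4.1f} AU: t_cool = {t_cool / yr:7.1f} yr = {t_cool / P_rot:.1f} P_rot")
```

Because both \\(t_{cool}\\) and \\(P_{rot}\\) scale as \\(\\Omega^{-1}\\), their ratio depends only on \\(\\alpha\\) and \\(\\gamma\\), which is why a single number (\\(\\sim 12P_{rot}\\)) characterizes the whole disk for this choice of parameters.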
Such a marginally stable state will be precariously held because the higher temperatures mean that all of the heating and cooling time scales, i.e., the times required to remove or replace all the disk thermal energy, are short (equations 2 and 3). When the times are short, any disruption of the contribution from a single source may be able to change the thermodynamic state drastically within only a few orbits, perhaps beyond the point where balance can be restored.

A number of models (Section 2.3) have used fixed EOS evolution instead of a full solution of an energy equation to explore disk evolution. A fixed EOS is equivalent to specifying the outcomes of all heating and cooling events that may occur during the evolution, short-circuiting thermal feedback. If, for example, the temperature or entropy is set much too high or too low, a simulation may predict either that no GI's develop in a system, or that they inevitably develop and produce fragmentation, respectively. Despite this limitation, fixed EOS's have been useful to delineate approximate boundaries for regions of marginal stability. Since the thermal state is fixed, disk stability (as quantified by equation 1) is essentially determined by the disk's mass and spatial dimensions, through its surface density. Marginal stability occurs generally at \\(Q\\approx\\) 1.2 to 1.5 for locally isentropic evolutions, with a tendency for higher \\(Q\\)'s being required to ensure stability with softer EOS's (i.e., with lower \\(\\gamma\\) values) (_Boss_, 1998a; _Nelson et al._, 1998; _Pickett et al._, 1998, 2000a; _Mayer et al._, 2004a). At temperatures appropriate for observed systems (e.g., _Beckwith et al._, 1990), these \\(Q\\) values correspond to disks more massive than \\(\\sim 0.1M_{*}\\) or surface densities \\(\\Sigma\\gtrsim 10^{3}\\) g cm\\({}^{-2}\\).

As with their fixed EOS cousins, models with fixed \\(t_{cool}\\) can quantify boundaries at which fragmentation may set in. They represent a clear advance over fixed EOS evolution by allowing thermal energy generated by shocks or compression to be retained temporarily, and thereby enabling the disk's natural thermal regulation mechanisms to determine the evolution. Models that employ fixed cooling times can address the question of how violently the disk's thermal regulation mechanisms must be disrupted before they can no longer return the system to balance. An example of the value of fixed \\(t_{cool}\\) calculations is the fragmentation criterion \\(t_{cool}\\lesssim 3\\Omega^{-1}\\) (see Section 2.4).

The angular momentum transport associated with disk self-gravity is a consequence of the gravitational torques induced by GI spirals (e.g., _Larson_, 1984). The viscous \\(\\alpha\\) parameter is actually a measure of that stress normalized by the local disk pressure. As can be seen by solving equation 4 for \\(\\alpha\\), the stress in a self-gravitating disk depends on the cooling time and on the equation of state through the specific heat ratio. As long as the dimensionless scale height is \\(H\\lesssim 0.1\\), global simulations by _Lodato and Rice_ (2004) with \\(t_{cool}\\Omega=\\) constant confirm Gammie's assumption that transport due to disk self-gravity can be modeled as a local phenomenon and that equation 4 is accurate. _Gammie_ (2001) and _Rice et al._ (2005) show that there is a maximum stress that can be supplied by such a quasi-steady, self-gravitating disk.
Fragmentation occurs if the stress required to keep the disk in a quasi-steady state exceeds this maximum value. The relationship between the stress and the specific heat ratio, \\(\\gamma\\), results in the cooling time required for fragmentation increasing as \\(\\gamma\\) decreases. For \\(\\gamma=7/5\\), the cooling time below which fragmentation occurs may be more like \\(2P_{rot}\\), not the \\(3/\\Omega\\approx P_{rot}/2\\) obtained for \\(\\gamma=2\\) (_Gammie_, 2001; _Mayer et al._, 2004b; _Rice et al._, 2005).

Important sources of stress and heating in the disk that lie outside the framework of Gammie's local analysis are global gravitational torques due to low-order GI spiral modes. There are two ways this can happen - a geometrically thick massive disk (_Lodato and Rice_, 2005) and a fixed global \\(t_{cool}=\\) constant (_Mejia et al._, 2005). Disks then initially produce large-amplitude spirals, resulting in a transient burst of global mass and angular momentum redistribution. For \\(t_{cool}=\\) constant and moderate masses, the disks then settle down to a self-regulated asymptotic state but with gravitational stresses significantly higher than predicted by equation 4 (_Michael et al._, in preparation). For the very massive \\(t_{cool}\\Omega=\\) constant disks, recurrent episodic redistributions occur. In all these cases, the heating in spiral shocks is spatially and temporally very inhomogeneous, as are fluctuations in all thermodynamic variables and the velocity field.

The most accurate method to determine the internal thermodynamics of the disk is to couple the equations of radiative transport to the hydrodynamics directly. All heating or cooling due to radiation will then be properly defined by the disk opacity, which depends on local conditions. This is important because some fraction of the internal heating will be highly inhomogeneous, occurring predominantly in compressions and shocks as gas enters a high density spiral structure, or at high altitudes where waves from the interior are refracted and steepen into shocks (_Pickett et al._, 2000a) and where disks may be irradiated (_Mejia_, 2004; _Cai et al._, 2006). Temperatures and the \\(t_{cool}\\)'s that depend on them will then be neither simple functions of radius, nor a single globally defined value. Depending on whether the local cooling time of the gas inside the high density spiral structure is short enough, fragmentation will be more or less likely, and additional hydrodynamic processes such as convection may become active if large enough gradients can be generated. Indeed, recent simulations of _Boss_ (2002a, 2004a) suggest that vertical convection is active in disks when radiative transfer is included, as expected for high \\(\\tau\\) according to _Ruden and Pollack_ (1991). This is important because convection will keep the upper layers of the disk hot, at the expense of the dense interior, so that radiative cooling is more efficient and fragmentation is enhanced. The results have not yet been confirmed by other work and therefore remain somewhat controversial. Simulations by _Mejia_ (2004) and _Cai et al._ (2006) are most similar to those of _Boss_ and could have developed convection sufficient to induce fragmentation, but none seems to occur. No fragmentation occurs in _Nelson et al._ (2000) either, where convection is implicitly assumed to be efficient through their assumption that the entropy of each vertical column is constant.
Recent re-analysis of their results reveals \\(t_{cool}\\sim 3\\) to 10 \\(P_{rot}\\), depending on radius, which is too long to allow fragmentation. These \\(t_{cool}\\)'s are in agreement with those seen by _Cai et al._ (2006) and by _Mejia et al._ (in preparation) for solar metallicity. The _Nelson et al._ results are also interesting because their comparison of the radiated output to SEDs observed for real systems demonstrates that substantial additional heating beyond that supplied by GI's is required to reproduce the observations, perhaps further inhibiting fragmentation in their models. However, using the same temperature distribution between 1 and 10 AU now used in Boss's GI models, combined with temperatures outside this region taken from models by _Adams et al._ (1988), _Boss and Yorke_ (1996) are able to reproduce the SED of the T Tauri system. It is unclear at present why their results differ from those of _Nelson et al._ (2000). The origins of the differences between the three studies are uncertain, but possibilities include differences of both numerical and physical origin. The boundary treatment at the optically thick/thin interface is different in each case (see Section 3.2), influencing the efficiency of cooling, as are the numerical methods and resolutions. _Boss_ and the _Cai/Mejia_ group each use 3D grid codes but with spherical and cylindrical grids respectively, and each with a different distribution of grid zones, while _Nelson et al._ use a 2D SPH code. Perhaps significantly, _Cai/Mejia_ assume their ideal gas has \\(\\gamma=5/3\\) while _Boss_ adopts an EOS that includes rotational and vibrational states of hydrogen, so that \\(\\gamma\\approx 7/5\\) for typical disk conditions. It is possible that differences in the current results may be explained if the same sensitivities to \\(\\gamma\\) seen in fixed EOS and fixed cooling simulations also hold when radiative transfer is included. _Boss and Cai_ (in preparation) are now conducting direct comparison calculations to isolate the cause of their differences. The preliminary indication is that the radiative boundary conditions may be the critical factor. Discrepant results for radiatively cooled models should not overshadow the qualitative agreement reached about the relationship between disk thermodynamics and fragmentation. If the marginally unstable state of a self-regulated disk is upset quickly enough by an increase in cooling or decrease in heating, the disk may fragment. What is still very unclear is whether such conditions can develop in real planet-forming disks. It is key to develop a full 3D portrait of the disk surface, so that radiative heating and cooling sources may be included self-consistently in numerical models. Important heating sources will include the envelope, the central star, neighboring stars, and self-heating from other parts of the disk, all of which will be sensitive to shadowing caused by corrugations in the disk surface that develop and change with time due to the GI's themselves. Preliminary studies of 3D disk structure (_Boley and Durisen_, 2006) demonstrate that vertical distortions, analogous to hydraulic jumps, will in fact develop (see also _Pickett et al._, 2003). If these corrugations are sufficient to cause portions of the disk to be shadowed, locally rapid cooling may occur in the shadowed region, perhaps inducing fragmentation. An implicit assumption of the discussion above is that the opacity is well known. In fact, it is not. 
The dominant source of opacity is dust, whose size distribution, composition, and spatial distribution will vary with time (_Cuzzi et al._, 2001; _Klahr_, 2003, see also Section 5 below), causing the opacity to vary as a result. So far, no models of GI evolution have included effects from any of these processes, except that _Nelson et al._ model dust destruction while _Cai_ and _Mejia_ consider opacity due to large grains. Possible consequences are a misidentification of the disk photospheric surface if dust grains settle towards the midplane, or incorrect radiative transfer rates in optically thick regions if the opacities themselves are in error. **4.3 Orbital Survival of Clumps** Once dense clumps form in a gravitationally unstable disk, the question becomes one of survival: Are they transient structures or permanent precursors of giant planets? Long-term evolution of simulations that develop clumps is difficult because it requires careful consideration of not only the large-scale dynamical processes that dominate formation but also physical processes that exert small influences over long time scales (e.g., migration and transport due to viscosity). It also requires that boundary conditions be handled gracefully in cases where a clump or the disk itself tries to move outside the original computational volume. On a more practical level, the extreme computational cost of performing such calculations limits the time over which systems may be simulated. As a dense clump forms, the temperatures, densities, and fluid velocities within it all increase. As a result, time steps, limited by the Courant condition, can decrease to as little as minutes or hours as the simulation attempts to resolve the clump's internal structure. So far only relatively short integration times of up to a few\\(\\times 10^{3}\\) yrs have been possible. Here, we will focus on the results of simulations and refer the reader to the chapters by _Papaloizou et al._ and _Levison et al._ for discussions of longer-term interactions. In the simplest picture of protoplanet formation via GI's, structures are assumed to evolve along a continuum of states that are progressively more susceptible to fragmentation, presumably ending in one or more bound objects which eventually become protoplanets. _Pickett et al._ (1998, 2000a, 2003) and _Mejia et al._ (2005) simulate initially smooth disks subject to growth of instabilities and, indeed, find growth of large-amplitude spiral structures that later fragment into arclets or clumps. Instead of growing more and more bound, however, these dense structures are sheared apart by the background flow within an orbit or less, especially when shock heating is included via an artificial viscosity. This suggests that a detailed understanding of the thermodynamics inside and outside the fragments is critical for understanding whether fragmentation results in permanently bound objects. Assuming that permanently bound objects do form, two additional questions emerge. First, how do they accrete mass and how much do they accrete? Second, how are they influenced by the remaining disk material? Recently, _Mayer et al._ (2002, 2004a) and _Lufkin et al._ (2004) have used SPH calculations to follow the formation and evolution of clumps in simulations covering up to 50 orbits (roughly 600 yrs), and _Mayer et al._ (in preparation) are extending these calculations to several thousand years. 
They find that, when a locally isothermal EOS is used well past initial fragmentation, clumps grow to \\(\\sim 10M_{J}\\) within a few hundred years. On the other hand, in simulations using an ideal gas EOS plus bulk viscosity, accretion rates are much lower (\\(<10^{-6}M_{\\odot}\\)/yr), and clumps do not grow to more than a few \\(M_{J}\\) or \\(\\sim\\) 1% of the disk mass. The assumed thermodynamic treatment has important effects not only on the survival of clumps, but also on their growth. _Nelson and Benz_ (2003), using a grid-based code and starting from a 0.3\\(M_{J}\\) seed planet, show that accretion rates as fast as those in the isothermal case are unphysically high because the newly accreted gas cannot cool fast enough, even with the help of convection, unless some localized dynamical instability is present in the clump's envelope. So, the growth rate of an initially small protoplanet may be limited by its ability to accept additional matter rather than the disk's ability to supply it. They note (see also _Lin and Papaloizou_, 1993; _Bryden et al._, 1999; _Kley_, 1999; _Lubow et al._, 1999; _Nelson et al._, 2000) that the accretion process after formation is self-limiting at a mass comparable to the largest planet masses yet discovered (see the chapter by _Udry et al._).

Fig. 4 shows one of the extended _Mayer et al._ simulations, containing two clumps in one disk realized with \\(2\\times 10^{5}\\) particles, and run for about 5,000 years (almost 200 orbits at \\(10\\) AU). There is little hint of inward orbital migration over a few thousand year time scale. Instead, both clumps appear to migrate slowly outward. _Boss_ (2005) uses sink particles ("virtual planets") to follow a clumpy disk for about 1,000 years. He also finds that the clumps do not migrate rapidly. In both works, the total simulation times are quite short compared to the disk lifetime and so are only suggestive of the longer-term fate of the objects. Nevertheless, the results are important, because they illustrate shortcomings in current analytic models of migration. Although migration theory is now extremely well developed (see the chapter by _Papaloizou et al._), predictions for migration at the earliest phases of protoplanet formation by GI's are difficult to make, because many of the assumptions on which the theory is based are not well satisfied. More than one protoplanet may form in the same disk; they may form with masses larger than linear theory can accommodate, and they may be significantly extended rather than the point masses assumed by theory. If the disk remains massive, it may also undergo gravitoturbulence that changes the disk's mass distribution on a short enough time scale to call into question the resonance approximations in the theory. If applicable in the context of these limitations, recent investigations into the character of corotation resonances (see the chapter by _Papaloizou et al._) and vortex excitation (_Koller et al._, 2003) in the corotation region may be of particular interest, because a natural consequence of these processes is significant mass transport across the clump's orbit and reduced inward migration, which is in fact seen in the above simulations.

### Comparison Test Cases

Disk instability has been studied so far with various types of grid codes and SPH codes that have different relative strengths and weaknesses (Section 3).
Whether different numerical techniques find comparable results with nearly identical assumptions is not yet known, although some comparative studies have been attempted (_Nelson et al._, 1998). Several aspects of GI behavior can be highly dependent on code type. For example, SPH codes require artificial viscosity to handle shocks such as those occurring along spiral arms. Numerical viscosity can smooth out the velocity field in overdense regions, possibly inhibiting collapse (_Mayer et al._, 2004a) but, at the same time, possibly increasing clump longevity if clumps form (see Fig. 3 of _Durisen_, 2006). Gravity solvers that are both accurate and fast are a robust feature of SPH codes, while gravity solvers in grid codes can under-resolve the local self-gravity of the gas (_Pickett et al._, 2003). Both types of codes can lead to spurious fragmentation or suppress it when a force imbalance between pressure and gravity results at scales comparable to the local Jeans or Toomre length due to lack of resolution (_Truelove et al._, 1997; _Bate and Burkert_, 1997; _Nelson_, 2006).

Another major code difference is in the setup of initial conditions. Although both Eulerian grid-based and Lagrangian particle-based techniques represent an approximation to the continuum fluid limit, noise levels due to discreteness are typically higher in SPH simulations. Initial perturbations are often applied in grid-based simulations to seed GI's (either random or specific modes or both, e.g., _Boss_, 1998a), but are not required in SPH simulations, because they already have built-in Poissonian noise at the level of \\(\\sqrt{N}/N\\) or more, where \\(N\\) is the number of particles. In addition, the SPH calculation of hydrodynamic variables introduces small scale noise at the level of \\(1/N_{neigh}\\), where \\(N_{neigh}\\) is the number of neighboring particles contained in one smoothing kernel. Grid-based simulations require boundary conditions which restrict the dynamic range of the simulations. For example, clumps may reach the edge of a computational volume after only a limited number of orbits (_Boss_, 1998a, 2000; _Pickett et al._, 2000a). Cartesian grids can lead to artificial diffusion of angular momentum in a disk, a problem that can be avoided using a cylindrical grid (_Pickett et al._, 2000a) or spherical grid (_Boss and Myhill_, 1992). _Myhill and Boss_ (1993) find good agreement between spherical and Cartesian grid results for a nonisothermal rotating protostellar collapse problem, but evolution of a nearly equilibrium disk over many orbits in a Cartesian grid is probably still a challenge.

Figure 4: The orbital evolution of two clumps formed in a massive, growing protoplanetary disk simulation described in _Mayer et al._ (2004). A face-on view of the system after 2264 years of evolution is shown on the left, using a color coded density map (the box is 38 AU on a side); the right panel shows the orbital evolution of the two clumps. Overall, both clumps migrate outward.

In order to understand how well different numerical techniques can agree on the outcome of GI's, different codes need to run the same initial conditions. This is being done in a large, on-going code-comparison project that involves eight different codes, both grid-based and SPH. Among the grid codes, there are several adaptive mesh refinement (AMR) schemes.
The comparison is part of a larger effort involving several areas of computational astrophysics (http://krone.physik.unizh.ch/~moore/wengen/tests.html). The system chosen for the comparison is a uniform temperature, massive, and initially very unstable disk with a diameter of about 20 AU. The disk is evolved isothermally and has a \\(Q\\) profile that decreases outward, reaching a minimum value \\(\\sim 1\\) at the disk edge. The disk model is created using a particle representation by letting its mass grow slowly, as described in _Mayer et al._ (2004a). This distribution is then interpolated onto the various grids. Here we present the preliminary results of the code comparisons from four codes - two SPH codes called GASOLINE (_Wadsley et al._, 2004) and GADGET2 (_Springel et al._, 2001; _Springel_, 2005), the Indiana University code with a fixed cylindrical grid (_Pickett_, 1995; _Mejia_, 2004), and the Cartesian AMR code called FLASH (_Fryxell et al._, 2000).

Readers should consult the published literature for detailed descriptions, but we briefly enumerate some basic features. FLASH uses a PPM-based Riemann solver on a Cartesian grid with directional splitting to solve the Euler equations, and it uses an iterative multi-grid Poisson solver for gravity. Both GASOLINE and GADGET2 solve the Euler equations using SPH and solve gravity using a treecode, a binary tree in the case of GASOLINE and an oct-tree in the case of GADGET2. Gravitational forces from individual particles are smoothed using a spline kernel softening, and they both adopt the _Balsara_ (1995) artificial viscosity that minimizes shear forces on large scales. The Indiana code is a finite difference grid-based code which solves the equations of hydrodynamics using the van Leer method. Poisson's equation is solved at the end of each hydrodynamic step by a Fourier transform of the density in the azimuthal direction, direct solution by cyclic reduction of the transform in (\\(r\\),\\(z\\)), and a transform back to real space (_Tohline_, 1980). The code's von Neumann-Richtmyer artificial bulk viscosity is not used for isothermal evolutions. The two SPH codes are run with fixed gravitational softening, and the local Jeans length (see _Bate and Burkert_, 1997) before and after clump formation is well resolved. Runs with adaptive gravitational softening will soon be included in the comparison.

Here we show the results of the runs whose initial conditions were generated from the \\(8\\times 10^{5}\\) particles setup, which was mapped onto a 512x512x52 Cartesian grid for FLASH and onto a 512x1024x64 (\\(r\\),\\(\\phi\\),\\(z\\)) cylindrical grid for the Indiana code. Comparable resolution (cells for grids or gravity softening for SPH runs) is available initially in the outer part of the disk, where the \\(Q\\) parameter reaches its minimum. In the GASOLINE and GADGET2 runs, the maximum spatial resolution is set by the gravitational softening at 0.12 AU. Below this scale, gravity is essentially suppressed. The FLASH run has an initial resolution of 0.12 AU at 10 AU, comparable with the SPH runs.

Figure 5: Equatorial slice density maps of the disk in the test runs after about 100 yrs of evolution. The initial disk is 20 AU in diameter. From top left to bottom right are the results from GASOLINE and GADGET2 (both SPH codes), from the Indiana cylindrical-grid code, and from the AMR Cartesian-grid code FLASH. The SPH codes adopt the shear-reduced artificial viscosity of _Balsara_ (1995).
The Indiana code has the same resolution as FLASH in the radial direction but has a higher azimuthal resolution of 0.06 AU at 10 AU. As can be seen from Fig. 5, the level of agreement between the runs is satisfactory, although significant differences are noticeable. More clumps are seen in the Indiana code simulation. On the other hand, clumps have similar densities in FLASH and GASOLINE, while they appear more fluffy in the Indiana code than in the other three. The causes are probably the different gravity solvers and the non-adaptive nature of the Indiana code. Even within a single category of code, SPH or grid-based, different types of viscosity, both artificial and numerical, might be more or less diffusive and affect the formation and survival of clumps. In fact, tests show that more fragments are present in SPH runs with shear-reduced artificial viscosity than with full shear viscosity. Although still in an early stage, the code comparison has already produced one important result, namely that, once favorable conditions exist, widespread fragmentation is obtained in high-resolution simulations using any of the standard numerical techniques. On the other hand, the differences already noticed require further understanding and will be addressed in a forthcoming paper (_Mayer et al._, in preparation). Although researchers now agree on conditions for disk fragmentation, no consensus yet exists about whether or where real disks fragment or how long fragments really persist. Answers to these questions require advances over current techniques for treating radiative physics and compact structures in global simulations.

## 5 Interactions with Solids

The standard model for the formation of giant gaseous planets involves the initial growth of a rocky core that, when sufficiently massive, accretes a gaseous envelope (_Bodenheimer and Pollack_, 1986; _Pollack et al._, 1996). In this scenario, the solid particles in the disk must first grow from micron-sized dust grains to kilometer-sized planetesimals that then coagulate to form the rocky core. In a standard protoplanetary disk, the gas pressure near the disk midplane will generally decrease with increasing radius, resulting in an outward pressure gradient force that causes the gas to orbit with sub-Keplerian velocities. The solid particles, on the other hand, do not feel the gas pressure and orbit with Keplerian velocities. This velocity difference results in a drag force that generally causes the solid particles to lose angular momentum and to spiral inward toward the central star with a radial drift velocity that depends on the particle size (_Weidenschilling_, 1977). While this differential radial drift can mix together particles of different size and allow large grains to grow by sweeping up smaller grains (_Weidenschilling and Cuzzi_, 1993), it also introduces a potential problem. Depending on the actual disk properties, the inward radial velocity for particles with sizes between \\(1\\) cm and \\(1\\) m can be as high as \\(10^{4}\\) cm s\\({}^{-1}\\) (_Weidenschilling_, 1977), so that these particles could easily migrate into the central star before becoming large enough to decouple from the disk gas. If these particles do indeed have short residence times in the disk, it is difficult to envisage how they can grow to form the larger kilometer-sized planetesimals which are required for the subsequent formation of the planetary cores.
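The size dependence of this drift can be illustrated with a minimal sketch. The sketch below is not the full _Weidenschilling_ (1977) calculation; it assumes an Epstein stopping time (adequate at the low gas densities chosen here, though the drag law changes for the largest bodies), a standard drift estimate of the form \\(v_{r}\\simeq 2\\eta v_{K}\\,\\mathrm{St}/(1+\\mathrm{St}^{2})\\), and purely illustrative disk numbers.

```python
import numpy as np

G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13     # cgs units

def drift_speed(a_cm, r_AU=10.0, rho_gas=1.0e-11, c_s=5.0e4,
                rho_solid=2.0, eta=3.0e-3, M_star=1.0):
    """Order-of-magnitude inward drift speed of a solid of radius a_cm.
    St is the dimensionless (Epstein) stopping time; eta measures the
    degree to which pressure support makes the gas sub-Keplerian."""
    r = r_AU * AU
    omega = np.sqrt(G * M_star * M_sun / r**3)
    v_K = omega * r
    v_th = np.sqrt(8.0 / np.pi) * c_s                    # mean thermal speed
    St = rho_solid * a_cm / (rho_gas * v_th) * omega     # stopping time x Omega
    return 2.0 * eta * v_K * St / (1.0 + St**2), St

for a in (1.0, 10.0, 50.0, 1000.0):                      # particle radii in cm
    v_r, St = drift_speed(a)
    print(f"a = {a:7.1f} cm: St = {St:7.2f}, inward drift ~ {v_r:7.1f} cm/s")
```

The drift is fastest where the stopping time is comparable to the orbital time (St near unity), which for these assumed values falls in the tens-of-centimeters range, consistent with the problematic centimeter-to-meter sizes quoted above.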
The above situation is only strictly valid in smooth, laminar disks with gas pressures that decrease monotonically with increasing radius. If there are any regions in the disk that have local pressure enhancements, the situation can be very different. In the vicinity of a pressure enhancement, the gas velocity can be either super- or sub-Keplerian depending on the local gas pressure gradient. The drag force can then cause solid particles to drift outwards or inwards, respectively (_Haghighipour and Boss_, 2003a,b). The net effect is that the solid particles should drift towards pressure maxima. A related idea is that a baroclinic instability could lead to the production of long-lived, coherent vortices (_Klahr and Bodenheimer_, 2003) and that solid particles would drift towards the center of the vortex where the enhanced concentration could lead to accelerated grain growth (_Klahr and Henning_, 1997). The existence of such vortices is, however, uncertain (_Johnson and Gammie_, 2006).

An analogous process could occur in a self-gravitating disk, where structures formed by GI activity, such as the centers of the spiral arms, are pressure and density maxima. In such a case, the drag force results in solid particles drifting towards the centers of these structures, with the most significant effect occurring for those particles that would, in a smooth, laminar disk, have the largest inward radial velocities. If disks around very young protostars do indeed undergo a self-gravitating phase, then we would expect the resulting spiral structures to influence the evolution of the solid particles in the disk (_Haghighipour and Boss_, 2003a,b). A GI-active disk will also transport dust grains small enough to remain tied to the gas across distances of many AU's in only 1,000 yrs or so (_Boss_, 2004b), a potentially important process for explaining the components of primitive meteorites (see the chapter by _Alexander et al._). _Boley and Durisen_ (2006) show that, in only one pass of a spiral shock, hydraulic jumps induced by shock heating can mix gas and entrained dust radially and vertically over length-scales \\(\\sim H\\) through the generation of huge breaking waves. The presence of chondrules in primitive chondritic meteorites is circumstantial evidence that the Solar Nebula experienced a self-gravitating phase in which spiral shock waves provided the flash heating required to explain their existence (_Boss and Durisen_, 2005a,b; _Boley et al._, 2005).

Figure 6: Surface density structure of particles embedded in a self-gravitating gas disk. a) The left-hand panel shows that the distribution of 10 m radius particles is similar to that of the gas disk, because these particles are not influenced strongly by gas drag. b) The right-hand panel illustrates that 50 cm particles are strongly influenced by gas drag and become concentrated into the GI-spirals with density enhancements of an order of magnitude or more. Figures adapted from _Rice et al._ (2004).

To test how a self-gravitating phase in a protostellar disk influences the evolution of embedded particles, _Rice et al._ (2004) perform 3D self-gravitating disk simulations that include particles evolved under the influence of both disk self-gravity and gas drag. In their simulations, they consider both 10 m particles, which, for the chosen disk parameters, are only weakly coupled to the gas, and 50 cm particles that are significantly influenced by the gas drag. Fig.
6a shows the surface density structure of the 10 m particles one outer rotation period after they were introduced into the gas disk. The structure in the particle disk matches closely that of the gas disk (not shown) showing that these particles are influenced by the gravitational force of the gas disk, but not so strongly influenced by gas drag. Fig. 6b shows the surface density structure of 50 cm particles at the same epoch. Particles of this size are influenced by gas drag and Fig. 6b shows that, compared to the 10 m particles, these particles become strongly concentrated into the GI-induced spiral structures. The ability of solid particles to become concentrated in the center of GI-induced structures suggests that, even if giant planets do not form directly via GI's, a self-gravitating phase may still play an important role in giant planet formation. The solid particles may achieve densities that could accelerate grain growth either through an enhanced collision rate or through direct gravitational collapse of the particle sub-disk (_Youdin and Shu_, 2002). _Durisen et al._ (2005) also note that dense rings can be formed near the boundaries between GI-active and inactive regions of a disk (e.g., the central disk in Fig. 1). Such rings are ideal sites for the concentration of solid particles by gas drag, possibly leading to accelerated growth of planetary embryos. Even if processes like these do not contribute directly to planetesimal growth, GI's may act to prevent the loss of solids by migration toward the proto-Sun. The complex and time-variable structure of GI activity should increase the residence time of solids in the disk and potentially give them enough time to become sufficiently massive to decouple from the disk gas. ## 6 Planet Formation The relatively high frequency (\\(\\sim\\)10%) of solar-type stars with giant planets that have orbital periods less than a few years suggests that longer-period planets may be quite frequent. Perhaps \\(\\sim\\) 12 to 25% of G dwarfs may have gas giants orbiting within \\(\\sim\\)10 AU. If so, gas giant planet formation must be a fairly efficient process. Because roughly half of protoplanetary disks disappear within 3 Myr or less (_Bally et al._, 1998; _Haisch et al._, 2001; _Briceno et al._, 2001; _Eisner and Carpenter_, 2003), core accretion may not be able to produce a high frequency of gas giants. There is also now strong theoretical (_Yorke and Bodenheimer_, 1999) and observational (_Osorio et al._, 2003; _Rodriguez et al._, 2005; _Eisner et al._, 2005) evidence that disks around very young protostars should indeed be sufficiently massive to experience GI's. _Rodriguez et al._ (2005) show a 7 mm VLA image of a disk around a Class 0 protostar that may have a mass half that of the central star. Hybrid scenarios may help remove the bottleneck by concentrating meter-sized solids, but it is not clear that they can shorten the overall time scale for core accretion, which is limited by the time needed for the growth of 10 \\(M_{\\oplus}\\) cores and for accretion of a large gaseous envelope. _Durisen et al._ (2005) suggest that the latter might be possible in dense rings, but detailed calculations of core growth or envelope accretion in the environment of a dense ring do not now exist. Disk instability, on the other hand, has no problem forming gas giants rapidly in even the shortest-lived protoplanetary disk. 
Most stars form in regions of high-mass star formation (_Lada and Lada_, 2003) where disk lifetimes should be the shortest due to loss of outer disk gas by UV irradiation. There is currently disagreement about whether GI's are stronger in low-metallicity systems (_Cai et al._, 2006) or whether their strength is relatively insensitive to the opacity of the disk (_Boss_, 2002a). In either case, if disk instability is correct, we would expect that even low-metallicity stars could host gas giant planets. The growth of cores in the core accretion mechanism is hastened by higher metallicity through the increase in surface density of solids (_Pollack et al._, 1996), although the increased envelope opacity, which slows the collapse of the atmosphere, works in the other direction (_Podolak_, 2003). The recent observation of a Saturn-mass object orbiting the metal-rich star HD 149026, with a core mass equal to approximately half the planet's mass (_Sato et al._, 2005), has been suggested as a strong confirmation of the core accretion model. It has, however, yet to be shown that the core accretion model can produce a core with such a relatively large mass. If this core was produced by core accretion, it seems that it never achieved a runaway growth of its envelope; yet, in the case of Jupiter, the core accretion scenario requires efficient accumulation of a massive envelope around a relatively low-mass core.

The correlation of short-period gas giants with high-metallicity stars is often interpreted as strong evidence in favor of core accretion (_Laws et al._, 2003; _Fischer et al._, 2004; _Santos et al._, 2004). The _Santos et al._ (2004) analysis, however, shows that even the stars with the lowest metallicities have detectable planets with a frequency comparable to or higher than that of the stars with intermediate metallicities. _Rice et al._ (2003c) have shown that the metallicity distribution of systems with at least one massive planet (\\(M_{pl}>5M_{Jup}\\)) on an eccentric orbit of moderate semimajor axis does not have the same metal-rich nature as the full sample of extrasolar planetary systems. Some of the metallicity correlation can be explained by the observational bias of the spectroscopic method in favor of detecting planets orbiting stars with strong metallic absorption lines. The residual velocity jitter typically increases from a few m/s for solar metallicity to 5 - 16 m/s for stars with 1/4 the solar metallicity or less. In terms of extrasolar planet search space, this could account for as much as a factor of two difference in the total number of planets detected by spectroscopy. A spectroscopic search of 98 stars in the Hyades cluster, with a metallicity 35% greater than solar, found nothing, whereas about 10 hot Jupiters should have been found, assuming the same frequency as in the solar neighborhood (_Paulson et al._, 2004). _Jones_ (2004) found that the average metallicity of planet-host stars increased from \\(\\sim\\) 0.07 to \\(\\sim\\) 0.24 dex for planets with semimajor axes of \\(\\sim 2\\) AU to \\(\\sim 0.03\\) AU, suggesting a trend toward shortest-period planets orbiting the most metal-rich stars. Similarly, _Sozzetti_ (2004) showed that both metal-poor and metal-rich stars have increasing numbers of planets as the orbital period increases but only the metal-rich stars have an excess of the shortest period planets.
This could imply that the metallicity correlation is caused by inward orbital migration, if low-metallicity stars have long-period giant planets that seldom migrate inward. Lower disk metallicity results in slower Type II inward migration (_Livio and Pringle_, 2003), the likely dominant mechanism for planet migration (see the chapter by _Papaloizou et al._). This is because the disk viscosity \\(\\nu\\) increases with metallicity. In standard viscous accretion disk theory (e.g., _Ruden and Pollack_, 1991), \\(\\nu=\\alpha c_{s}H\\). Lower disk metallicity leads to lower disk opacity, lower disk temperatures, lower sound speeds, and a thinner disk. As \\(\\nu\\) decreases with lowered metallicity, the time scale for Type II migration increases. _Ruden and Pollack_ (1991) found that viscous disk evolution times increased by a factor of about 20 when \\(\\nu\\) decreased by a factor of 10. It remains to be seen if this effect is large enough to explain the rest of the correlation.

If disk instability is operative and if orbital migration is the major source of the metallicity correlation, then metal-poor stars should have planets on long-period orbits. Disk instability may be necessary to account for the long-period giant planet in the M4 globular cluster (_Sigurdsson et al._, 2003), where the metallicity is 1/20 to 1/30 of the solar value. The absence of short-period Jupiters in the 47 Tuc globular cluster (_Gilliland et al._, 2000) with 1/5 solar metallicity could be explained by the slow rate of inward migration due to the low metallicity. Furthermore, if 47 Tuc initially contained OB stars, photoevaporation of the outer disks may have occurred prior to inward orbital migration of any giant planets, preventing their evolution into short-period planets, though other factors (e.g., crowding) can also be important in these clusters. The M dwarf GJ 876 is orbited by a pair of gas giants (as well as a much less massive planet) and other M dwarfs have giant planets as well (_Butler et al._, 2004), though apparently not as frequently as the G dwarfs. _Laughlin et al._ (2004) found that core accretion was too slow to form gas giants around M dwarfs because of the longer orbital periods. Disk instability does not have a similar problem for M dwarfs, and disk instability predicts that M, L, and T dwarfs should have giant planets.

With disk instability, one Jupiter mass of disk gas has at most \\(\\sim 6M_{\\oplus}\\) of elements suitable to form a rock/ice core. The preferred models of the Jovian interior imply that Jupiter's core mass is less than \\(\\sim 3M_{\\oplus}\\) (_Saumon and Guillot_, 2004); Jupiter may even have no core at all. These models seem to be consistent with formation by disk instability and inconsistent with formation by core accretion, which requires a more massive core. As a result, the possibility of core erosion has been raised (_Saumon and Guillot_, 2004). If core erosion can occur, core masses may lose much of their usefulness as formation constraints. Saturn's core mass appears to be larger than that of Jupiter (_Saumon and Guillot_, 2004), perhaps \\(\\sim 15M_{\\oplus}\\), in spite of it being the smaller gas giant. Core erosion would only make Saturn's initial core even larger. Disk instability can explain the larger Saturnian core mass (_Boss et al._, 2002). Proto-Saturn may have started out with a mass larger than that of proto-Jupiter, but its excess gas may have been lost by UV photoevaporation, a process that could also form Uranus and Neptune.
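As a rough check on the viscosity argument above, the sketch below evaluates a naive viscous evolution time \\(t_{\\nu}\\sim r^{2}/\\nu\\) with \\(\\nu=\\alpha c_{s}H\\) and \\(H=c_{s}/\\Omega\\). The temperatures, radius, and \\(\\alpha\\) are illustrative assumptions, not values taken from _Ruden and Pollack_ (1991).

```python
import numpy as np

G, M_sun, AU, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7
k_B, m_H = 1.381e-16, 1.673e-24                       # cgs units

def viscous_time(r_AU, T, alpha=1.0e-2, mu=2.3, M_star=1.0):
    """Naive viscous evolution time t_nu ~ r^2 / nu with nu = alpha c_s H."""
    r = r_AU * AU
    omega = np.sqrt(G * M_star * M_sun / r**3)
    c_s = np.sqrt(k_B * T / (mu * m_H))               # isothermal sound speed
    nu = alpha * c_s * (c_s / omega)                  # nu = alpha c_s H
    return r**2 / nu / yr

t_warm, t_cold = viscous_time(5.0, 100.0), viscous_time(5.0, 50.0)
print(f"t_nu(100 K) = {t_warm:.2e} yr, t_nu(50 K) = {t_cold:.2e} yr, "
      f"ratio = {t_cold / t_warm:.1f}")
```

In this crude scaling \\(t_{\\nu}\\propto\\nu^{-1}\\propto T^{-1}\\) at fixed radius, so a factor of 10 drop in \\(\\nu\\) lengthens the time by the same factor of 10; the larger factor of \\(\\sim 20\\) quoted above from _Ruden and Pollack_ (1991) reflects the self-consistent response of the disk structure as it evolves.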
Disk instability predicts that inner gas giants should be accompanied by outer ice giant planets in systems that formed in OB associations due to strong UV photoevaporation. In low-mass star-forming regions, disk instability should produce only gas giants, without outer ice giants. Disk instability predicts that even the youngest stars should show evidence of gas giant planets (_Boss_, 1998b), whereas core accretion requires several Myr or more to form gas giants (_Inaba et al._, 2003). A gas giant planet seems to be orbiting at \\(\\sim 10\\) AU around the 1 Myr-old star CoKu Tau/4 (_Forrest et al._, 2004), based on a spectral energy distribution showing an absence of disk dust inside 10 AU (for an alternative perspective, see _Tanaka et al._, 2005). Several other 1 Myr-old stars show similar evidence for rapid formation of gas giant planets. The direct detection of a possible gas giant planet around the 1 Myr-old star GQ Lup (_Neuhauser et al._, 2005) similarly requires a rapid planet formation process. We conclude that there are significant observational arguments to support the idea that disk instability, or perhaps a hybrid theory where core accretion is accelerated by GI's, might be required to form some, if not all, gas giant planets. Given the major uncertainties in the theories, observational tests will be crucial for determining the relative proportions of giant planets produced by the competing mechanisms.

**Acknowledgments.** R.H.D.'s contribution was supported by NASA grants NAG5-11964 and NNG05GN11G and A.P.B.'s by NASA grants NNG05GH30G, NNG05GL10G, and NCC2-1056. Support for A.F.N. was provided by the U.S. Department of Energy under contract W-7405-ENG-36, for which this is publication LA-UR-05-7851. We would like to thank S. Michael for invaluable assistance in manuscript preparation, an anonymous referee for substantive improvements, and A.C. Mejia, A. Gawryszczak, and V. Springel for allowing us to premiere their comparison calculations in Section 4.4. FLASH was in part developed by the DOE-supported ASC/Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago and was run on computers at Warsaw's Interdisciplinary Center for Mathematical and Computational Modeling.

## References

* [1] Adams F. C., Shu F. H., and Lada C. J. (1988) _Astrophys. J., 326_, 865-883. * [2] Adams F. C., Ruden S. P., and Shu F. H. (1989) _Astrophys. J., 347_, 959-976. * [3] Armitage P. J., Livio M., and Pringle J. E. (2001) _Mon. Not. R. Astron. Soc., 324_, 705-711. * [4] Balbus S. A. and Papaloizou J. C. B. (1999) _Astrophys. J., 521_, 650-658. * [5] Balsara D. S. (1995) _J. Comput. Phys., 121_, 357-372. * [6] Bate M. R. and Burkert A. (1997) _Mon. Not. R. Astron. Soc., 228_, 1060-1072. * [7] Bally J., Testi L., Sargent A., and Carlstrom J. (1998) _Astron. J., 116_, 854-859. * [8] Beckwith S. V. W., Sargent A. I., Chini R. S., and Gusten R. (1990) _Astron. J., 99_, 924-945. * [9] Benz W. (1990) In _The Numerical Modeling of Nonlinear Stellar Pulsations_ (J. R. Buchler, ed.), pp. 269-288. Kluwer, Boston. * [10] Boffin H. M. J., Watkins S. J., Bhattal A. S., Francis N., and Whitworth A. P. (1998) _Mon. Not. R. Astron. Soc., 300_, 1189-1204. * [11] Bodenheimer P. and Pollack J. B. (1986) _Icarus, 67_, 391-408. * [12] Bodenheimer P., Yorke H. W., Rozyczka M., and Tohline J. E. (1990) _Astrophys. J., 355_, 651-660. * [13] Boley A. C. and Durisen R. H. (2006) _Astrophys. J._, in press (astro-ph/0510305). * [14] Boley A. C., Durisen R. H., and Pickett M. K. 
(2005) In _Chometries and the Protoplanetary Disk_ (A. N. Krot et al., eds.), pp. 839-848. ASP Conference Series, San Francisco. * [15] Boss A. P. (1997) _Science, 276_, 1836-1839. * [16] Boss A. P. (1998a) _Astrophys. J., 503_, 923-937. * [17] Boss A. P. (1998b) _Nature, 395_, 141-143. * [18] Boss A. P. (2000) _Astrophys. J., 363_, L101-L104. * [19] Boss A. P. (2001) _Astrophys. J., 563_, 367-373. * [20] Boss A. P. (2002a) _Astrophys. J., 567_, L149-L153. * [21] Boss A. P. (2002b) _Astrophys. J., 576_, 462-472. * [22] Boss A. P. (2002c) _Earth Planet. Sci. Let., 202_, 513-523. * [23] Boss A. P. (2003) _Lunar Planet. Inst., 34_, 1075-1076. * [24] Boss, A. P. (2004a) _Astrophys. J., 610_, 456-463. * [25] Boss A. P. (2004b) _Astrophys. J., 616_, 1265-1277. * [26] Boss A. P. (2005) _Astrophys. J., 629_, 535-548. * [27] Boss A. P. (2006) _Astrophys. J._, in press. * [28] Boss A. P. and Durisen R. H. (2005a) _Astrophys. J., 621_, L137-L140. * [29] Boss A. P. and Durisen R. H. (2005b) In _Chondrites and the Protoplanetary Disk_ (A. N. Krot et al., eds.), pp. 821-838. ASP Conference Series, San Francisco. * [30] Boss A. P. and Myhill E. A. (1992) _Astrophys. J. Suppl., 83_, 311-327. * [31] Boss A. P. and Yorke H. W. (1996) _Astrophys. J., 496_, 366-372. * [32] Boss A. P., Wetherill G. W., and Haghighipour N. (2002) _Icarus, 156_, 291-295. * [33] Briceno C., Vivas A. K., Calvet N., Hartmann L., Pachecci R. et al. (2001) _Science, 291_, 93-96. * [34] Butler R. P., Vogt S. S., Marcy G. W., Fischer D. A., Wright J. T. et al. (2004) _Astrophys. J., 617_, 580-588. * [35] Bryden G., Lin D. N. C., and Ida S. (2000) _Astrophys. J., 544_, 481-495. * [36] Cai K., Durisen R. H., Michael S., Boley A. C., Mejia A. C., Pickett M. K., and D'Alessio P. (2006) _Astrophys. J., 636_, L149-L152. * [37] Cameron A. G. W. (1978) _Moon Planets, 18_, 5-40. * [38] Chiang E. I. and Goldreich P. (1997) _Astrophys. J., 490_, 368-376. * [39] Colella P. and Woodward P. R. (1984) _J. Comp. Phys., 54_, 174-201. * [40] Cuzzi J. N., Hogan R. C., Paque J. M., and Dobrovolskis A. R. (2001) _Astrophys. J., 546_, 496-508. * [41] D'Alessio P., Calvet N., and Hartmann L. (1997) _Astrophys. J., 474_, 397-406. * [42] D'Alessio P., Canto J., Calvet N., and Lizano S. (1998) _Astrophys. J., 500_, 411-427. * [43] Durisen R. H. (2006) In _A Decade of Extrasolar Planets Around Normal Stars_ (M. Livio, ed.), in press. University Press, Cambridge. * [44] Durisen R. H., Cai K., Mejia A. C., and Pickett M. K. (2005) _Icarus, 173_, 417-424. * [45] Durisen R. H., Mejia A. C., and Pickett B. K. (2003) _Rec. Devel. Astrophys., 1_, 173-201. * [46] Durisen R. H., Mejia A. C., Pickett B. K., and Hartquist T. W. (2001) _Astrophys. J., 563_, L157-L160. * [47] Eisner J. A. and Carpenter J. M. (2003) _Astrophys. J., 598_, 1341-1349. * [48] Eisner J. A., Hillenbrand L. A., Carpenter J. M., and Wolf S. (2005) _Astrophys. J., 635_, 396-421. * [49] Fischer D., Valenti J. A. and Marcy G. (2004) In _IAU Symposium #219: Stars as Suns: Activity, Evolution, and Planets_ (A. K. Dupree and A. O. Benz, eds.), pp. 29-38. APS Conference Series, San Francisco. * [50] Forrest W. J., Sargent B., Furlan E., D'Alessio P., Calvet N. et al. (2004) _Astrophys. J. Suppl., 154_, 443-447. * [51] Fleming T. and Stone J. M. (2003) _Astrophys. J., 585_, 908-920. * [52] Fryxell B., Arnett D., and Muller E. (1991) _Astrophys. J., 367_, 619-634. * [53] Fryxell B., Olson K., Ricker P., Timmes F. X., Zingale M. et al. (2000) _Astrophys. J. Suppl., 131_, 273-334. * [54] Gammie C. F. 
(1996) _Astrophys. J., 457_, 355-362. * [55] Gammie C. F. (2001) _Astrophys. J., 553_, 174-183. * [56] Gilliland R. L., Brown T. M., Guhathakurta P., Sarajedini A., Milone E. F. et al. (2000) _Astrophys. J., 545_, L47-L51. * [57] Goldreich P. and Lynden-Bell D. (1965) _Mon. Not. R. Astron. Soc., 130_, 125-158. * [58] Haghighipour N. and Boss A. P. (2003a) _Astrophys. J., 583_, 996-1003. * [59] Haghighipour N. and Boss A. P. (2003b) _Astrophys. J., 598_, 1301-1311. * [60] Haisch K. E., Lada E. A., and Lada C. J. (2001) _Astrophys. J.,553_, L153-L156. * (19) Inaba S., Wetherill G. W., and Ikoma M. (2003) _Icarus, 166_, 46-62. * (20) Johnson B. M. and Gammie C. F. (2003) _Astrophys. J., 597_, 131-141. * (21) Johnson B. M. and Gammie C. F. (2006) _Astrophys. J., 636_, 63-74. * (22) Johnstone D., Hollenbach D., and Bally J. (1998) _Astrophys. J., 499_, 758-776. * (23) Jones H. R. A. (2004) In _The Search for Other Worlds: Fourteenth Astrophysics Conference, AIP Conference Proceedings, 713_, pp. 17-26. AIP Conference Proceedings, New York. * (24) Klahr H. H. (2003) In _Scientific Frontiers in Research on Extrasolar Planets_ (D. Deming and S. Seager, eds.), pp. 277-280. ASP Conference Series, San Francisco. * (25) Klahr H. H. and Bodenheimer P. (2003) _Astrophys. J., 582_, 869-892. * (26) Klahr H. H. and Henning T. (1997) _Icarus, 128_, 213-229. * (27) Kley W. (1999) _Mon. Not. R. Astron. Soc., 303_, 696-710. * (28) Koller J., Li H., and Lin D. N. C. (2003) _Astrophys. J., 596_, L91-94. * (29) Kuiper G. P. (1951) In _Proceedings of a Topical Symposium_ (J. A. Hynek, ed.), pp. 357-424. McGraw-Hill, New York. * (30) Lada C. J. and Lada E. A. (2003) _Ann. Rev. Astron. Astrophys., 41_, 57-115. * (31) Larson R. B. (1984) _Mon. Not. R. Astron. Soc., 206_, 197-207. * (32) Laughlin G. and Bodenheimer P. (1994) _Astrophys. J., 436_, 335-354. * (33) Laughlin G. and Rozyczka M. (1996) _Astrophys. J., 456_, 279-291. * (34) Laughlin G., Korchagin V., and Adams F. C. (1997) _Astrophys. J., 477_, 410-423. * (35) Laughlin G., Korchagin V., and Adams F. C. (1998) _Astrophys. J., 504_, 945-966. * (36) Laughlin G., Bodenheimer P., and Adams F. C. (2004) _Astrophys. J., 612_, L73-L76. * (37) Laws C., Gonzalez G., Walker K. M., Tyagi S., Dodsworth J. et al. (2003) _Astron. J., 125_, 2664-2677. * (38) Lin D. N. C. and Papaloizou J. C. B. (1993) In _Protostars and Planets III_ (E. H. Levy and J. I. Lunine, eds.), pp. 749-835. Univ. of Arizona, Tucson. * (39) Lin D. N. C. and Pringle J. E. (1987) _Mon. Not. R. Astron. Soc., 225_, 607-613. * (40) Lin D. N. C., Laughlin G., Bodenheimer P., and Rozyczka M. (1998) _Science, 281_, 2025-2027. * (41) Livio M. and Pringle J. E. (2003) _Mon. Not. R. Astron. Soc., 346_, L42-L44. * (42) Lodato G. and Rice W. K. M. (2004) _Mon. Not. R. Astron. Soc., 351_, 630-642. * (43) Lodato G. and Rice W. K. M. (2005) _Mon. Not. R. Astron. Soc., 358_, 1489-1500. * (44) Lubow S. H. and Ogilvie G. I. (1998) _Astrophys. J., 504_, 983-995. * (45) Lubow S. H., Siebert M., and Artymowicz P. (1999) _Astrophys. J., 526_, 1001-1012. * (46) Lufkin G., Quinn T. Wadsley J., Stadel J., and Governato F. (2004) _Mon. Not. R. Astron. Soc., 347_, 421-429. * (47) Mayer L., Quinn T., Wadsley J., and Stadel J. (2002) _Science 298_, 1756-1759. * (48) Mayer L., Quinn T., Wadsley J., and Stadel J. (2004a) _Astrophys. J., 609_, 1045-1064. * (49) Mayer L., Wadsley J., Quinn T., and Stadel J. (2004b) In _Extrasolar Planets: Today and Tomorrow_ (J.-P. Beaulieu et al., eds.), pp. 290-297. 
ASP Conference Series, San Francisco. * (50) Mayer L., Wadsley J., Quinn T., and Stadel J. (2005) _Mon. Not. R. Astron. Soc., 363_, 641-648. * (51) Mejia A. C. (2004) Ph.D. dissertation, Indiana University. * (52) Mejia A. C., Durisen R. H., Pickett M. K., and Cai K. (2005) _Astrophys. J., 619_, 1098-1113. * (53) Mihalas D. (1977) _Stellar Atmospheres_. Univ. of Chicago, Chicago. * (54) Monaghan J. J. (1992) _Ann. Rev. Astron. Astrophys., 30_, 543-574. * (55) Myhill E. A. and Boss A. P. (1993) _Astrophys. J. Suppl., 89_, 345-359. * (56) Nelson A. F. (2000) _Astrophys. J., 537_, L65-L69. * (57) Nelson A. F. (2006) _Mon. Not. R. Astron. Soc._, submitted. * (58) Nelson A. F. and Benz W. (2003) _Astrophys. J., 589_, 578-604. * (59) Nelson A. F., Benz W., Adams F. C., and Arnett D. (1998) _Astrophys. J., 502_, 342-371. * (60) Nelson A. F., Benz W., and Ruzmaikina T. V. (2000) _Astrophys. J., 529_, 357-390. * (61) Nelson R. P., Papaloizou J. C. B., Masset F., and Kley W. (2000) _Mon. Not. R. Astron. Soc., 318_, 18-36. * (62) Neuhauser R., Guenther E. W., Wuchtlerl G., Mugrauer M., Bedalov A., and Hauschild P. H. (2005) _Astron. Astrophys., 435_, L13-L16. * (63) Osorio M., D'Alessio P., Muzerolle J., Calvet N., and Hartmann L. (2003) _Astrophys. J., 586_, 1148-1161. * (64) Paczynski B. (1978) _Acta Astron., 28_, 91-109. * (65) Papaloizou J. C. B. and Savonije G. (1991) _Mon. Not. R. Astron. Soc., 248_, 353-369. * (66) Paulson D. B., Saar S. H., Cochran W. D., and Henry G. W. (2004) _Astron. J., 127_, 1644-1652. * (67) Pickett B. K. (1995) Ph.D. dissertation, Indiana University. * (68) Pickett B. K., Durisen R. H., and Davis G. A. (1996) _Astrophys. J., 458_, 714-738. * (69) Pickett B. K., Cassen P., Durisen R. H., and Link R. P. (1998) _Astrophys. J., 504_, 468-491. * (70) Pickett B. K., Cassen P., Durisen R. H., and Link R. P. (2000a) _Astrophys. J., 529_, 1034-1053. * (71) Pickett B. K., Durisen R. H., Cassen P., and Mejia A. C. (2000b) _Astrophys. J., 540_, L95-98. * (72) Pickett B. K., Mejia A. C., Durisen R. H., Cassen P. M., Berry D. K., and Link R. P. (2003) _Astrophys. J., 590_, 1060-1080. * (73) Podolak M. (2003) _Icarus, 165_, 428-437. * (74) Pollack J. B., Hubickyj O., Bodenheimer P., Lissauer J. J., Podolak M., and Greenzweig Y. (1996) _Icarus, 124_, 62-85. * (75) Pringle J. E. (1981) _Ann. Rev. Astron. Astrophys., 19_, 137-162. * (76) Rafikov R. R. (2005) _Astrophys. J., 621_, L69-L72. * (77) Rice W. K. M., Armitage P. J., Bate M. R., and Bonnel I. A. (2003a) _Mon. Not. R. Astron. Soc., 338_, 227-232. * (78) Rice W. K. M., Armitage P. J., Bate M. R., and Bonnell I. A. (2003b) _Mon. Not. R. Astron. Soc., 339_, 1025-1030. * (79) Rice W. K. M., Armitage P. J., Bate M. R. and Bonnell I. A. (2003c) _Mon. Not. R. Astron. Soc., 364_, L36-L40. * (80) Rice W. K. M., Lodato G., and Armitage P. J. (2005) _Mon. Not. R. Astron. Soc., 364_, L56-L60. * (81) Rice W. K. M., Lodato G., Pringle J. E., Armitage P. J., and Bonnell I. A. (2004) _Mon. Not. R. Shu F. H., Tremaine S., Adams F. C., and Ruden S. P. (1990) _Astrophys. J., 358_, 495-514. * [33] Sigurdsson, S., Richer H. B., Hansen B. M., Stairs I. H., and Thorsett S. E. (2003) _Science, 301_, 193-196. * [34] Springel V. (2005) _Mon. Not. R. Astron. Soc., 364_, 1105-1134. * [35] Springel V., Yoshida N., and White S. D. M. (2001) _New Astron., 6_, 79-117. * [36] Sozzetti A. (2004) _Mon. Not. R. Astron. Soc., 354_, 1194-1200. * [37] Stone J. M. and Norman M. L. (1992) _Astrophys. J. Suppl., 80_, 753-790. * [38] Tanaka H., Himeno Y., and Ida S. 
(2005) _Astrophys. J., 625_, 414-426. * [39] Tohline J. E. (1980) _Astrophys. J., 235_, 866-881. * [40] Toomre A. (1964) _Astrophys. J., 139_, 1217-1238. * [41] Tomley L., Cassen P., and Steiman-Cameron T. Y. (1991) _Astrophys. J., 382_, 530-543. * [42] Truelove J. K., Klein R. I., McKee C. F., Holliman J. H. II, Howell L. H., and Greenough J. A. (1997) _Astrophys. J., 489_, L179-L183. * [43] Wadsley J., Stadel J., and Quinn T. (2004) _New Astron., 9_, 137-158. * [44] Weidenschilling S. J. (1977) _Mon. Not. R. Astron. Soc., 180_, 57-70. * [45] Weidenschilling S. J. and Cuzzi J. N. (1993) In _Protostars and Planets III_ (E. H. Levy and J. I. Lunine, eds.), pp. 1031-1060. Univ. of Arizona, Tucson. * [46] Yorke H. W. and Bodenheimer P. (1999) _Astrophys. J., 525_, 330-342. * [47] Youdin A. N. and Shu F. H. (2002) _Astrophys. J., 580_, 494-505.
Protoplanetary gas disks are likely to experience gravitational instabilities (GI's) during some phase of their evolution. Density perturbations in an unstable disk grow on a dynamic time scale into spiral arms that produce efficient outward transfer of angular momentum and inward transfer of mass through gravitational torques. In a disk that is cool enough and cools rapidly enough, the spiral arms can fragment into self-gravitating clumps. Whether gas giant protoplanets can form by such a disk instability process is the primary question addressed by this review. We discuss the wide range of calculations undertaken by ourselves and others using various numerical techniques, and we report preliminary results from a large multi-code collaboration. Additional topics include triggering mechanisms for GI's, disk heating and cooling, orbital survival of dense clumps, interactions of solids with GI-driven waves and shocks, and hybrid scenarios where GI's facilitate core accretion. The review ends with a discussion of how well disk instability and core accretion fare in meeting observational constraints.
# Refined Parameters of the Planet Orbiting HD 189733 G. A. Bakos12, H. Knutson1, F. Pont5, C. Moutou3, D. Charbonneau1, A. Shporer8, F. Bouchy410, M. Everett6, C. Hergenrother7, D. W. Latham1, M. Mayor5, T. Mazeh8, R. W. Noyes1, D. Queloz5, A. Pal91 and S. Udry5 Footnote 1: affiliation: Harvard-Smithsonian Center for Astrophysics (CFA), 60 Garden Street, Cambridge, MA 02138, USA Footnote 2: affiliation: Hubble Fellow Footnote 3: affiliation: Laboratoire d’Astrophysique de Marseille, Traverse du Siphon, 13013 Marseille, France Footnote 4: affiliation: Observatoire de Haute Provence, 04870 St Michel Footnote 5: affiliation: Observatoire de Genève, 51 ch. des Maillettes, 1290 Sauverny, Switzerland Footnote 6: affiliation: Planetary Science Institute, Fort Lowell Rd.,Tucson, AZ 85719, USA Footnote 7: affiliation: Department of Planetary Sciences & Lunar and Planetary Laboratory, The University of Arizona, 1629 E. University Blvd. Tucson, AZ 85721, USA Footnote 8: affiliation: Wise Observatory, Tel Aviv University, Tel Aviv, Israel 69978 Footnote 9: affiliation: Eötvos Lorand University, Department of Astronomy, H-1518 Budapest, Pf. 32., Hungary Footnote 10: affiliation: Institut d’Astrophysique de Paris, 98bis Bd Arago, 75014 Paris, France ## 1. Introduction HD 189733 is one of nine currently known main sequence stars orbited by a transiting giant planet. The system is of exceptional interest because it is the closest known transiting planet (D = 19.3pc), and thus is amenable to a host of follow-up observations. The discovery paper by Bouchy et al. (2005) (hereafter B05) derived the key physical characteristics of the planet, namely its mass (\\(1.15\\pm 0.04M_{\\rm J}\\)) and radius (\\(1.26\\pm 0.03R_{\\rm J}\\)), based on radial velocity observations of the star made with the ELODIE spectrograph at the 1.93m telescope at Observatoire de Haute Provence (OHP), together with photometric measurements of one complete and two partial transits made with the 1.2m telescope also at OHP. With these parameters HD 189733b had a large radius comparable to HD 209458b (Laughlin et al., 2005), and a density roughly equal to that of Saturn (\\(\\rho\\sim 0.7\\rm g\\,cm^{-3}\\)). Determining precise radii of extrasolar planets in addition to their mass is an important focus of exoplanet research (see e.g. Bouchy et al., 2004; Torres et al., 2004), because the mean density of the planets can shed light on their internal structure and evolution. According to Baraffe et al. (2005), the radii of all known extrasolar planets are broadly consistent with models, except for HD 209458b. This planet with its large radius and low density (\\(\\rho\\sim 0.33\\rm g\\,cm^{-3}\\)) has attracted considerable interest, and various mechanisms involving heat deposition beneath the surface have been suggested (Laughlin et al., 2005, and references therein). An additional motivation for obtaining accurate planetary radii is proper interpretation of follow-up data, notably secondary eclipse and reflected light observations. This is of particular relevance to HD 189733b, which has been recently observed by the Spitzer Space Telescope (Deming et al., 2006), and where the brightness temperature depends on the radius ratio of the planet to the star. Both by extending the current, very limited sample of transiting exoplanets, and by precise determination of the physical parameters it will become possible to refine theoretical models and decide which planets are \"typical\". 
Close-by, bright stars such as HD 189733 are essential in this undertaking. The OGLE project (Udalski et al., 2002a,b,c) and follow-up observations (e.g. Konacki et al., 2004; Moutou et al., 2004; Pont et al., 2005) made a pivotal contribution to the current sample by the discovery of more than half of the known transiting planets. Follow-up observations, however, are cumbersome due to the faintness of the targets, and require the largest available telescopes. The typical errors of mass and radius for these host stars are \\(\\sim 0.06M_{\\odot}\\) and \\(\\sim 0.15R_{\\odot}\\), and the corresponding errors in planetary parameters are \\(\\sim 0.13M_{\\rm J}\\) and \\(\\sim 0.12R_{\\rm J}\\), respectively. However, for planets orbiting bright stars in the solar neighborhood, errors at the level of a few percent can be reached for both the mass and radius. In this paper we report a number of follow-up photometric measurements of HD 189733 using six telescopes spaced around the world. Together with the original OHP photometry, we use these measurements to determine revised values for the transit parameters, and give new ephemerides. First we describe the follow-up photometry in detail (§2), followed by the modeling which leads to the revised estimate of the planetary radius (§3), and we conclude the paper in §4.

## 2. Observations and Data Reduction

We organized an extensive observing campaign with the goal of acquiring multi-band photometric measurements of the transits of HD 189733 caused by the hot Jupiter companion. Including the discovery data of B05 that were obtained at OHP, altogether four sites with seven telescopes contributed data to two full and eight partial transits in Johnson B, V, R, I and Sloan r photometric bandpasses. The sites and telescopes employed are spread in geographic longitude, which facilitated gathering the large number (close to 3000) of individual data points spanning 2 months. The following telescopes were involved in the photometric monitoring: the 1m telescope at the Wise Observatory, Israel; the 1.2m telescope at OHP; the 1.2m telescope at Fred Lawrence Whipple Observatory (FLWO) of the Smithsonian Astrophysical Observatory (SAO); the 0.11m HAT-5 and HAT-6 wide field telescopes plus the 0.26m TopHAT telescope also at FLWO; and the 0.11m HAT-9 telescope at the Submillimeter Array site at Mauna Kea, Hawaii. An overview of the sites and telescopes is shown in Table 1. A summary of the observations is shown in Table 2. The telescopes are identified by the same names as in Table 1. The transits have been numbered starting with the discovery data \\(N_{tr}\\equiv 0\\), and are identified later in the text using this number. In the following subsections we summarize the observations and reductions that are specific to the sites or instruments.

### Observations by the OHP 1.2m telescope

These observations and their reduction were already described in B05. Summarizing briefly, the 1.20m f/6 telescope was used together with a \\(\\rm 1K\\times 1K\\) back-illuminated CCD having 0.69''/pix resolution. Typical exposure times were 6 seconds long, followed by a 90 second readout. The images were slightly defocused, with FWHM\\(\\approx\\)2.8''. Full-transit data were obtained in Johnson B-band under photometric conditions for the \\(N_{tr}=0\\) transit. This is shown by the "OIBEO" flag in Table 2, indicating that the Out-of-transit part before ingress, the Ingress, the Bottom, the Egress, and the Out-of-transit part after egress have all been observed. 
This is an important part of the combined dataset, as it is the only full transit seen in B-band. In addition, partial transit data were obtained for the \\(N_{tr}=4\\) event using Cousin's R-band filter (\\(R_{C}\\)) under acceptable photometric conditions, and for the \\(N_{tr}=5\\) event using the same filter under non-photometric conditions. The frames were subject to bias, dark and flatfield calibration procedure followed by cosmic ray removal. Aperture photometry was performed in an aperture of 9.6'' using the daophot(Stetson, 1987) package. The B-band light-curve published in the B05 discovery paper used the single comparison star HD 345459. This light-curve suffered from a strong residual trend, as suggested by the \\(\\sim 0.01mag\\) difference between the pre- and post-transit sections. This trend was probably a consequence of differential atmospheric extinction, and was removed by a linear airmass correction, bringing the two sections to the same mean value. This ad-hoc correction, however, may have introduced an error in the transit depth. In this paper, we used six comparison stars in the field of view (selected to have comparable relative flux to HD 189733 before and after transit). A reference light-curve was built by co-adding the normalized flux of all six stars, and was subtracted from the normalized light-curve of HD 189733. The new reduction shows a residual OOT slope 4.2 times smaller than in the earlier reduction. The resulting transit depth in B-band is decreased by about 20% compared to the discovery data. This illustrates the large contribution of photometric systematics that must be accounted for in this kind of measurement. The R-band data set is not as sensitive to the extinction effect as the B-band, hence the selection of comparison stars has a minimal impact on the shape of the transit curve. The B-band light-curve is shown on panel 6 of Fig. 1, the R-band light-curves are exhibited in Fig. 2. ### Observations by the FLWO 1.2m telescope We used the FLWO 1.2m telescope to observe the full transit of \\(N_{tr}=6\\) in Sloan r band. The detector was Keplercam, which is a single chip \\(\\rm 4K\\times 4K\\) CCD with 15\\(\\mu\\)m pixels that correspond to 0.34'' on the sky. The entire field-of-view is 23'. The chip is read out by 4 amplifiers, yielding a 12 second readout with the \\(2\\times 2\\) binning we used. The single-chip design, wide field-of-view, high sensitivity and fast readout make this instrument well-suited for high-quality photometry follow-up. The target was deliberately defocused in order to allow longer exposure times without saturating the pixels, and to smear out the inter-pixel variations that may remain after flatfield calibration. The intrinsic FWHM was \\(\\sim 2\\arcsec\\), which was defocused to \\(\\sim 10\\arcsec\\). While conditions during the transit were photometric11, there were partial \\begin{table} \\begin{tabular}{l c c c c c c c c c} \\hline \\hline \\multicolumn{1}{c}{ Site} & Longitude & Latitude & Alt. & Telescope & Diam. 
& Detector & Pxs & \\(T_{rd}\\) & FOV \\\\ & & & (meters) & & (meters) & & (′′/pix) & (sec) & \\\\ \\hline OHP & \\(\\rm 05^{\\circ}30^{\\prime}\\) E & \\(\\rm 43^{\\circ}55^{\\prime}\\) N & 650 & OHP1.2 & 1.2 & STe \\(\\rm 1K\\times 1K\\) & 0.69 & 90 & \\(\\rm 11.77^{\\prime}\\) \\\\ FLWO & \\(\\rm 110^{\\circ}53^{\\prime}\\) W & \\(\\rm 31^{\\circ}41^{\\prime}\\) N & 2350 & FLWO1.2 & 1.2 & Fairchild \\(\\rm 4K\\times 4K\\) & 0.34 & 12 & \\(\\rm 23^{\\prime}\\) \\\\ FLWO & \\(\\rm 110^{\\circ}53^{\\prime}\\) W & \\(\\rm 31^{\\circ}41^{\\prime}\\) N & 2345 & HAT-5 & 0.11 & Thomson \\(\\rm 2K\\times 2K\\) & 14.0 & 10 & \\(\\rm 8.2^{\\circ}\\) \\\\ FLWO & \\(\\rm 110^{\\circ}53^{\\prime}\\) W & \\(\\rm 31^{\\circ}41^{\\prime}\\) N & 2345 & HAT-6 & 0.11 & Thomson \\(\\rm 2K\\times 2K\\) & 14.0 & 10 & \\(\\rm 8.2^{\\circ}\\) \\\\ MAuna Kea & \\(\\rm 155^{\\circ}28^{\\prime}\\) W & \\(\\rm 19^{\\circ}49^{\\prime}\\) N & 4163 & HAT-9 & 0.11 & Thomson \\(\\rm 2K\\times 2K\\) & 14.0 & 10 & \\(\\rm 8.2^{\\circ}\\) \\\\ Wise & \\(\\rm 34^{\\circ}35^{\\prime}\\) E & \\(\\rm 30^{\\circ}35^{\\prime}\\) N & 875 & Wise1.0 & 1.0 & Tektronics \\(\\rm 1K\\times 1K\\) & 0.7 & 40 & \\(\\rm 11.88^{\\prime}\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1Summary of instruments used in the observing campaign of HD 189733. clouds before and after. The focus setting was changed twice during the night; first when the clouds cleared, and second, when the seeing improved. In both cases the reason was to keep the signal level within the linear response range of the CCD. We used a large enough aperture that these focus changes did not affect the photometry. All exposures were 5 seconds in length with 12 seconds of readout and overhead time between exposures. We observed the target in a single band so as to maximize the cadence, and to eliminate flatfielding errors that may originate from the imperfect sub-pixel re-positioning of the filter-wheel. Auto-guiding was used to further minimize systematic errors that originate from the star drifting away on the CCD chip and falling on pixels with different (and not perfectly calibrated) characteristics. Reduction and photometryAll images were reduced in the same manner; applying overscan correction, subtraction of the two-dimensional residual bias pattern and correction for shutter effects. Finally we flattened each image using a combined and normalized set of twilight sky flat images. There was a drift of only \\(\\sim 3\\arcsec\\) in pointing during the night, so any large-scale flatfielding errors were negligible. To produce a transit light curve, we chose one image as an astrometric reference and identified star centers for HD 189733 and 23 other bright and relatively uncrowded stars in the field. We measured the flux of each star around a fixed pixel center derived from an astrometric fit to the reference stars, in a \\(20\\arcsec\\) circular aperture using daophot/phot within iraf1 (Tody, 1986, 1993) and estimated the sky using the sigma-rejected mode in an annulus defined around each star with inner and outer radii of \\(33\\arcsec\\) and \\(60\\arcsec\\) respectively. Footnote 1: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. We calculated the extinction correction based on a weighted mean flux of comparison stars and applied this correction to each of our stars. 
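A minimal sketch of this kind of ensemble extinction correction is given below; the arrays are made-up placeholders standing in for the aperture photometry of the target and its comparison stars, not the actual FLWO data:

```python
# Sketch of differential photometry against an ensemble of comparison stars.
# "target" and "comps" are placeholders; in practice they come from aperture
# photometry of HD 189733 and the ~6 comparison stars within 6 arcmin.
import numpy as np

rng = np.random.default_rng(0)
n_exp, n_comp = 500, 6
extinction = 1.0 - 0.05 * np.linspace(0, 1, n_exp)      # common transparency drift
target = 1.0e6 * extinction * (1 + 0.002 * rng.standard_normal(n_exp))
comps = (np.array([5e5, 3e5, 2e5, 1e5, 8e4, 6e4])[:, None]
         * extinction * (1 + 0.004 * rng.standard_normal((n_comp, n_exp))))

# Weight each comparison star by its mean flux (roughly inverse variance for
# photon-limited data), build the ensemble reference, and normalize the target.
weights = comps.mean(axis=1)
reference = (weights[:, None] * comps / comps.mean(axis=1, keepdims=True)).sum(axis=0)
flux_ratio = target / reference
flux_ratio /= np.median(flux_ratio)          # normalize to the out-of-transit level
print(f"post-correction scatter: {flux_ratio.std():.4f}")
```

Dividing by a flux-weighted ensemble reference removes the airmass and transparency variations common to all stars in the field, which is the purpose of the correction described above.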
We iteratively selected our comparison stars by removing any that showed unusually noisy or variable trends in their differential light curves. Additionally, a few exposures in the beginning and very end of our observing sequence were removed because those observations were made through particularly thick clouds. The resulting light curve represents the observed counts for the star corrected for extinction using a group of 6 comparison stars within \\(6\\arcmin\\) separation from HD 189733. The light-curve is shown on panels 3 and 4 of Fig. 1. ### Observations by HATNet An instrument description of the wide-field HAT telescopes was given in Bakos et al. (2002, 2004). Here we briefly recall the relevant system parameters. A HAT instrument contains a fast focal ratio (f/1.8) 0.11m diameter Canon lens and Peltier-cooled CCD with a front-illuminated \\(2{\\rm K}\\times 2{\\rm K}\\) chip having \\(14\\mu{\\rm m}\\) pixel size. The resulting FOV is \\(8.2\\arcdeg\\) with \\(14\\arcsec\\) pixel scale. Using a psf-broadening technique (Bakos et al., 2004), careful calibration procedure, and robust differential photometry, the HAT telescopes can achieve 3mmag precision (rms) light-curves at 300s resolution for bright stars (at \\(I\\approx 8\\)). \\begin{table} \\begin{tabular}{l c c c c c c c c c} \\hline \\hline \\multicolumn{1}{c}{ Telescope} & Filter & \\(N_{tr}\\) & Epoch & Date & Transit & Cond. & \\(\\sigma_{OOT}\\) & \\(\\sigma_{sys}\\) & Cad. & Ap\\({}^{[\\arcsec]}\\) \\\\ & & & (UT) & & & (mmag) & (mmag) & (sec) & (\\({}^{\\arcsec}\\)) \\\\ \\hline OHP1.2 & B & 0 & 53629.4 & 2005-09-15 & OIBED & 5 & 2.6 & 1.3 & 86 & 10 \\\\ Wise1.0 & B & 4 & 53638.3 & 2005-09-24 & -IBE- & 4 & & 42 & 5 \\\\ OHP1.2 & R\\({}_{\\rm c}\\) & 4 & 53638.3 & 2005-09-24 & -BED & 4 & 3.0 & 1.2 & 95 & 10 \\\\ OHP1.2 & R\\({}_{\\rm c}\\) & 5 & 53640.5 & 2005-09-24 & OII\\({}_{-}\\) & 3 & 6.8 & 2.4 & 95 & 10 \\\\ FLWO1.2 & r2 & 6 & 53642.7 & 2005-09-29 & OIBED & 43 & 2.6 & 0.5 & 17 & 20 \\\\ HAT-5 & I\\({}_{\\rm c}\\) & 6 & 53642.7 & 2005-09-29 & OIBED & 44 & 4.4 & 1.3 & 135 & 42 \\\\ HAT-6 & I\\({}_{\\rm c}\\) & 6 & 53642.7 & 2005-09-29 & OIBED & 43 & 4.1 & 1.2 & 108 & 42 \\\\ TopHAT & V & 6 & 53642.2 & 2005-09-29 & OIBED & 44 & 4.6 & 3.0 & 70 & 10 \\\\ HAT-9 & I\\({}_{\\rm c}\\) & 7 & 53644.9 & 2005-10-01 & OIB- & 4 & 4.6 & 1.2 & 99 & 42 \\\\ HAT-9 & I\\({}_{\\rm c}\\) & 16 & 53664.9 & 2005-10-21 & OIB- & 4 & 4.3 & 100 & 42 \\\\ TopHAT & V & 19 & 53671.6 & 2005-10-28 & \\(-\\)\\(-\\)\\(-\\)\\(\\sim\\)0 & 4 & 5.3 & \\(\\cdots\\) & 106 & 10 \\\\ HAT-5 & I\\({}_{\\rm c}\\) & 19 & 53671.6 & 2005-10-28 & \\(-\\)\\(\\sim\\)E0 & 4 & 4.6 & \\(\\cdots\\) & 103 & 42 \\\\ HAT-5 & I\\({}_{\\rm c}\\) & 20 & 53673.8 & 2005-10-30 & DID- & 5 & 3.3 & 0.9 & 85 & 42 \\\\ Wise1.0 & B & 22 & 53678.2 & 2005-11-03 & \\(\\sim\\)E0 & 3 & 5.5 & 1.1 & 49 & 10 \\\\ TopHAT & V & 24 & 53682.6 & 2005-11-08 & OID- & 2 & 5.4 & 2.6 & 108 & 10 \\\\ HAT-9 & I\\({}_{\\rm c}\\) & 29 & 53693.7 & 2005-11-19 & -IBEED & 5 & 6.6 & 1.1 & 90 & 42 \\\\ \\hline \\end{tabular} Note. – The table summarizes _all_ observations that were part of the observing campaign described in this paper. Not all of them were used for refining the ephemerides or parameters of the transit – see Table 3 and Table 5 for reference. \\(N_{tr}\\) shows the number of transits since the discovery data. **Epoch** and **Date** show the approximate time of mid-transit. 
The **Transit** column describes in a tense format which parts of the transits were observed; Out-of-Transit (OOT) section before the transit, Ingress, Bottom, Egress and OOT after the transit. Missing sections are indicated by ”\\(-\\)”. The **Conditions** column indicates the photometric conditions on the scale of 1 to 5, where 5 is absolute photometric, 4 is photometric most of the time with occasional cirrus/fog (relative photometric), 3 stands for broken cirrus, and 2 for poor conditions. Column \\(\\sigma_{OOT}\\) gives the rms of the OOT section at the **Codence** shown in the next column. If the transit was full, \\(\\sigma_{OOT}\\) was computed separately from the pre- and post-transit data, and the smaller value is shown. Column \\(\\sigma_{sys}\\) shows the estimated amplitude of systematics (for details, see §3.2). **Ap** shows the aperture used in the photometry in arcseconds. \\end{table} Table 2Summary of HD 189733 observations. The HAT instruments are operated in autonomous mode, and carry out robotic observations every clear night. We have set up a longitude-separated, two-site network of six HAT instruments, with the primary goal being detection of planetary transits in front of bright stars. The two sites are FLWO, in Arizona, the same site where the 1.2m telescope is located (SS2.2), and the roof of the Submillimeter Array atop Mauna Kea, Hawaii (MK). In addition to the wide-field HAT instruments, we developed a dedicated photometry follow-up instrument, called TopHAT, which is installed at FLWO. A brief system description was given in Charbonneau et al. (2006) in context of the photometry follow-up of the HD 149026 planetary transit. This telescope is 0.26m diameter, f/5 Ritchey-Cretien design with a Baker wide-field corrector. The CCD is a \\(2{\\rm K}\\times 2{\\rm K}\\) Marconi chip with 13.5\\(\\mu\\)m pixel size. The resulting FOV is 1.3\\({}^{\\circ}\\) with 2.2\\({}^{\\prime\\prime}\\) pixel resolution. Similarly to the HATs, TopHAT is fully automated. Selected stations of the HAT Network, along with TopHAT, observed one full and six partial transits of HD 189733 (for details, see Table 2). Observing conditions of the full-transit event at FLWO at \\(N_{tr}=6\\) have been summarized in SS2.2. This transit was observed by HAT-5 and HAT-6 (both in I-band), and by TopHAT (V-band). The partial transit observations at numerous later epochs included HAT-5 (FLWO, I-band), HAT-9 (MK, I-band) and TopHAT (FLWO, V-band). Typical exposure times for the wide-field instruments were 60 to 90 seconds with 10 second readout. TopHAT exposures were \\(\\sim\\)12sec long with up to 40 second readout and download time. All observations were made at slight defocusing and using the psf-broadening technique. The stellar profiles were 2.5pix (35\\({}^{\\prime\\prime}\\)) and 4.5pix (9.9\\({}^{\\prime\\prime}\\)) wide for the HATs and TopHAT, respectively. Although we have no auto-guiding, real-time astrometry was performed after the exposures, and the telescope's position was kept constant with 20\\({}^{\\prime\\prime}\\) accuracy. Reduction and photometryAll HAT and TopHAT images were subject to overscan correction, two-dimensional residual bias pattern and dark subtraction, and normalization with a master sky-flat frame. Bias, dark and sky-flat calibration frames were taken each night by each telescope, and all object frames were corrected with the master calibration images that belonged to the specific observing session. Saturated pixels were masked before the calibration procedure. 
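The calibration sequence described above (bias/overscan, dark, sky-flat, saturation masking) follows the standard pattern; a schematic sketch with placeholder arrays (not the actual HATNet pipeline) is:

```python
# Schematic CCD calibration: bias and dark subtraction, flat-field
# normalization, and masking of saturated pixels. Arrays are placeholders.
import numpy as np

def calibrate(raw, master_bias, master_dark, master_flat, saturation=60000):
    """Return a calibrated frame and a boolean mask of saturated pixels."""
    sat_mask = raw >= saturation                   # mask saturation before calibration
    frame = raw.astype(float) - master_bias        # bias / residual bias pattern
    frame -= master_dark                           # dark current (already exposure-matched)
    frame /= master_flat / np.median(master_flat)  # normalized sky-flat correction
    frame[sat_mask] = np.nan
    return frame, sat_mask

# Usage with dummy 2K x 2K frames standing in for real FITS images:
shape = (2048, 2048)
raw = np.full(shape, 12000.0)
calibrated, mask = calibrate(raw, master_bias=np.full(shape, 500.0),
                             master_dark=np.full(shape, 10.0),
                             master_flat=np.full(shape, 1.0))
```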
We used the 2MASS All-Sky Catalog of Point Sources (Skrutskie et al., 2000; Cutri et al., 2003) as an input astrometric catalogue, where the quoted precision is 120 mas for bright sources. A 4th order polynomial fit was used to transform the 2MASS positions to the reference frame of the individual images. Typical rms of the transformations was 700 mas for the wide-field instruments, and 150 mas for TopHAT. Fixed center aperture photometry was performed for all these stars. For the wide-field HAT telescopes we used an \\(r_{ap}=3\\) pixel (42\\({}^{\\prime\\prime}\\)) aperture, surrounded by an annulus with inner and outer radii of \\(r_{1}=5\\) pix (70\\({}^{\\prime\\prime}\\)) and \\(r_{2}=13\\) pix (3), respectively. For TopHAT, the best aperture was \\(r_{ap}=5\\) pix (10.8\\({}^{\\prime\\prime}\\)) with \\(r_{1}=13\\) pix (29\\({}^{\\prime\\prime}\\)) and \\(r_{2}=21\\) pix (46\\({}^{\\prime\\prime}\\)). The apertures were small enough to exclude any bright neighboring star. A high quality reference frame was selected for the wide-field HAT telescopes from the Mauna Kea HAT-9 data, and separately for TopHAT. Because the HAT wide field instruments are almost identical, we were able to use the HAT-9 reference frame to transform the instrumental magnitudes of HAT-5 and HAT-6 data to a common system. For this, we used 4th order polynomials of the magnitude differences as a function of X and Y pixel positions. In effect, we thereby used \\(\\sim 3000\\) and \\(\\sim 800\\) selected non-variable comparison stars for the HATs, and TopHAT, respectively. This contributes to the achieved precision, which is only slightly inferior to the precision achieved by the bigger diameter telescopes. The amount of magnitude correction for HD 189733 between the reference and the individual images is shown in the \\(\\Delta M_{ext}\\) column of Table 3. The same table also indicates the rms of these magnitude fits in the \\(\\sigma_{mfit}\\) column. Both quantities are useful for further cleaning of the data. Because HD 189733 is a bright source, it was saturated on a small fraction of the frames. Saturated data-points were flagged in the light-curves, and also deselected from the subsequent analysis (flagged as \"C\" in Table 3). After cleaning outliers by automatically de-selecting points where the rms of the magnitude transformations was above a critical threshold (typically 25mmag) the light-curves reached a precision of \\(\\sim\\)4mmag at 90 second resolution for both the HATs and TopHAT. Full-transit data are shown in panels 1, 2 and 5 of Fig. 1, and partial-transit data are shown in Fig. 2. ### Observations by the Wise 1.0m telescope The Wise 1m f/7 telescope was used to observe the \\(N_{tr}=4\\) and \\(N_{tr}=22\\) transits in B-band. The CCD was a \\(1{\\rm K}\\times 1{\\rm K}\\) Tektronics chip with 24\\(\\mu\\)m pixel size that corresponds to 0.696\\({}^{\\prime\\prime}\\)/pix resolution on the sky, and a FOV of 11.88\\({}^{\\prime}\\). The photometric conditions were acceptable on both nights, with FWHM\\(\\approx\\)2\\({}^{\\prime\\prime}\\). Auto-guiding was used during the observations. Frames were calibrated in a similar manner to the FLWO1.2m observations, using twilight flats, and aperture photometry was performed with daophot. Unfortunately, out-of-transit (OOT) data of the first transit (\\(N_{tr}=4\\)) (which was also observed from OHP in R-band) are missing, so it is impossible to obtain useful normalization or to apply extinction correction to the transit curve. 
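For reference, the fixed-center aperture photometry with a sky annulus used in these reductions amounts to the following simplified sketch (plain numpy, no sub-pixel treatment; the radii are illustrative, and the real reductions used daophot or equivalent tools):

```python
# Simplified fixed-center aperture photometry with a sky annulus.
import numpy as np

def aperture_flux(image, x0, y0, r_ap, r_in, r_out):
    """Sum pixels within r_ap of (x0, y0), minus the median sky from the annulus."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky = np.median(image[annulus])              # robust per-pixel sky estimate
    return image[aperture].sum() - sky * aperture.sum()

# Example on a synthetic frame; radii are in pixels and purely illustrative.
img = np.random.default_rng(1).normal(100.0, 5.0, (200, 200))
img[95:105, 95:105] += 500.0                     # a fake star near (100, 100)
flux = aperture_flux(img, x0=100, y0=100, r_ap=8, r_in=15, r_out=25)
```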
The second transit (\\(N_{tr}=22\\)) was processed using an aperture of 10\\({}^{\\prime\\prime}\\) encircled by an annulus with inner and outer radii of 15\\({}^{\\prime\\prime}\\) and 25\\({}^{\\prime\\prime}\\), respectively. Seven comparison stars were used, all of them bright, isolated and far from the boundary of the FOV. Extinction correction, derived from the OOT points only, was applied to the resulting stellar light-curve. The final curve of this transit is plotted in Fig. 2. ### The resulting light-curve All photometry originating from the individual telescopes which contains significant OOT data has been merged, and is presented in Table 3. We give both the ratio of the observed flux to the OOT flux of HD 189733 (\"FR\"), and magnitudes that are very close to the standard Johnson/Cousins system (\"Mag\"). Due to the different observing conditions, instruments, photometry parameters (primarily the aperture) and various systematic effects (changing FWHM), the zero-point of the observations were slightly offset. Even for the same instrument, filter-setup, and magnitude reference frame, the zero-points in the flat OOT section were seen to differ by 0.03mag. The offset can be explained by long-term systematic variations and by intrinsic variation of HD 189733. In order to correct for the offsets, for each transit observation (as indicated by \\(N_{tr}\\) in Table 3) we calculated both the median value from the OOT section by rejecting outliers, and also the rms around the median. The OOT median was used for two purposes. First, we normalized the flux values of the given light-curve segment at \\(N_{tr}\\), which are shown in the \"FR\" (flux-ratio) column of Table 3. Second, we shifted the magnitudes to the standard system in order to present reasonable values in the \"Mag\" column. For the standard system we used the Hipparcos values, except for R-band, which was derived by assuming \\(R-I=0.48\\) from Cox (2000). The formal magnitude errors that are given in the \"Merr\" column are based on the photon-noise of the source and the background-noise (e.g. Newberry, 1991). They are in a self-consistent system, but they underestimate the real errors, which have contributions from other noise factors, such as i) scintillation (Young, 1967; Gilliland & Brown, 1988), ii) calibration frames (Newberry, 1991), iii) magnitude transformations depending on the reference stars and imperfectly corrected extinction (indicators of this error source are the \\(\\Delta M_{ext}\\) extinction and the \\(\\sigma_{mfit}\\) rms of extinction corrections in Table 3). Because it is rather difficult to calculate these factors, we assumed that the observed rms in the OOT section of the light-curves is a relevant measure of the overall noise, and used this to normalize the error estimates of the individual flux-ratios (column \"FRerr\", see later SS2.6). ### Merger analysis HD 189733 has a number of faint, close-by neighbors that can distort the light-curve, and may bias the derived physical parameters. These blends can have the following second-order effects: i) the measured transit will appear shallower, as if the planetary radius was smaller, ii) the depth and shape of the transit will be color-dependent in a different way than one would expect from limb-darkening models, iii) differential extinction can yield an asymmetric light-curve, iv) variability of a faint blend can influence the observed light-curve. 
Our goal was to calculate the additional flux in the various apertures and bandpasses shown in Table 2, and correct our observed flux-ratios (Table 3, column FR) to a realistic flux-ratio (FR\\({}_{\\rm corr}\\)). The 2MASS point-source catalogue (Cutri et al., 2003) lists some 30 stars within 45\\({}^{\\prime\\prime}\\), which is the aperture used at the HAT-5,6,9 telescopes, 5 stars within 20\\({}^{\\prime\\prime}\\) (FLWO1.2m), and 3 stars within 15\\({}^{\\prime\\prime}\\), which may affect the measurements of the 10\\({}^{\\prime\\prime}\\) apertures of OHP1.2, Wise1.0 and TopHAT (apertures are listed in Table 2). To check the reality of listed blends, and to search for additional merger stars, we inspected the following sources: the Palomar Observatory Sky Survey (POSS I) red plates (epoch 1951), the Palomar Quick-V Survey (QuickV, epoch 1982), the Second Palomar Sky Survey (POSS II) plates (epoch 1990 - 1996), the 2MASS J, H and \\(K_{S}\\) scans (epoch 2000), and our own images. We can use the fact that HD 189733 is a high proper motion star with velocity of \\(\\sim 0.25^{\\prime\\prime}/yr\\) pointing South. It was \\(\\sim 13^{\\prime\\prime}\\) to the North on the POSS-I plates, and \\(\\sim 4^{\\prime\\prime}\\) N on POSS-II, thus we can check its _present_ place when it was not hidden by the glare of HD 189733. The analysis is complicated by the saturation, diffraction spikes, and the limited scan resolution (1.7\\({}^{\\prime\\prime}\\)/pix) of POSS-I, but we can confirm that there is no significant source at the epoch 2005 position of HD 189733 down to \\(\\sim\\)4mag fainter in R-band. The reality of all the 2MASS entries was double checked on the POSS frames. There are only two additional faint sources that are missing from the 2MASS point-source catalogue, but detected by our star-extraction on the 2MASS J, H and K scans; the first one at \\(\\alpha=20^{h}00^{m}45.12^{s}\\), \\(\\delta=+22^{\\circ}42^{\\prime}36.5^{\\prime\\prime}\\) and the second at \\(\\alpha=20^{h}00^{m}43.20^{s}\\), \\(\\delta=+22^{\\circ}42^{\\prime}42.5^{\\prime\\prime}\\). We made sure these sources are not filter-glints or persistence effects on the 2MASS scans; they are also visible on the POSS frames. Their instrumental magnitude was transformed to the J,H,\\(K_{S}\\) system using the other stars in the field that are identified in the point source catalogue. A rough linear transformation was derived between the 2MASS J, H and \\(K_{S}\\) colors and Johnson/Cousins B, V, R and I by cross-identification of \\(\\sim 450\\) Landolt (1992) standard stars, and performing linear regression. The uncertainty in the transformation can be as large as 0.1mag, but this is adequate for the purpose of estimating the extra flux (in BVRI), which is only about a few percent that of HD 189733. We find that the extra flux in a 45\\({}^{\\prime\\prime}\\) aperture is \\(\\delta=1.012\\), 1.016, 1.018 and 1.022 times the flux of HD 189733 in B, V, R and I-bands, respectively. The dominant contribution comes from the red star 2MASS 20004297+2242342 at 11.5\\({}^{\\prime\\prime}\\) distance, which is \\(\\sim\\)4.5mag fainter. This star has been found (Bakos et al., 2006) to be a physical companion to HD 189733 and thus may also be called HD 189733B (not to be confused with HD 189733b). For the 10\\({}^{\\prime\\prime}\\) aperture we assumed that half the flux of HD 189733B is within the aperture. 
The same \\(\\delta\\) flux contribution the 10\\({}^{\\prime\\prime}\\) aperture is 1.003, 1.005, 1.006 and 1.008 in BVRI. The corrected flux-ratios of the individual measurements to the median of the OOT were calculated in the manner \\({\\rm FR}_{\\rm corr}=1+\\delta\\cdot({\\rm FR}-1)\\), and are shown in Table 3. There is a small difference (\\(\\sim\\)2%) between the 10\\({}^{\\prime\\prime}\\) and 45\\({}^{\\prime\\prime}\\) flux contribution, thus we expect that the former measurements (OHP1.2, Wise1.0, TopHAT) show slightly deeper transits than the FLWO1.2m (r) and wide-field HATNet telescopes (I). ## 3. Deriving the physical parameters of the system We use the full analytic formula for nonlinear limb-darkening given in Mandel and Agol (2002) to calculate our transit curves. In addition to the orbital period and limb-darkening coefficients, these curves are a function of four variables, including the mass (\\(M_{*}\\)) and radius (\\(R_{*}\\)) of the star, the radius of the planet (\\(R_{\\rm P}\\)), and the inclination of the planet's orbit relative to the observer (\\(i_{\\rm P}\\)). Because these parameters are degenerate in the transit curve, we use \\(M_{*}=0.82\\pm 0.03M_{\\odot}\\) from B05 to break the degeneracy. As regards \\(R_{*}\\), there are two possible approaches: i) assume a fixed value from independent measurements (SS3.1), ii) measure the radius of the star directly from the transit curve, i.e. leave it to vary freely in the fit. Our final results are based on detailed analysis (SS3.2) using the first approach. To fully trust the second approach, one would need high precision data with relatively small systematic errors. Nevertheless, in order to check consistency, we also performed an analysis where the stellar radius was left as a variable in the fit, and also checked the effect of systematic variations in the light-curves (see later in SS3.2). Refined ephemerides and center of transit time residuals are discussed in SS3.3. ### The radius of HD 189733 Because the value of the stellar radius we use in our fit linearly affects the size of the planet radius we obtain, we use several independent methods to check its value and uncertainty. First methodFor our first calculation we use 2MASS (Cutri et al., 2003) and Hipparcos photometry (Perryman et al., 1997) to find the V-band magnitude and V-K colors of the star, and use the relation described in Kervella et al. (2004) to find the angular size of the star. Because this relation was derived using Johnson magnitudes, we first convert the 2MASS \\(\\rm K_{S}=5.541\\pm 0.021\\) magnitude to the Bessell-Brett homogenized system, which in turn is based on the SAAO system, and thus is the closest to Johnson magnitude available (Carpenter, 2005). We obtain a value of \\(K=5.59\\pm 0.05\\). Most of the error comes from the uncertainty in the \\(\\rm J-K_{S}\\) color, which is used in the conversion. The Johnson V-band magnitude from Hipparcos is \\(7.67\\pm 0.01\\). This gives a V-K color of \\(2.09\\pm 0.06\\). From Kervella et al. (2004), the limb-darkened angular size of a dwarf star is related to its K magnitude and V-K color by: \\[log(\\theta)=0.0755(V-K)+0.5170-0.2K\\,. \\tag{1}\\] Given the proximity of HD 189733 (\\(19.3\\pm 0.3\\)pc), reddening can be neglected, despite its low galactic latitude. The relation gives an angular size of \\(0.36\\pm 0.02\\)mas for the stellar photosphere, where the error estimate originates from the errors of V-K and K. 
The small dispersion of the relation was not taken into account in the error estimate, as it was determined by Kervella et al. (2004) using a fit to a sample of stars with known angular diameters to be less than 1%. Using the Hipparcos parallax we find that \\(R_{*}=0.75\\pm 0.05R_{\\odot}\\). Second methodWe also derive the radius of the star directly from the Hipparcos parallax, V-band magnitude, and temperature of the star. We first convert from apparent magnitude to absolute magnitude and apply a bolometric correction (Bessell et al., 1998). To solve for the radius of the star we use the relation: \\[M_{b}=4.74-2.5log\\left[\\left(\\frac{T_{eff,*}}{T_{eff,\\odot}}\\right)^{4}\\left( \\frac{R_{*}}{R_{\\odot}}\\right)^{2}\\right] \\tag{2}\\] For \\(T_{eff,*}=5050\\pm 50\\) K effective temperature (B05), we measure a radius of \\(0.74\\pm 0.03R_{\\odot}\\). Third method - isochronesAn additional test on the stellar radius and its uncertainty comes from stellar evolution models. We find from the Girardi et al. (2002) models that the isochrone gridpoints in the (\\(T_{\\rm eff}\\), \\(\\log g\\)) plane that are closest to the observed values (\\(T_{eff,*}=5050\\pm 50K\\), \\(\\log g=4.53\\pm 0.14\\)) prefer slightly evolved models with \\(M_{*}\\approx 0.80M_{\\odot}\\) and \\(R_{*}\\approx 0.79R_{\\odot}\\). Alternatively, the Hipparcos \\(V=7.67\\pm 0.01\\) magnitude, combined with the \\(m-M=-1.423\\pm 0.035\\) distance modulus yields \\(M_{V}=6.25\\pm 0.04\\) absolute V magnitude, and the closest isochrone gridpoints prefer less evolved stars with \\(M_{*}\\approx 0.80M_{\\odot}\\) and \\(R_{*}\\approx 0.76R_{\\odot}\\). The discrepancy between the above two approaches decreases if we adopt a slightly larger distance modulus. Finally, comparison to the Baraffe et al. (1998) isochrones yields \\(M_{*}\\approx 0.80M_{\\odot}\\) and \\(R_{*}\\approx 0.76R_{\\odot}\\). From isochrone fitting, the error on the stellar radius can be as large as \\(0.03R_{\\odot}\\). Fourth methodRecently Masana et al. (2006) calibrated the effective temperatures, angular semi-diameters and bolometric corrections for F, G, K type stars based on V and 2MASS infrared photometry. 
\\begin{table} \\begin{tabular}{l l l l l l l l l l l l} \\hline \\hline \\multicolumn{1}{c}{ Tel.} & \\multicolumn{1}{c}{Fil.} & \\multicolumn{1}{c}{\\(N_{tr}\\)} & \\multicolumn{1}{c}{HJD} & \\multicolumn{1}{c}{Mag} & \\multicolumn{1}{c}{Merr} & \\multicolumn{1}{c}{FR} & \\multicolumn{1}{c}{FR\\({}_{corr}\\)} & \\multicolumn{1}{c}{FR\\({}_{err}\\)} & \\multicolumn{1}{c}{\\(\\Delta M_{ext}\\)} & \\multicolumn{1}{c}{\\(\\sigma_{mfit}\\)} & \\multicolumn{1}{c}{Qflag} \\\\ & & & & \\multicolumn{1}{c}{(mag)} & & & & & & \\multicolumn{1}{c}{(mag)} & \\multicolumn{1}{c}{(mag)} & \\multicolumn{1}{c}{(mag)} & \\\\ \\hline OHP1.2 & B & 0 & 2453629.3205430 & 8.6062 & \\(\\cdots\\) & 0.99614 & 0.99612 & \\(\\cdots\\) & \\(\\cdots\\) & \\(\\cdots\\) & \\(\\cdots\\) \\\\ FLWO1.2 & r & 6 & 2453642.6001600 & 7.1886 & 0.0008 & 1.02934 & 1.02972 & 0.00074 & \\(\\cdots\\) & \\(\\cdots\\) & \\(\\cdots\\) \\\\ HAT-5 & I & 6 & 2453642.5903353 & 6.7452 & 0.0017 & 0.99522 & 0.99511 & 0.00156 & -0.147 & 0.0142 & G \\\\ HAT-6 & I & 6 & 2453642.6082715 & 6.7357 & 0.0017 & 1.00397 & 1.00406 & 0.00156 & -0.011 & 0.0160 & G \\\\ TopHAT & V & 6 & 2453642.6042285 & 7.6717 & 0.0009 & 0.99844 & 0.99841 & 0.00083 & -0.097 & 0.0080 & G \\\\ HAT-9 & I & 7 & 2453644.8307479 & 6.7383 & 0.0019 & 1.00157 & 1.00160 & 0.00175 & -0.015 & 0.0081 & G \\\\ Wise1.0 & B & 22 & 2453678.1963150 & 8.6374 & 0.0013 & 0.96792 & 0.96779 & 0.00120 & \\(\\cdots\\) & \\(\\cdots\\) \\\\ \\hline \\end{tabular} Note. – This table is published in its entirety (2938 lines) in the electronic edition of the paper. A portion is shown here regarding its form and content with a sample line for each telescope in the order they observed a transit with an OOT section. Column \\(\\bf N_{tr}\\) is the number of transits since the discovery data by OHP on HJD = 2453629.3. Values in the \\(\\bf Mag\\) (magnitude) column have been derived by shifting the zero-point of the particular dataset at \\(N_{tr}\\) to bring the median of the OOT section to the standard magnitude value in the literature. \\(\\bf Merr\\) (and FR\\({}_{brv}\\)) denote the _formal_ magnitude (and flux-ratio) error estimates based on the photon and background noise (not available for all data). The flux-ratio \\(\\bf FR\\) shows the ratio of the individual flux measurements to the sigma-clipped median value of the OOT at that particular \\(N_{tr}\\) transit observation. The merger-corrected flux-ratio FR\\({}_{corr}\\) is described in detail in §2.6. The \\(\\bf M_{ext}\\) is a measure of the extinction on a relative scale (instrumental magnitude of reference minus image), \\(\\sigma_{mfit}\\) is the rms of the magnitude fit between the reference and the given frame. Both of these quantities are useful measures of the photometric conditions. \\(\\bf Qflag\\) is the quality flag: “G” means good, “C” indicates that the measurement should be used with caution, e.g. the star was marked as saturated. Fit of the transit parameters were performed using the HJD, FR, FR\\({}_{corr}\\) and FR\\({}_{var}\\) columns. \\end{table} Table 3The light-curve of HD 189733. They provide - among other parameters - angular semi-diameters and radii for a large sample of Hipparcos stars. For HD 189733 they derived \\(R=0.758\\pm 0.016R_{\\odot}\\). SummaryAltogether, the various methods point to a stellar radius in the range of 0.74 to 0.79\\(R_{\\odot}\\), with mean value being \\(\\sim 0.76R_{\\odot}\\). In the subsequent analysis we accept the Masana et al. (2006) value of \\(0.758\\pm 0.016R_{\\odot}\\). 
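As a cross-check of the first method, the arithmetic of Eq. (1) can be reproduced in a few lines from the V and K magnitudes and the distance quoted above; here \\(\\theta\\) is treated as the limb-darkened angular diameter, which recovers the quoted \\(0.75R_{\\odot}\\):

```python
# Reproduce the surface-brightness estimate of R_* from V, K, and the distance.
# theta from the Kervella et al. (2004) relation is treated here as the
# limb-darkened angular *diameter*; inputs are the values quoted in the text.
import math

V, K = 7.67, 5.59              # Johnson V and (converted) Johnson K
d_pc = 19.3                    # Hipparcos distance in parsec

log_theta = 0.0755 * (V - K) + 0.5170 - 0.2 * K   # Eq. (1), theta in mas
theta_mas = 10 ** log_theta                        # ~0.36 mas

mas_to_rad = math.pi / (180.0 * 3600.0 * 1000.0)
pc_to_m, R_sun = 3.0857e16, 6.957e8

R_star = 0.5 * theta_mas * mas_to_rad * d_pc * pc_to_m / R_sun
print(f"theta = {theta_mas:.3f} mas, R_* = {R_star:.2f} R_sun")   # ~0.75 R_sun
```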
### Fitting the transit curve We set the mass and radius of the star equal to \\(0.82\\pm 0.03M_{\\odot}\\) and \\(0.758\\pm 0.016\\)\\(R_{\\odot}\\), respectively, and fit for the planet's radius and orbital inclination. The goodness-of-fit parameter is given by: \\[\\chi^{2}=\\sum_{i=1}^{N}\\left(\\frac{p_{i}-m_{i}}{\\sigma_{m,i}}\\right)^{2} \\tag{3}\\] where \\(m_{i}\\) is the \\(i^{th}\\) measured value for the flux from the star (with the median of the out of transit points normalized to one), \\(p_{i}\\) is the predicted value for the flux from the theoretical transit curve, and \\(\\sigma_{m,i}\\) is the error for each flux measurement. For the OHP1.2 and Wise data, where independent errors for each flux measurement are not available, we set the \\(\\sigma_{m,i}\\) errors on all points equal to the standard deviation of the out of transit points. For the FLWO1.2, HAT and TopHAT data, where relative errors for individual points are available, we set the median error equal to the standard deviation of the out of transit points and use that to normalize the relative errors. We also allow the locations of individual transits to vary freely in the fit. When calculating our transit curves, we use the nonlinear limb-darkening law defined in Claret (2000): \\[I(r)=1-\\sum_{n=1}^{4}c_{n}(1-\\mu^{n/2}) \\tag{4}\\] where \\[\\mu=cos\\theta\\,. \\tag{5}\\] We select the four-parameter nonlinear limb-darkening coefficients from Claret (2000) for a star with \\(T=5000K\\), \\(log(g)=4.5\\), \\([Fe/H]=0.0\\), and a turbulent velocity of 1.0 km/s. The actual parameters for the star, from B05, are rather close to this: \\(T=5050\\pm 50K\\), \\(log(g)=4.53\\pm 0.14\\), and \\([Fe/H]=-0.03\\pm 0.04\\). To determine the best-fit radius for the planet, we evaluate the \\(\\chi^{2}\\) function over all _full_ transits simultaneously, using the same values for the planetary radius and inclination. For this purpose, we employed the downhill simplex minimization routine (amoeba) from Press et al. (1992). The full transits and the fitted curves are exhibited on Fig. 1, the transit parameters are listed in Table 4. In order to determine the 1\\(\\sigma\\) errors, we fit for the inclination and the radius of the planet using the 1\\(\\sigma\\) values for the mass and radius of the star (assuming they are uncorrelated). We find that the mass of the star contributes errors of \\(\\pm 0.004\\)\\(R_{\\rm J}\\) and \\(\\pm 0.12^{\\circ}\\), and the radius of the star contributes errors of \\(\\pm 0.032\\)\\(R_{\\rm J}\\) and \\(\\pm 0.21^{\\circ}\\). Using a bootstrap Monte Carlo method, we also estimate the errors from the scatter in our data, and find that this scatter contributes an error of \\(\\pm 0.005\\)\\(R_{\\rm J}\\) and \\(\\pm 0.03^{\\circ}\\) to the final measurement. This gives us a total error of \\(\\pm 0.032\\)\\(R_{\\rm J}\\) for the planet radius and \\(\\pm 0.24^{\\circ}\\) for the inclination. Our best-fit parameters gave a reduced \\(\\chi^{2}\\) value of 1.23. The excess in the reduced \\(\\chi^{2}\\) over unity is the result of our method for normalizing the relative errors for data taken at \\(N_{tr}=6\\), where the RMS variation in the data increases significantly towards the end of the data set, as the source moved closer to the horizon. For these data we define our errors as the standard deviation of the data before the transit, where the scatter was much smaller. 
This is justified because we know from several sources (night webcamera, raw photon counts) that the conditions were similar (photometric) before and during the transit, and the errors before the transit better represent those inside the transit. This underestimates the errors for data after the transit, inflating the \\(\\chi^{2}\\) function accordingly. We find that when we exclude the FLWO1.2 data after the end of the transit (the FLWO1.2 data contain significantly more points than any other single data set), the reduced \\(\\chi^{2}\\) for the fit decreases to 0.93. The results of the planet transit fit are shown in Table 4. The value for the radius of the planet \\(R_{\\rm P}=1.15\\pm 0.03\\)\\(R_{\\rm J}\\) is smaller than the B05 value ( \\begin{table} \\begin{tabular}{l r} \\hline \\hline \\multicolumn{1}{c}{ Parameter} & \\multicolumn{1}{c}{Best-Fit Value} \\\\ \\hline \\(R_{\\rm P}(R_{\\rm J})\\) & 1.154 \\(\\pm 0.032\\) \\\\ \\(ip(^{\\circ})\\) & 85.79 \\(\\pm 0.24\\) \\\\ \\(M_{\\star}(M_{\\odot})\\) & 0.82 \\(\\pm 0.03\\)a \\\\ \\(R_{\\star}(R_{\\odot})\\) & 0.758 \\(\\pm 0.016\\)b \\\\ Period (days) & \\(2.218573\\pm 0.000020\\) \\\\ \\(T_{0}\\) (HJD) & \\(2453629.39420\\pm 0.00024\\) \\\\ \\hline \\end{tabular} \\end{table} Table 4Parameters from simultaneous fit of transit curves. Figure 1.— The five full eclipses examined in this work, with best-fit transit curves over-plotted. The figure in the electronic edition is color-coded according to the bandpass used. \\(1.26\\pm 0.03\\)\\(R_{\\rm J}\\)), and the inclination of \\(85.8\\pm 0.2^{\\circ}\\) is slightly larger than the B05 value (\\(85.3\\pm 0.1^{\\circ}\\)). Although our errors are comparable to the errors given by B05, despite the superior quality of the new data, we note that this is a direct result of the larger error (\\(\\pm 0.016\\) instead of \\(\\pm 0.01\\)) for the stellar radius we use in our fits. As discussed in SS3.1, we feel that this error, which is based on the effective temperature and bolometric magnitude of the star, is a more accurate reflection of the uncertainties in the measurement of the radius of the star. We note that the errors are dominated by the uncertainties in the stellar parameters (notably \\(R_{*}\\)). Fitting with unconstrained stellar radiusWe note that when we fit for the stellar radius directly from the transit curves (meaning we fit for the planet radius, orbital inclination, and stellar radius, but set the stellar mass equal to \\(0.82\\)\\(M_{\\odot}\\)), we measure a stellar radius of \\(0.678\\pm 0.015\\)\\(R_{\\odot}\\) and planet radius of \\(R_{\\rm P}=0.999\\pm 0.026R_{\\rm J}\\). The errors for these measurements are from a bootstrap Monte Carlo analysis, and represent the uncertainties in our data alone. To obtain the formal errors, we incorporate the error from the mass of the star and find errors of \\(\\pm 0.017R_{\\odot}\\) and \\(0.029R_{\\rm J}\\), respectively. This means that our data prefer a significantly smaller stellar radius (and a correspondingly smaller planet radius) than our estimates based on temperature, bolometric magnitude, and V-K colors alone would lead us to expect, or a radius smaller than the \\(0.82M_{\\odot}\\) stellar mass implies. With many more points (869 as compared to \\(\\sim 100\\) in the other data-sets) and lower photon-noise uncertainties, the FLWO1.2 data dominate the fit of Eq. 3. 
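For reference, the goodness-of-fit of Eq. (3) and the nonlinear limb-darkening law of Eqs. (4)-(5) can be written compactly as below. This is a minimal sketch under stated assumptions: the coefficients c1..c4 must be taken from Claret (2000) for the adopted stellar parameters, the model fluxes p_i would come from a limb-darkened transit model, and none of the numerical values used in the paper are reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def claret_intensity(mu, c):
    """Nonlinear limb-darkening law of Eq. (4): I(mu) = 1 - sum_n c_n (1 - mu^(n/2)).

    c = (c1, c2, c3, c4) are the Claret (2000) coefficients for the adopted
    T_eff, log g and [Fe/H]; mu = cos(theta) as in Eq. (5).
    """
    mu = np.asarray(mu, dtype=float)
    return 1.0 - sum(cn * (1.0 - mu ** ((n + 1) / 2.0)) for n, cn in enumerate(c))

def chi2(params, flux_obs, sigma, model_flux):
    """Goodness-of-fit of Eq. (3); `model_flux(params)` must return the predicted p_i."""
    r = (model_flux(params) - np.asarray(flux_obs)) / np.asarray(sigma)
    return float(np.sum(r * r))

# A downhill-simplex ("amoeba"-like) minimisation over (R_P, inclination) could read:
# best = minimize(chi2, x0, args=(flux_obs, sigma, model_flux), method="Nelder-Mead")
```

In practice the predicted fluxes would be generated by a limb-darkened transit model (e.g. Mandel & Agol 2002, cited in the references), with the minimisation playing the role of the amoeba routine of Press et al. (1992) used in the actual fits.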
However, we repeated our fit with and without these data, and found that the best-fit radius for the star decreased only slightly (to \\(0.666\\)\\(R_{\\odot}\\)) when the FLWO1.2 data were excluded from the fit. Thus, our I, V, and B-band data independently yield values for the stellar radius similar to those implied by the FLWO1.2 data. The effect of systematic errorsThe \\(\\chi^{2}\\) minimization formula (Eq. 3) assumes independent noise, and the presence of covariance in the data (due to systematics in the photometry) means that too much weight may be given to a dataset having small formal errors and a great number of datapoints (e.g. the FLWO1.2 data) compared to the other independent data sets (e.g. other telescopes and filters). This is especially a concern when the datasets yield different transit parameters, and one needs to establish whether this difference is significant. In order to follow-up this issue, we repeated the global fit by assuming that the photometric systematics were dominant in the error budget on the parameters - as suggested by our experience with milli-magnitude rapid time-series photometry. We estimated the amplitude of the covariance from the variance of 20-minute sliding averages on the residuals around the best-fitting transit light-curve for each night following the method of Pont (2005). The fit was repeated using these new weights (listed in Table 2, \\(\\sigma_{sys}\\)), and the resulting parameters (planetary radius, inclination) were within 1% of the values found assuming independent noise. The dispersion of these parameters from the individual nights were found to be compatible with the uncertainties due to the systematics. The amplitude of the systematics is also sufficient to account for the difference in the best-fit stellar radius if it is left as a free parameter. Therefore, with the amplitude of the covariance in the photometry determined from the data itself, we find that the indications of discrepancy between the different data sets and with the assumed primary radius are not compelling at this point. ### Ephemerides The transit curves derived from the full-transits for each bandpass were used in turn to calculate the ephemerides of HD 189733 using _all_ transits that have significant OOT and in-transit sections present (for reference, see Table 2). For each transit (full and partial), the center of transit \\(T_{C}\\) was determined by \\(\\chi^{2}\\) minimization. Partial transits with the fitted curve overlaid are exhibited on Fig. 2. Errors were assigned to the \\(T_{C}\\) values by perturbing \\(T_{C}\\) so that \\(\\chi^{2}\\) increases by unity. The individual \\(T_{C}\\) transit locations and their respective errors are listed in Table 5. The typical timing errors were formally of the order of 1 minute. This, however, does not take into account systematics in the shape of the light-curves. The errors in \\(T_{C}\\) can be estimated from the simultaneous transit observations, for example the \\(N_{tr}=6\\) event Figure 2.— The ten partial eclipses examined in this work, with best-fit transit curves over-plotted. The eclipses are listed sequentially by date, from top left to bottom right. These eclipses were not used in the fit for the planet radius, inclination, stellar mass, and stellar radius. The figure in the electronic edition is color-coded according to the bandpass used. 
was observed by the FLWO1.2m, HAT-5, HAT-6 and TopHAT telescopes (Table 2) and the rms of \\(T_{C}\\) around the median is \\(\\sim 50\\) seconds, which is in harmony from the above independent estimate of 1 minute. We applied an error weighted least square minimization on the \\(T_{C}=P\\cdot N_{tr}+E\\) equation, where the free parameters were the period \\(P\\) and epoch \\(E\\). The refined ephemeris values are listed in Table 4. They are consistent with both those derived by B05 and by Hebrard & Lecavelier Des Etangs (2006) using Hipparcos and OHP1.2 data, to within \\(1\\sigma\\) using our error bars. We also examined the Observed minus Calculated (O-C) residuals, as their deviation can potentially reveal the presence of moons or additional planetary companions (Holman & Murray, 2005; Agol et al., 2005). The O-C values are listed in Table 4, and plotted in Fig. 3. Using the approximate formula from Holman & Murray (2005), as an example, a \\(0.15M_{\\rm J}\\) perturbing planet on a circular orbit, at 2 times the distance of HD 189733b (\\(P\\approx 6.3^{d}\\)) would cause variations in the transit timings of 2.5 minutes. The radial velocity semi-amplitude of HD 189733 as induced by this hypothetical planet would be 19m/s, which would be barely noticeable (at the \\(1\\sigma\\) level) from the discovery data having residuals of 15m/s and spanning only 30 days. A few, seemingly significant outlier points on the O-C diagram are visible, but we believe that it would be premature to draw any conclusions, because: i) the errorbars do not reflect the effect of systematics, and for example the \\(T_{C}\\) of the \\(N_{tr}=0\\) OHP discovery data moved by \\(\\sim 5\\) minutes after re-calibration of that dataset, ii) all negative O-C outliers are B or V-band data, which is suggestive of an effect of remaining color-dependent systematics. The significance of a few outliers is further diminished by the short dataset we have; no periodicity can be claimed by observing 2 full and 9 partial transits altogether. According to the theory, the nature of perturbations would be such that they appear as occasional, large outliers. Thus, the detection of potential perturbations also benefits from the study of numerous sequential transits, for example, the MOST mission with continuous coverage and uniform data would be suitable for such study (Walker et al., 2003). We also draw the attention to the importance of observing _full_ transits, as they improve the \\(T_{C}\\) center of transit by a significant factor, partly because of the presence of ingress and egress, and also due to a better treatment of the systematics. If a planet is perturbed by another planet, the transit-time variations \\(\\Delta t\\) are proportional to the period \\(P\\) of the perturbed planet (Holman & Murray, 2005). Although HD 189733b is a relatively short period (\\(2.21^{d}\\)) planet compared to e.g. HD 209458b (\\(3.5^{d}\\)), it is a promising target for detecting transit perturbations in the future, because the mass of the host star is low and \\(\\Delta t\\propto 1/M_{*}\\), plus the deep transit of the bright source will result in very precise timing measurements. Observations spanning several months to many years may be needed to say anything definite about the presence or absence of a periodic perturbation. ## 4. Conclusions Our final values for the planet radius and orbital inclination were derived by fixing the stellar radius and mass to independently determined values from B05 and Masana et al. (2006). 
We analyzed the dataset in two ways: by \\(\\chi^{2}\\) minimization assuming independent errors, and also by assuming that photometric systematics were dominant in the error budget. Both methods yielded the same transit parameters within \\(1\\%\\): if we assume there is no additional unresolved close-in stellar companion to HD 189733 to make the transits shallower, then we find a planet radius of \\(1.154\\pm 0.032R_{\\rm J}\\) and an orbital inclination of \\(85.79\\pm 0.24^{\\circ}\\) (Table 4). The uncertainty in \\(R_{\\rm P}\\) is primarily due to the uncertainty in the value of the stellar radius. We note that the TopHAT V-band full and partial transit data, as well as the Wise partial B-band data appear slightly deeper than the best fit to the analytic model. The precision of the dataset is not adequate to determine if this potential discrepancy is caused by a real physical effect (such as a second stellar companion) or to draw further conclusions. When compared to the discovery data, the radius decreased by 10%, and HD 189733b is in the mass and radius range of \"normal\" exoplanets (Fig. 4). The revised radius estimate is consistent with structural models of \\begin{table} \\begin{tabular}{l c c c c c} \\hline \\hline \\multicolumn{1}{c}{ Telescope} & \\(N_{T}\\) & \\(T_{C}\\) & \\(\\sigma_{\\rm HJD}\\) & (O-C) & \\(\\sigma\\)HJD \\\\ & & (HJD, days) & (days) & (days) & \\\\ \\hline OHP1.2 & 0 & 2453629.39073 & \\(\\pm 0.00059\\) & \\(-0.0035\\) & \\(-5.9\\) \\\\ OHP1.2 & 4 & 2453638.26858 & \\(\\pm 0.00067\\) & \\(0.00035\\) & \\(0.53\\) \\\\ OHP1.2 & 5 & 2453640.48706 & \\(\\pm 0.00174\\) & \\(-0.000079\\) & \\(-0.0045\\) \\\\ FLWO1.2 & 6 & 2453642.70592 & \\(\\pm 0.00022\\) & \\(0.00029\\) & \\(1.24\\) \\\\ HAT-5 & 6 & 2453642.70641 & \\(\\pm 0.00092\\) & \\(0.00077\\) & \\(0.84\\) \\\\ HAT-6 & 6 & 2453642.70649 & \\(\\pm 0.00049\\) & \\(0.00085\\) & \\(1.7\\) \\\\ TopHAT & 6 & 2453642.70536 & \\(\\pm 0.00048\\) & \\(-0.00028\\) & \\(-0.57\\) \\\\ HAT-9 & 7 & 2453644.92720 & \\(\\pm 0.00111\\) & \\(0.0030\\) & \\(2.7\\) \\\\ HAT-9 & 16 & 2453664.89287 & \\(\\pm 0.00108\\) & \\(0.0015\\) & \\(1.4\\) \\\\ HAT-5 & 19 & 2453671.54999 & \\(\\pm 0.00113\\) & \\(0.0029\\) & \\(2.6\\) \\\\ ToplHAT & 19 & 245367.514849 & \\(\\pm 0.00096\\) & \\(0.0014\\) & \\(1.5\\) \\\\ HAT-5 & 20 & 2453673.76725 & \\(\\pm 0.00062\\) & \\(0.0016\\) & \\(2.2\\) \\\\ Wise & 22 & 2453678.20080 & \\(\\pm 0.00050\\) & \\(-0.0020\\) & \\(-4.0\\) \\\\ TopHAT & 24 & 2453682.63715 & \\(\\pm 0.00100\\) & \\(-0.0028\\) & \\(-2.8\\) \\\\ HAT-9 & 29 & 2453693.73327 & \\(\\pm 0.00090\\) & \\(0.00045\\) & \\(0.51\\) \\\\ \\hline \\end{tabular} Note. – These are the best-fit locations for the centers of the fifteen full and partial eclipses examined in this work. We also give the number of elapsed transits \\(N_{T}\\) and O-C residuals for each eclipse. \\end{table} Table 5Best-Fit Transit Locations Figure 3.— The residuals calculated using the period and \\(T_{0}\\) derived in this work. The dashed lines are calculated from the uncertainties in the measurements of P and \\(T_{0}\\). The figure in the electronic edition is color-coded according to the bandpass used. hot Jupiters that include the effects of stellar insolation, and hence it does not require the presence of an additional energy source, as is the case for HD 209458b. On the mass-radius diagram HD 209458b remains an outlier with anomalously low density. 
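The ephemeris fit of §3.3 - an error-weighted least-squares fit of the transit centers to T_C = P·N_tr + E, from which the O-C residuals of Table 5 follow - can be sketched as below. This is a schematic re-implementation, not the code used for the paper; the input arrays would be the T_C, N_tr and timing-error values of Table 5.

```python
import numpy as np

def fit_ephemeris(n_tr, t_c, sigma_tc):
    """Error-weighted least-squares fit of T_C = P * N_tr + E (cf. Sec. 3.3).

    Returns (P, E), their formal 1-sigma errors, and the O-C residuals.
    """
    n_tr = np.asarray(n_tr, dtype=float)
    t_c = np.asarray(t_c, dtype=float)
    sig = np.asarray(sigma_tc, dtype=float)
    # Design matrix of the linear model, with rows weighted by 1/sigma.
    A = np.vstack([n_tr, np.ones_like(n_tr)]).T / sig[:, None]
    b = t_c / sig
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    cov = np.linalg.inv(A.T @ A)          # formal covariance of (P, E)
    P, E = coef
    o_minus_c = t_c - (P * n_tr + E)
    return (P, E), np.sqrt(np.diag(cov)), o_minus_c
```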
We note that the parameters of OGLE-10b are still debated (Konacki et al., 2005; Holman et al., 2006; Santos et al., 2006), but according to the recent analysis of Santos et al. (2006), it also has anomalously low density. With its revised parameters, HD 189733b is quite similar to OGLE-TR-132b (Moutou et al., 2004). The smaller radius leads to a higher density of \\(\\sim 1\\)g cm\\({}^{-3}\\) as compared to the former \\(\\sim 0.75\\)g cm\\({}^{-3}\\). The smaller planetary radius increases the 16\\(\\mu\\)m brightness temperature \\(T^{(16\\mu)}=1117\\pm 42\\)K of Deming et al. (2006) to \\(1279\\pm 90\\)K, which is slightly larger than that of TrES-1 and HD 209458b. We also derived new ephemerides, and investigated the outlier points in the O-C diagram. We have not found any compelling evidence for outliers that could be due to perturbations from a second planet in the system. We note however, that due to the proximity and brightness of the parent star, as well as the deep transit, the system is well suited for follow-on observations. Part of this work was funded by NASA grant NNG04GN74G. Work by G. A. B. was supported by NASA through grant HST-HF-01170.01-A Hubble Fellowship. H. K. is supported by a National Science Foundation Graduate Research Fellowship. D. W. L. thanks the Kepler mission for support through NASA Cooperative Agreement NCC2-1390. A. P. wishes to acknowledge the hospitality of the Harvard-Smithsonian Center for Astrophysics, where part of this work has been carried out. Work of A. P. was also supported by Hungarian OTKA grant T-038437. Research of T. M. and A. S. was partially supported by the German-Israeli Foundation for Scientific Research and Development. This publication makes use of data products from the Two Micron All Sky Survey (2MASS). We thank M. Hicken and R. Kirshner for swapping nights on the FLWO1.2m telescope on short notice. ## References * Agol et al. (2005) Agol, E., Steffen, J., Sari, R., & Clarkson, W. 2005, MNRAS, 359, 567 * As (2000) As. N. Cox. 2000, Allen's astrophysical quantities, 4th ed. Publisher: New York: AIP Press; Springer, 2000 * Bakos et al. (2002) Bakos, G. A., Lazar, J., Papp, I., Sari, P., & Green, E. M. 2002, PASP, 114, 974 * Bakos et al. (2004) Bakos, G. A., Noyes, R. W., Kovacs, G., Stanek, K. Z., Sasselov, D. D., & Domsa, I. 2004, PASP, 116, 266 * Bakos et al. (2001) Bakos, G. A. et al., ApJ, In Press * Baraffe et al. (1998) Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. H. 1998, A&A, 337, 403 * Baraffe et al. (2005) Baraffe, I., Chabrier, G., Barman, T. S., Selsis, F., Allard, F., & Hauschildt, P. H. 2005,A&A, 436, L47 * Bessell et al. (1998) Bessell, M.S., Castelli, F., & Plez, B. 1998, A&A, 333, 231 * Bouchy et al. (2004) Bouchy, F., Pont, F., Santos, N. C., Melo, C., Mayor, M., Queloz, D., & Udry, S. 2004, A&A, 421, L13 * Bouchy et al. (2005) Bouchy, F., et al.2005, A&A, 444, L15 * Carpenter (2005) Carpenter, J. 2005, ApJ, 121, 2851 * Charbonneau et al. (2006) Charbonneau, D., et al. 2006, ApJ, 636, 445 * Claret (2000) Claret, A. 2000, A&A, 363, 1081 * Cutri et al. (2000) Cutri, R. M., et al. 2000, Explanatory Supplement to the 2MASS Second Incremental Data Release (Pasadena: Caltech) * Cutri et al. (2003) Cutri, R. M., et al. 2003, VizieR Online Data Catalog, 2246, * Deming et al. (2006) Deming, D., Harrington, J., Seager, S., Richardson, L. J. 2006, astro-ph/0602443 * Gilliland & Brown (1988) Gilliland, R. L., & Brown, T. M. 1988, PASP, 100, 754 * Girardi et al. (2002) Girardi, L. et al. 
2002, A&A, 391, 195 * Hebrard & Lecavelier Des Etangs (2006) Hebrard, G., & Lecavelier Des Etangs, A. 2006, A&A, 445, 341 * Holman & Murray (2005) Holman, M. J., & Murray, N. W. 2005, Science, 307, 1288 * Holman et al. (2005) Holman, M. J., Winn, J. N., Stanek, K. Z., Torres, G., Sasselov, D. D., Allen, R. L., & Fraser, W. 2005, astro-ph/0506569 * Kervella et al. (2004) Kervella, P., Thevenin, F., Di Folco, E., & Segransan, D. 2004, A&A, 426, 297 * Konacki et al. (2004) Konacki, M., et al. 2004, ApJ, 609, L37 * Konacki et al. (2005) Konacki, M., Torres, G., Sasselov, D. D., & Jha, S. 2005, ApJ, 624, 372 * Landolt (1992) Landolt, A. U. 1992, AJ, 104, 340 * Laughlin et al. (2005) Laughlin, G., et al. 2005, ApJ, 621, 1072 * Mandel & Agol (2002) Mandel, K., & Agol, E. 2002, ApJ, 580, L171 * Masana et al. (2006) Masana, E., Jordi, C., & Ribas, I. 2006, astro-ph/0601049 * Mason et al. (2001) Mason, B. D., Wycoff, G. L., Hartkopf, W. I., Douglass, G. G., & Worley, C. E. 2001, AJ, 122, 3466 * Moutou et al. (2004) Moutou, C., Pont, F., Bouchy, F., & Mayor, M. 2004, A&A, 424, L31 * Newberry (1991) Newberry, M. V. 1991, PASP, 103, 122 * Perryman et al. (1997) Perryman, M. A. C., et al. 1997, A&A, 323, L29 * Pont et al. (2004) Pont, F., Bouchy, F., Queloz, D., Santos, N. C., Melo, C., Mayor, M., & Udry, S. 2004, A&A, 426, L15 * Pont et al. (2005) Pont, F., Bouchy, F., Melo, C., Santos, N. C., Mayor, M., Queloz, D., & Udry, S. 2005, A&A, 438, 1123 * Pont (2005) Pont, F. 2005, astro-ph/0510846 * Press et al. (1992) Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes (2d ed.; London: Cambridge Univ. Press) * Santos (2006) Santos, N. C. 2006, astro-ph/0601024 * Sato et al. (2005) Sato, B., et al. 2005, ApJ, 633, 465 * Skrutskie et al. (2000) Skrutskie, M. F., et al. 2000, VizieR Online Data Catalog, 1, 2003 * Stetson (1987) Stetson, P. B. 1987, PASP, 99, 191 * Tody (1993) Tody, D. 1993, ASP Conf. Ser. 52: Astronomical Data Analysis Software and Systems II, 52, 173 * Tody (1986) Tody, D. 1986, Proc. SPIE, 627, 733 * Torres et al. (2004) Torres, G., Konacki, M., Sasselov, D. D., & Jha, S. 2004, ApJ, 609, 1071 * Udalski et al. (2002a) Udalski, A., et al. 2002a, Acta Astronomica, 52, 1 * Udalski et al. (2002b) Udalski, A., et al. 2002b, Acta Astronomica, 52, 115 * Udalski et al. (2002c) Udalski, A., et al. 2002c, Acta Astronomica, 52, 317 * Walker et al. (2003) Walker, G., et al. 2003, PASP, 115, 1023 * Young (1967) Young, A. T. 1967, AJ, 72, 747
We report on the BVRI multi-band follow-up photometry of the transiting extrasolar planet HD 189733b. We revise the transit parameters and find planetary radius \\(R_{\\rm P}=1.154\\pm 0.032R_{\\rm J}\\) and inclination \\(i_{\\rm P}=85.79\\pm 0.24^{\\circ}\\). The new density (\\(\\sim 1\\rm g\\,cm^{-3}\\)) is significantly higher than the former estimate (\\(\\sim 0.75\\rm g\\,cm^{-3}\\)); this shows that from the current sample of 9 transiting planets, only HD 209458 (and possibly OGLE-10b) have anomalously large radii and low densities. We note that due to the proximity of the parent star, HD 189733b currently has one of the most precise radius determinations among extrasolar planets. We calculate new ephemerides: \\(P=2.218573\\pm 0.000020\\) days, \\(T_{0}=2453629.39420\\pm 0.00024\\) (HJD), and estimate the timing offsets of the 11 distinct transits with respect to the predictions of a constant orbital period, which can be used to reveal the presence of additional planets in the system. Subject headings: stars: individual: HD 189733 - planetary systems
# A scaling law for aeolian dunes on Mars, Venus, Earth, and for subaqueous ripples

Philippe Claudin and Bruno Andreotti, Laboratoire de Physique et Mecanique des Milieux Heterogenes, UMR CNRS 7636, ESPCI, 10 rue Vauquelin, 75231 Paris Cedex 05, France.

###### keywords: dune, saltation, Mars, instability. Pacs: 45.70.Qj (Pattern formation), 47.20.Ma (Interfacial instability), 96.30.Gc (Mars).

## 1 Introduction

We discuss the linear instability by which a flat sand bed sheared by a turbulent flow destabilises into dunes, and the framework in which this instability can be understood. A unique length scale is involved in this description, namely the sand flux saturation length \(L_{\rm sat}\), which scales on the drag length \(L_{\rm drag}=\frac{\rho_{s}}{\rho_{f}}d\), where \(d\) is the grain diameter and \(\rho_{s}/\rho_{f}\) the grain to fluid density ratio. It governs the scaling of the wavelength \(\lambda\) at which dunes or subaqueous ripples nucleate.

When a fluid flows over a bump, the flow accelerates on the upwind (stoss) side, so that the basal shear stress \(\tau\) increases there. Conversely, \(\tau\) decreases on the lee side. Assuming that the maximum amount of sand that can be transported by a given flow - the _saturated_ sand flux - is an increasing function of \(\tau\), erosion takes place on the stoss slope as the flux increases, and sand is deposited on the lee of the bump. If the velocity field were symmetric around the bump, the transition between erosion and deposition would be exactly at the crest, and this would lead to a pure propagation of the bump, without any change in amplitude (we call this the '\(A\)' effect, see below). In fact, due to the simultaneous effects of inertia and dissipation (viscous or turbulent), the velocity field is asymmetric (even on a symmetrical bump) and the position of the maximum shear stress is shifted upwind of the crest of the bump (the '\(B\)' effect). In addition, the sand transport reaches its saturated value with a spatial lag, characterized by the saturation length \(L_{\rm sat}\). The maximum of the sand flux \(q\) is thus shifted downwind of the point at which \(\tau\) is maximum by a typical distance of the order of \(L_{\rm sat}\). The criterion of instability is then geometrically related to the position at which the flux is maximum with respect to the top of the bump: an up-shifted position leads to a deposition of grains before the crest, so that the bump grows.

Figure 1: Schematic of the instability mechanism showing the stream lines around a bump, the fluid flowing from left to right. A bump grows when the point at which the sand flux is maximum is shifted upwind of the crest. The shift of the maximum shear stress scales on the size of the bump. The spatial lag between the shear and flux maxima is the saturation length \(L_{\rm sat}\).

The above qualitative arguments can be translated into a precise mathematical framework, see e.g. [14, 15]. For a small deformation of the bed profile \(h(t,x)\), the excess of shear stress induced by a non-flat profile can be written in Fourier space as:
\[\hat{\tau}=\tau_{0}(A+iB)k\hat{h}, \tag{1}\]
where \(\tau_{0}\) is the shear that would apply on a flat bed and \(k\) is the wave vector associated with the spatial coordinate \(x\). \(A\) and \(B\) are dimensionless functions of all parameters, and of \(k\) in particular. The \(A\) part is in phase with the bed profile, whereas the \(B\) one is out of phase, so that the modes of \(h\) and \(\tau\) of wavelength \(\lambda\) have a spatial phase difference of the order of \(\lambda B/(2\pi A)\). This shift is typically of the order of 10% of the length of the bump, as \(A\) and \(B\) are typically of the same order of magnitude. Expressions for \(A\) and \(B\) have been derived by Jackson and Hunt [16] in the case of a turbulent flow. As shown by Kroy _et al._ [17], \(A\) and \(B\) depend only weakly (logarithmically) on the wavelength, so that they can be considered as constant for practical purposes.

If the shear stress is below the dynamical threshold \(\tau_{\rm th}\), no transport is observed, hence the sand flux is null. Above this threshold, one observes on a flat bed a saturated flux \(Q\) which is a function of \(\tau_{0}\). The fact that a wind of a given strength can only transport a finite quantity of sand is due to the negative feedback of the grains transported by the fluid on the flow velocity profile - the moving grains slow down the fluid. This saturation process of the sand flux is still a matter of research [18, 19, 20]. We refer the interested reader to [20, 21] for a review discussion. For our present purpose, we need a first order but quantitative description of the saturated flux. Following [20], the wind tunnel data of Rasmussen _et al._ [22] are well described by the relationship:
\[Q=25\,\frac{\tau_{0}-\tau_{\rm th}}{\rho_{s}}\,\sqrt{\frac{d}{g}}\quad{\rm if}\quad\tau_{0}>\tau_{\rm th}\quad{\rm and}\quad Q=0\quad{\rm otherwise}, \tag{2}\]
where \(g\) is the gravitational acceleration. The prefactor 25 has been adjusted to fit the data and is reasonably independent of the grain size \(d\).

Figure 2: Sand transport relaxation lengths in units of \(L_{\rm drag}\) as a function of the rescaled wind velocity \(u_{*}/u_{\rm th}\), as predicted by the theoretical model presented in [20]. The inset shows the same curves on a linear scale. Starting from a vanishing flux, the sand transport first increases exponentially over a distance shown with \(\circ\) symbols. The dominant mechanism during this phase is the ejection of new grains when saltons collide with the sand bed. The corresponding spatial growth rate diverges at the threshold and rapidly decreases with larger \(u_{*}\). As the number of grains transported increases, the wind is slowed down in the saltation curtain, until saturation. The distance over which the flux relaxes towards its saturated value is shown with \(\bullet\) symbols. Except very close to the threshold, the dominant mechanism is the negative feedback of transport on the wind.

The equivalent of the relation (1) for the saturated flux on a modulated surface \(q_{\rm sat}\) can then be written as
\[\hat{q}_{\rm sat}=Q(\tilde{A}+i\tilde{B})k\hat{h}, \tag{3}\]
where, by the use of relation (2), the values of \(\tilde{A}\) and \(\tilde{B}\) simply verify \(A/\tilde{A}=B/\tilde{B}=1-\tau_{\rm th}/\tau_{0}\). As for any approach to an equilibrium state, there exists a relaxation length - or, equivalently, a relaxation time - associated with the saturation of the sand flux. This was already mentioned by Bagnold, who measured the spatial lag needed by the flux to reach its saturated value on a flat sand patch [23]. A saturation length \(L_{\rm sat}\) was first introduced in dune models by Sauermann _et al._ [24], where the dependence of \(L_{\rm sat}\) on \(\tau\) - and in particular its divergence as \(\tau\rightarrow\tau_{\rm th}\) - was put into the description phenomenologically. In fact, the saturation length should _a priori_ depend on the mode of transport - at the very least, it must exist in every situation where an equilibrium transport exists.
Since several mechanisms can be responsible for a lag before saturation (the time needed to accelerate a grain to the fluid velocity, the sedimentation of a transported grain [25], the progressive ejection of grains during collisions, the negative feedback of the transport on the wind, electrostatic effects between the transported grains and the soil, etc.), the dynamics is dominated by the slowest mechanism, so that \(L_{\rm sat}\) is the largest among the different possible relaxation lengths. In the aeolian turbulent case, there exists a detailed theoretical analysis [20] providing the dependence of the saturation length on the shear velocity (\(\bullet\) in figure 2). The curve roughly presents two zones. Very close to the threshold, the slowest process is the ejection of grains during collisions. It is thus natural to have a divergence of the saturation length at the threshold [24], as the replacement capacity crosses 1 (see the calculation of the dynamical threshold in the Appendix). From just above the threshold - say for \(u_{*}/u_{\rm th}\gtrsim 1.05\) - the saturation length gently increases (roughly linearly) with \(u_{*}\). However, in the field, the mean wind strength varies from day to day, as seasons go by. For practical purposes - e.g. photograph analysis - it is thus of fundamental importance to define an effective saturation length, independent of the wind velocity. Fortunately, the velocity rarely exceeds \(\sim 3\,u_{\rm th}\), so that, in the range of interest, the velocity dependence of \(L_{\rm sat}\) is subdominant. Our first important conclusion is thus that, even though it should be remembered that \(L_{\rm sat}\) slightly depends on \(u_{*}/u_{\rm th}\) in lab experiments, the dominant parameters are the grain size \(d\) and the sand to fluid density ratio. The theoretical analysis [20] shows that \(L_{\rm sat}\) scales on the drag length \(L_{\rm drag}=\frac{\rho_{s}}{\rho_{f}}\,d\), which is the length needed for a grain in saltation to be accelerated to the fluid velocity. It is worth noting that this inertial effect is however _not_ the mechanism limiting the saturation process (see above). Using the results of a field experiment performed in the Atlantic Sahara of Morocco [15], the prefactor between \(L_{\rm sat}\) and \(L_{\rm drag}\) can be computed and gives:
\[L_{\rm sat}\simeq 4.4\,\frac{\rho_{s}}{\rho_{f}}d. \tag{4}\]
This result is also consistent with Bagnold's data [23]. Note that the saturation length has never been measured directly in other situations (neither under water nor in high/low pressure wind tunnels). The linear stability analysis of the coupled differential equations of this framework has been performed in [14]. In particular, the wavenumber corresponding to the maximum growth rate is given by
\[k_{\rm max}\,L_{\rm sat}=X^{-1/3}-\frac{X^{1/3}}{3}\quad{\rm with}\quad X=\frac{3\tilde{B}}{\tilde{A}}\left[-3+\sqrt{3\left(1+(\tilde{A}/\tilde{B})^{2}\right)}\right]. \tag{5}\]
For typical values of \(\tilde{A}\) and \(\tilde{B}\) determined in the barchan dune context [15], we have \(\lambda_{\rm max}=2\pi/k_{\rm max}\sim 12\,L_{\rm sat}\). Note that \(A/B=\tilde{A}/\tilde{B}\), so that \(k_{\rm max}\,L_{\rm sat}\) is independent of \(\tau_{0}\) and \(\tau_{\rm th}\).
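The quantities that control the instability - the saturated flux of Eq. (2), the saturation length of Eq. (4) and the most unstable wavenumber of Eq. (5) - can be evaluated numerically as in the sketch below. The ratio \(\tilde{A}/\tilde{B}\) is left as a free parameter (the value fitted in ref. [15] is not reproduced here); the only claim encoded in the comments is the mathematical one that a ratio of order a few yields \(2\pi/k_{\rm max}\) of order ten \(L_{\rm sat}\).

```python
import numpy as np

def saturated_flux(tau0, tau_th, rho_s, d, g=9.81):
    """Saturated flux of Eq. (2): Q = 25 (tau0 - tau_th)/rho_s * sqrt(d/g) above threshold."""
    q = 25.0 * (np.asarray(tau0) - tau_th) / rho_s * np.sqrt(d / g)
    return np.where(np.asarray(tau0) > tau_th, q, 0.0)

def saturation_length(rho_s, rho_f, d):
    """Saturation length of Eq. (4): L_sat ~ 4.4 (rho_s/rho_f) d."""
    return 4.4 * rho_s / rho_f * d

def k_max_times_lsat(ratio):
    """Most unstable wavenumber of Eq. (5); `ratio` stands for A/B = Atilde/Btilde."""
    X = (3.0 / ratio) * (-3.0 + np.sqrt(3.0 * (1.0 + ratio ** 2)))
    return X ** (-1.0 / 3.0) - X ** (1.0 / 3.0) / 3.0

# For an illustrative ratio A/B of order 2 (the fitted value of ref. [15] is not
# reproduced here), 2 * np.pi / k_max_times_lsat(2.0) is close to 12 L_sat,
# consistent with the value quoted above.
```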
Using measurements in water (subaqueous ripples), in air (aeolian dunes) and in the CO\({}_{2}\) atmosphere of Mars and of the Venus wind tunnel, we now investigate the scaling relation between \(\lambda_{\rm max}\) and \(L_{\rm drag}\). To do so, we first need to resolve the controversy concerning the size of saltating grains on Mars.

Figure 3: Measurement of the typical grain size from the auto-correlation function \(C(\delta)\) of a photograph of the granular bed. **a** Sample of aeolian sand from the mega-barchan of Sidi-Aghfinir (Atlantic Sahara, Morocco), of typical diameter \(d\simeq 165\ \mu\)m. Top inset: Probability distribution function of the grain diameter \(d\), weighted in mass. The narrow distribution around the maximum of probability is characteristic of the aeolian sieving process. Bottom inset: \(200\times 200\ {\rm pix}^{2}\) zoom on a typical photograph on which the computation of the autocorrelation function has been performed. The image resolution is here \(5\ {\rm pix}/d\). **b** Sample of sand from Mars (at rover Spirit's landing site: \(16.6^{\circ}\) S, \(184.5^{\circ}\) W). Top inset: photograph showing the presence of millimetric grains as well as much smaller ones, which are expected to be the saltons. Bottom inset: same as **a**, but the typical image resolution is \(3\ {\rm pix}/d\).

## 2 Size of saltating grains on Mars

The determination of the typical size of the grains participating in saltation on martian dunes is a challenging issue. First, the dunes seem to evolve very slowly or may even have become completely static. Second, no sample of the matter composing the bulk of the dunes is available. Third, we do know from the observation of dunes on Earth that they can be covered by larger grains that do not participate in the transport in saltation. With the two rovers Opportunity and Spirit, we now have direct visualisations of the soil [13], and in particular of clear aeolian structures like ripples\({}^{2}\) or nabkhas\({}^{3}\). Unfortunately, these structures did not lie on the surface of a dune. We thus analyze the available photographs, assuming that, like on Earth, the size of the grains participating in saltation does not vary much from place to place.

Footnote 2: Contrary to dunes, aeolian sand ripples form by a screening instability related to geometrical effects [23, 26].

Footnote 3: As there is a reduction of pressure in the lee of any obstacle, sand accumulates in the form of streaks aligned in the direction opposite to the wind (shadow dunes). These structures are called nabkhas [27].

### Direct measurement of grain sizes

The photographs freely accessible online are not of sufficiently good resolution to determine the shape and the size of each grain composing the surface. Besides, one has to be careful when analyzing such pictures, as part of what is visible at the surface corresponds to grains just below it, partly hidden by their neighbours. We have thus specifically developed a method to determine the average grain size in zones where the grains are reasonably monodisperse, when the resolution is typically larger than 3 pixels per grain diameter. This measurement can be deduced from the computation of the auto-correlation function \(C(\delta)\) of the picture, which decreases typically over one grain size. More precisely, we proceed with the following procedure:

\(\bullet\) The zones covered by anything but sand (e.g. gravels or isolated larger grains) are localized and excluded from the analysis.
\\(\\bullet\\) Because in natural conditions the light is generally inhomogeneous, we perform a local smoothing of the picture with a gaussian kernel of radius \\(\\simeq 10\\ d\\). The resulting picture is subtracted from the initial one. \\(\\bullet\\) We compute the local standard deviation of this image difference with the same gaussian kernel and produce, after normalisation, a third picture \\({\\cal I}\\) of null local average and of standard deviation unity. \\(\\bullet\\) The auto-correlation function \\(C(\\delta)\\) is computed on this resulting picture,averaging over all directions: \\[C(\\delta)=\\frac{\\sum_{k,l/k^{2}+l^{2}=\\delta^{2}}\\sum_{i,j}\\mathcal{I}_{i,j} \\mathcal{I}_{i+k,j+l}}{\\sum_{k,l/k^{2}+l^{2}=\\delta^{2}}\\sum_{i,j}1} \\tag{6}\\] Figure 3a shows the curve \\(C(\\delta)\\) obtained from a series of photographs of aeolian sand sampled on a terrestrial dune. For all the resolutions used (between 1 and 5 pix/\\(d\\)), the data collapse on a single curve, which is thus characteristic of the sand sample. The top inset shows the distribution of size, weighted in mass, in log-log representation. It presents a narrow peak around the \\(d_{50}\\) value. The autocorrelation function decreases from 1 to 0 over a size of the order of \\(d_{50}\\) (Lorenzian fit). This is basically due to the fact that the color or the gray level at two points are only correlated if they are inside the same grain. It is thus reasonable to assume that the autocorrelation curve is a function of \\(\\delta/d_{50}\\) only. As a matter of fact, photographs of two samples of comparable polydispersity are similar once rescaled by the mean grain diameter. We will thus use the curve obtained with aeolian sand sampled on Earth as a reference to determine the size of Martian grains. Figure 3b shows the autocorrelation curve obtained from the colour image taken by the rover Spirit at its landing site. \\(C(\\delta)\\) decreases faster than the reference curve but by tuning the value of the martian \\(d_{50}\\), one can superimpose the Mars data with the solid line fairly well. From this picture as well as a series of gray level photographs taken by the rover Opportunity, we have estimated the diameter of the grains composing these aeolian formations to \\(87\\pm 25\\)\\(\\mu\\)m. The first measurement of martian grain sizes dates back to the Viking mission in the 70s. On the basis of thermal diffusion coefficient measurements, Edgett and Christensen have estimated the grain diameter to be around 500 \\(\\mu\\)m and at least larger than those composing dunes on Earth [12]. This size corresponds to larger grains than saltons such as those shown in the top inset of figure 3b. In agreement with our findings, wind tunnel experiments performed in'martian conditions' [28] have shown that grains around 100 \\(\\mu\\)m are the easiest to dislodge from the bed. We shall come back to this point later on. Several theoretical investigations of the size of aeolian martian grains have been conducted, starting with the work of Sagan and Bagnold [29]. In that paper the authors argue that, as Mars is a very arid planet, the cohesion forces due to humidity can be neglected and proposed a cohesion-free computation which predicts that very small grains (typically of one micron) may be put into saltation. Miller and Komar [30] also followed this cohesion-free approach and proposed static threshold curves with no turnup on small particle size. 
It was soon realized that cohesion forces can occur for reasons other than humidity - namely van der Waals forces - and several authors proposed (static) transport threshold curves with a peaked minimum around 100 \(\mu\)m [31, 32, 33, 34, 35]. However, in these papers, cohesion is treated in an empirical fashion, with the assumption that van der Waals forces lead to an attractive force proportional to the grain diameter, with a prefactor independent of \(d\). We have recomputed the Martian transport diagram (figure 4) using a new derivation of the transport thresholds. It takes into account the hysteresis between the static and dynamic thresholds, the effect of viscosity and - in a more rigorous way - the effect of cohesion. As this derivation, although substantial and original, is not the central purpose of the present paper, which is devoted to the test of the dune scaling law, we have developed and discussed it in the appendix. Here we only wish to discuss figure 4, which is useful to prove that the 87 \(\mu\)m sized grains can be transported in saltation.

The first striking feature of the Martian transport diagram is the huge hysteresis between the dynamic and static thresholds. Compared to aeolian transport on Earth (figure A.2 in the Appendix), for which the static threshold is typically 50% above the dynamic threshold, they are separated on Mars by a factor larger than 3. Quantitatively, while the static threshold is very high compared to the typical wind velocities on Mars (\(\sim\) 150 km/h at 2 m above the soil), the dynamical threshold is only of the order of \(\sim\) 45 km/h at 2 m. From images by the Mars Orbiter Camera (MOC) of reversing dust streaks, Jerolmack _et al._ [36] have estimated that modern surface winds can reach velocities as large as 150 km/h. Looking at figure 4, it can be seen that at such velocities most of the grains below 100 \(\mu\)m can be suspended, and that even millimetric grains can be entrained into saltation. Even in less stormy conditions, sand transport should not be as infrequent as one might expect, even though the large amplitude of the hysteresis implies an intermittency of sand transport. This suggests that Martian dunes are definitely active.

Figure 4: Diagram showing the mode of transport on Mars as a function of the grain diameter \(d\) and of the turbulent shear velocity \(u_{\rm th}\) (left) or of the wind speed \(U_{\rm th}\) at 2 m above the soil (right). Below the dynamical threshold (dashed line), no grain motion is observed. A grain at rest on the surface of the bed starts moving, dragged by the wind, when the velocity is above the static threshold (solid line). Between the dynamical and static thresholds, there is a zone of hysteresis where transport can be sustained by collision-induced ejections. The progressive transition from saltation to suspension, as wind fluctuations become more and more important, is indicated by the gradient from white to gray. See the appendix for more details on the derivation of this graph.
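The hysteresis read off figure 4 can be summarised by a trivial helper using the two orders of magnitude quoted above (dynamical threshold \(\sim\) 45 km/h and static threshold \(\sim\) 150 km/h at 2 m, for grains near 100 \(\mu\)m). This is only a restatement of the reading of the diagram, not the appendix derivation; the numbers are the rounded values quoted in the text.

```python
DYNAMIC_THRESHOLD_KMH = 45.0   # ~ value quoted in the text (wind speed at 2 m)
STATIC_THRESHOLD_KMH = 150.0   # ~ value quoted in the text (wind speed at 2 m)

def martian_transport_state(wind_kmh, saltation_already_active):
    """Reading of the hysteresis in figure 4 for ~100 um grains.

    Between the two thresholds, transport persists only if saltation is already
    established (it is then sustained by collision-induced ejections).
    """
    if wind_kmh >= STATIC_THRESHOLD_KMH:
        return "transport (grains dislodged directly by the wind)"
    if wind_kmh >= DYNAMIC_THRESHOLD_KMH:
        return "transport sustained" if saltation_already_active else "no transport"
    return "no transport"
```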
## 3 A dune wavelength scaling law Keeping in mind that the wavelength \\(\\lambda\\) that spontaneously appears when a flat sand bed is destabilized by a turbulent flow scales on the flux saturation length, the aim of the paper is to plot \\(\\lambda\\) against \\(L_{\\rm drag}\\) in the different situations mentioned in the introduction: aqueous ripples, waves on aeolian dunes, both on Earth and Mars, fresh snow dunes and microdunes in the Venus wind tunnel - numerical values of the parameters corresponding to these different situations are summarized in table 1. Our reference data point in figure 5 (aeolian dunes on Earth) has been obtained after an extensive work on barchan dunes in the Atlantic Sahara of Morocco. In a recent paper [15], we have shown that perturbations such as wind changes generate waves at the surface of dunes by the linear instability described above. Direct measurements of the wavelength are reported in Figure 6a in an histogram. They give \\(\\lambda\\sim 20\\) m for waves on the flanks of medium sized dunes (solid line) and \\(\\sim 28\\) m on the windward side of a mega-barchan (dashed line), where some pattern coarsening occurs. \\(L_{\\rm sat}\\sim 1.7\\) m \\begin{table} \\begin{tabular}{|c|c|c|c|c|c|} \\hline & Earth \\(\\delta\\) & Mars \\(\\sigma^{\\prime}\\) & water \\(\\divide\\) & snow \\(\\times\\) & ‘Venus’ \\(\\circ\\) \\\\ \\hline \\(g\\) (m/s\\({}^{2}\\)) & 9.8 & 3.7 & 9.8 & 9.8 & 9.8 \\\\ \\(\\lambda\\) & 20 m & 600 m & 2 cm & 15–25 m & 10–20 cm \\\\ \\(d\\) (\\(\\mu\\)m) & 165 – 185 & 87 & 150 & 1500 & 110 \\\\ \\(\\rho_{f}\\) (kg/m\\({}^{3}\\)) & 1.2 & 1.5 – 2.2 10\\({}^{-2}\\) & 10\\({}^{3}\\) & 1.2 & 61 \\\\ \\(\\rho_{s}\\) (kg/m\\({}^{3}\\)) & 2650 & 3000 & 2650 & 360 & 2650 \\\\ \\(\ u\\) (m\\({}^{2}\\)/s) & 1.5 10\\({}^{-5}\\) & 6.3 10\\({}^{-4}\\) & 10\\({}^{-6}\\) & 1.5 10\\({}^{-5}\\) & 2.5 10\\({}^{-7}\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: Comparison of different quantities (gravity \\(g\\), initial wavelength of bed instability \\(\\lambda\\), diameter of saltons \\(d\\) as well as fluid and sediment densities \\(\\rho_{f}\\) and \\(\\rho_{s}\\)) in the air (Earth), in the martian and Venus wind tunnel CO\\({}_{2}\\) atmospheres and in water. As the temperature at the surface of Mars can vary by an amplitude of typically 100 K between warm days and cold nights, the density of the atmosphere displays some variation range. Figure 5: Average wavelength as a function of the drag length. The two black squares (resp. diamonds) labeled ‘Earth’ (\\(\\updelta\\)) (resp. ‘Mars’ (\\(\\upsigma\\))) come from the two different histograms of figure 6a (resp. 6b). The three white circles are the under water (\\(\\upkappa\\)) Coleman and Melville’s data [5, 6], whereas the black circle is that from Hersen _et al._’s experiments [7, 8]. The white triangle has been computed from the Venus (\\(\\upsigma\\)) wind tunnel study [9]. Snow (\\(\\times\\)) dune photos (see figure 8) have been calibrated and complemented with data from [10, 11, 37], and give the white squares. Figure 6: Histograms of the wavelength measured in the Atlantic Sahara (Morocco) and in Mars southern hemisphere (mainly, but not only, in the region 320–350\\({}^{\\circ}\\)W, 45–55\\({}^{\\circ}\\)S). **(a)** Earth (\\(\\updelta\\)). 
Solid line: wavelengths systematically measured on barchan dunes located in a 20 km \\(\\times\\) 8 km zone; Dash line: wavelengths measured on the windward side of a mega-barchan whose surface is permanently corrugated and where some coarsening occurs. **(b)** Mars (\\(\\upsigma\\)). Solid line: wavelengths measured on dunes in several craters (e.g. Rabe, Russell, Kaiser, Proctor, Hellespontus); Dash line: histogram restricted to Kaiser crater (341\\({}^{\\circ}\\)W, 47\\({}^{\\circ}\\)S). Averaged values of \\(\\lambda\\) are 20 m (solid line on panel a), 28 m (dash line on panel a), 510 m (solid line on panel b) and 606 m (dash line on panel b). Figure 8: Fresh snow aeolian dunes on ice. **(a)** Transverse dunes on the iced baltic sea (credits Bertil Håansson, Swedish Meteorological and Hydrographical Institute/Baltic Air-Sea-Ice Study). The wavelength is around 15 m [37]. **(b)** Snow barchan field in Antartica (credits Stephen Price, Department of Earth and Space Sciences, University of Washington). The shadow of the Twin Otters, which has has a wing span of 19.8 m and a length of 15.7 m, gives the scale. In both pictures, the perspective has been corrected to produce a pseudo-aerial view. Figure 7: Comparison of dune morphologies on Earth and on Mars. **a** Mega-barchan in Atlantic Sahara, Morocco. **b** ‘Kaiser’ crater on Mars (341\\({}^{\\circ}\\)W,47\\({}^{\\circ}\\)S). **c** Closer view of the Kaiser crater dunes. As on Earth, small barchans are visible on the side of the field. was measured independently in the same dune field, and these aeolian grains of 180 \\(\\mu\\)m lead to a drag length of 40 cm. The study of small scale barchans under water [7, 8] provides the second data point: measurements give \\(\\lambda\\sim 2\\) cm with glass beads of size \\(d=150\\)\\(\\mu\\)m, a size which leads to a drag length of 400 \\(\\mu\\)m. This point is the only one in the literature for which it is specified that: (i) the wavelength has been measured during the linear stage of the instability; (ii) the sand bed destabilizes in a homogeneous way and not starting at the entrance of the set-up, due to strong disturbances there; (iii) the height of the tank is much larger than the wavelength. We have added three other underwater points in figure 5, from Coleman and Melville's data [5, 6].The slight discrepancy with the straight line (these points are all above the line) can be explained by the three points risen above: it may be induced from [5] that (i) the initial stage is not clearly resolved; (ii) the waves seem to nucleate on defects as if the transition was sub-critical; (iii) the flume is only 0.28 m deep, a height which is comparable to the observed wavelengths. Recent photos of martian dunes [3] such as that in figure 7 lead to an estimate of the value of \\(\\lambda\\sim 600\\) m on Mars. We focused on the dunes found in craters of the southern hemisphere. For comparison, we have measured wavelengths on dunes in several craters (e.g. Rabe, Russell, Kaiser, Proctor, Hellespontus) and also produced a histogram restricted to Kaiser crater, see figure 6b. As for the drag length, we used the value for the grain diameter estimated in the previous section. The density of martian grains is similar to or slightly higher [36] than that of terrestrial grains. The value of the density of the martian atmosphere however varies within some range as the temperature on Mars surface can change by an amplitude of typically 100 K between warm days and cold nights. 
In the end, it gives a value of \\(L_{\\rm drag}\\) between 13 and 17 m. The fourth data point is obtained from Greeeley et al.'s experiment in a high pressure CO2 wind tunnel [9], which gives for \\(L_{\\rm drag}\\) a value we expect on Venus. The observed wavelength first decreases from 18 cm to 8 cm with the wind speed from \\(u_{*}=u_{\\rm th}\\) to \\(u_{*}=1.6\\)\\(u_{\\rm th}\\) and then increases up to 27 cm at \\(u_{*}=2.1\\)\\(u_{\\rm th}\\). Above this value, the flat sand bed was again found to be stable. Although some of these features are reminiscent of the \\(L_{\\rm sat}\\) curve discussed above (figure 2), this series of experiments is also questionable: nothing is reported about the nature of the destabilization (homogeneous appearance of the pattern or not) nor on the maturity of the pattern when the wavelength was measured (coarsening?). Remarkably, the crude averaged of the provided data lies on the master curve, even though the smallest wavelength is less than half the predicted value. As for the Venus wind tunnel, the fifth data points are not very precise and correspond to fresh snow dunes formed on Antartic sea ice and on Baltic sea ice. The typical wavelength \\(\\lambda\\) of transverse dunes (for instance in figure 8**(a)** ranges from 15 m to 25 m [37]. Whenever barchan dunes form, they are typically 5 m to 10 m long and are separated by \\(\\sim 20\\) m (see figure 8**(b)**. This is very similar with aeolian ones, although snow barchan dunes look more elongated. The determination of \\(L_{\\rm drag}\\) is more problematic as the snow density rapidly increases once fallen. Snow dunes are rather rare and probably form only at the surface of ice by strong wind, when the snow can remain fresh and not very cohesive. To get the numbers, we have used the measurements performed close to the Antartic dunes by Massom _et al._[10, 11]. Although one may find somehow disappointing that on the graph of figure 5 these data points are located very close to the aeolian sand dunes, they are particularly interesting as they show that one can keep the same \\(\\lambda\\) by changing simultaneously \\(\\rho_{s}\\) and \\(d\\) in a compensating way. Globally, we obtain a consistent scaling law \\(\\lambda\\simeq 53L_{\\rm drag}\\) that covers almost five decades (figure 5). We wish to emphasize again that we do not claim to capture all the dependences on this plot, but that \\(L_{\\rm drag}\\) is the dominant scaling factor. As shown in figure 2, we expect subdominant dependences on the wind speed, on finite size effects, etc Note that this analysis explains for example why martian dunes whose sizes are of the order of a kilometer are not 'complex' or 'compound' as their equivalents on Earth [38]. Following our scaling relation, they correspond to the small terrestrial dunes and the whole dune field on the floor of a crater should rather be considered as a complex martian dune. A'similarity law' for the size of (developed) dunes with \\(L_{\\rm sat}\\) was schematically drawn by Kroy _et al._[39], supported by the existence of centimetric barchans under water [7]. However, this similarity was announced to fail with Martian dunes. In fact, the scaling of large dunes (i.e. whose sizes are much larger than the elementary length \\(\\lambda\\)) with \\(L_{\\rm drag}\\) (or \\(L_{\\rm sat}\\)) is far from obvious as the size selection of barchans involves secondary instabilities related to collisions and fluctuations of wind direction [15, 40, 41]. 
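As a consistency check of the master curve, the scaling law \(\lambda\simeq 53\,L_{\rm drag}\) can be evaluated with the parameter values of Table 1. The prefactor 53 and the parameters are those quoted in the text; the sub-dominant dependences (wind speed, finite-size effects) discussed above are deliberately ignored in this sketch.

```python
# (d [m], rho_f values [kg/m^3], rho_s [kg/m^3]) taken from Table 1.
CASES = {
    "Earth, aeolian sand": (165e-6, (1.2,), 2650.0),
    "Mars (atmospheric density range)": (87e-6, (1.5e-2, 2.2e-2), 3000.0),
    "water, subaqueous ripples": (150e-6, (1.0e3,), 2650.0),
    "fresh snow on ice": (1500e-6, (1.2,), 360.0),
    "'Venus' wind tunnel": (110e-6, (61.0,), 2650.0),
}

def drag_length(d, rho_f, rho_s):
    return rho_s / rho_f * d

def predicted_wavelength(d, rho_f, rho_s, prefactor=53.0):
    """Scaling law of figure 5: lambda ~ 53 L_drag (dominant dependence only)."""
    return prefactor * drag_length(d, rho_f, rho_s)

for name, (d, rho_f_values, rho_s) in CASES.items():
    lams = ", ".join(f"{predicted_wavelength(d, rf, rho_s):.2g} m" for rf in rho_f_values)
    print(f"{name}: predicted wavelength ~ {lams}")
```

For Mars, this gives an \(L_{\rm drag}\) of order 15 m and hence a predicted wavelength of several hundred metres, in line with the measured \(\sim 600\) m; the other cases can be compared with the observed values collected in Table 1.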
Finally, we would like to end this section with the prediction of the wavelength at which a flat sand bed should destabilize on Titan, where (longitudinal or seif) dunes have recently been discovered [42]. The atmosphere there is approximately four times denser than on Earth, and the grains are believed to be made of water ice [43]. Computing the threshold curves (not shown), one observes that the dynamical and static thresholds are almost identical and that the minimum of the threshold shear stress is reached for 160 \(\mu\)m. Using this size for the grain diameter and a density ratio of the order of 200, we find a centimetric drag length. Following the scaling law of figure 5, this would lead to a dune wavelength between 1 and 2 meters. Note that, unfortunately, this is well below the resolution of the Cassini radar.

## 4 Discussion: time scales and velocity for the martian dunes

Now that we have this unique length scale at hand for dunes, it is interesting to address the question of the corresponding growth time scales and propagation velocities. The linear stability analysis [14, 15] shows that the growth rate \(\sigma\) and the propagation velocity \(c\) of bedforms are related to the wavenumber \(k=2\pi/\lambda\) as
\[\sigma(k)=Qk^{2}\,\frac{\tilde{B}-\tilde{A}kL_{\rm sat}}{1+(kL_{\rm sat})^{2}}\qquad\mbox{and}\qquad c(k)=Qk\,\frac{\tilde{A}+\tilde{B}kL_{\rm sat}}{1+(kL_{\rm sat})^{2}}, \tag{7}\]
where \(Q\) is the saturated sand flux - assumed constant - over a flat bed. In order to make these relations quantitative for the bedforms on Earth and Mars, we need an effective time-averaged flux \(\bar{Q}\). Recall from equation (2) that \(Q\) can be related to the shear velocity \(u_{*}\). For simplicity, we suppose that there is sand transport (\(u_{*}>u_{\rm th}\)) a fraction \(\eta\) of the time, and that the shear stress is then constant and equal to \((1+\alpha)\rho_{f}u_{\rm th}^{2}\). The time-averaged sand flux can thus be effectively expressed as
\[\bar{Q}=25\,\alpha\,\eta\,\sqrt{\frac{d}{g}}\,\frac{\rho_{f}}{\rho_{s}}\,u_{\rm th}^{2}. \tag{8}\]
The values of the coefficients \(\tilde{A}\) and \(\tilde{B}\) which enter equation (7) depend on the excess of shear above the threshold as \(\tilde{A}/A=\tilde{B}/B=(1+\alpha)/\alpha\). The timescale over which an instability develops is that of the most unstable mode. This means that \(\sigma\) and \(c\) should be evaluated at \(k=k_{\rm max}\) (see equation (5)) and thus scale as:
\[\sigma\propto\frac{\bar{Q}}{L_{\rm sat}^{2}}\propto(1+\alpha)\,\eta\,\left(\frac{\rho_{f}}{\rho_{s}}\right)^{3}\frac{u_{\rm th}^{2}}{g^{1/2}d^{3/2}}, \tag{9}\]
\[c\propto\frac{\bar{Q}}{L_{\rm sat}}\propto(1+\alpha)\,\eta\,\left(\frac{\rho_{f}}{\rho_{s}}\right)^{2}\frac{u_{\rm th}^{2}}{g^{1/2}d^{1/2}}. \tag{10}\]
Using meteorological data and measurements of the average dune velocities, we estimated that \(Q_{\updelta}\) is between 60 and 90 m\({}^{2}\)/year [15] and \(\eta_{\updelta}\) between 65% and 85%. This gives an effective value \(\alpha_{\updelta}\) between 1.5 and 2. Calculating explicitly the prefactors, we get a growth time \(\sigma_{\updelta}^{-1}\sim 2\) weeks and \(c_{\updelta}\sim 200\) m/year, values which are consistent with direct observation. Note that time scales for individual fully developed dunes depend on their size, but are also proportional to \(1/\bar{Q}\) [40].
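Equations (7) and (8) translate directly into the short sketch below. With \(\alpha\approx 1.5\)-2 and \(\eta\approx 0.65\)-0.85 as estimated above, and a threshold shear velocity of the order of 0.2 m/s (the terrestrial value quoted in the next paragraph), the effective flux comes out at a few tens of m\({}^{2}\) per year, consistent with the 60-90 m\({}^{2}\)/year estimate. This is an illustrative sketch, not the calculation of ref. [15]; gravity and the other parameters are arguments, so the same functions apply to Mars.

```python
import numpy as np

def effective_flux(alpha, eta, d, rho_f, rho_s, u_th, g=9.81):
    """Time-averaged flux of Eq. (8): Qbar = 25 alpha eta sqrt(d/g) (rho_f/rho_s) u_th^2."""
    return 25.0 * alpha * eta * np.sqrt(d / g) * (rho_f / rho_s) * u_th ** 2

def growth_rate_and_speed(k, Q, l_sat, A_tilde, B_tilde):
    """Growth rate sigma(k) and propagation velocity c(k) of Eq. (7)."""
    kl = k * l_sat
    sigma = Q * k ** 2 * (B_tilde - A_tilde * kl) / (1.0 + kl ** 2)
    c = Q * k * (A_tilde + B_tilde * kl) / (1.0 + kl ** 2)
    return sigma, c

# Illustrative terrestrial estimate (alpha, eta as quoted in the text, u_th ~ 0.2 m/s):
# effective_flux(1.75, 0.75, 165e-6, 1.2, 2650, 0.2) * 3.15e7  ->  a few tens of m^2/year.
```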
In order to compare these terrestrial values to those on Mars, we need to estimate the different factors which come into the expressions (9) and (10). Recall that, on Earth, for grains of 165 \\(\\mu\\)m, we have \\(u_{\\rm 4th}\\sim 0.2\\) m/s. The corresponding value for the 87 \\(\\mu\\)m sized Martian grains can be found on figure 4 and reads \\(u_{\\sigma\\sf{th}}\\sim 0.64\\) m/s. The Mars to Earth ratios for the fluid and grain densities, as well as the gravity acceleration are known. As discussed in the calculation of the transport thresholds (see Appendix) one expects that \\(\\alpha_{\\sigma^{\\prime}}\\) is of order unity and for simplicity we simply take \\(\\alpha_{\\sigma^{\\prime}}=\\alpha_{\\delta}\\). Finally, the most speculative part naturally concerns the value of \\(\\eta_{\\sigma^{\\prime}}\\). Assuming that the winds on Mars are similar to those on Earth, we computed on our wind data from Atlantic Sahara the fraction of time during which \\(u_{*}>0.64\\) m/s and got \\(\\sim 3\\%\\). This value corresponds to few days per year and is probably realistic as, besides, the soil of Mars is frozen during the winter season. With these numerical values, we find that \\(\\sigma_{\\sigma^{\\prime}}\\) is smaller than \\(\\sigma_{\\delta}\\) by more than five decades. In other words, the typical time over which we could see a significant evolution of the bedforms on martian dunes is of the order of tens to hundreds centuries. Similarly, the ratio \\(c_{\\sigma^{\\prime}}/c_{\\delta}\\) is of the order of \\(10^{-4}\\). Note that these Mars to Earth time and velocity ratios are proportional to \\(\\eta_{\\sigma^{\\prime}}\\), so that larger or smaller values for this speculative parameter do not change dramatically the conclusions, the main contribution being the density ratio to the power 2 or 3 (see equations (9-10)). Therefore, as some satellite high resolution pictures definitively show some evidence of aeolian activity - e.g. avalanche outlines - it may well be that the martian dunes are fully active but not significantly at the the human scale. ## 5 Conclusion As for the last section, we would like to conclude the paper with a summary of the status of the numerous hypothesis and facts we have mixed and discussed. Although coming from recent theoretical works, it is now well accepted that the wavelength \\(\\lambda\\) at which dunes form from a flat sand bed is governed by the so-called saturation length \\(L_{\\rm sat}\\). However, the dependences of \\(L_{\\rm sat}\\) with the numerous control parameters (Shields number, grain and flow Reynolds numbers, Galileo number, grain to fluid density ratio, finite size effects, etc) is still a matter of debate. In this paper, we have collected measurements of \\(\\lambda\\) in various situations that where previously thought of as disconnected: subaqueous ripples, micro-dunes in Venus wind tunnel, fresh snow dunes, aeolian dunes on Earth and dunes on Mars. We show that the averaged wavelength (and thus \\(L_{\\rm sat}\\)) is proportional to the grain size times the grain to fluid density ratio. This does not preclude sub-dominant dependencies of \\(L_{\\rm sat}\\) with the other dimensionless parameters. In each situation listed above, we have faced specific difficulties. _Subaqueous ripples --_ The transport mechanism under water, namely the direct entrainment by the fluid, is different from the four other situations. The saturation process could thus be very different in this case. 
The longest relaxation length could for instance be controlled by the grain sedimentation, as recently suggested by Charru [25]. As there is also a negative feedback of transport on the flow, the destabilization wavelength would be larger than our prediction. The formation of subaqueous ripples has remained controversial mostly because the experimental data are very dispersed. Most of them suffer from finite size effects (water depth or friction on lateral walls), from uncontrolled entrance conditions leading to an inhomogeneous destabilization, or from a late determination of the wavelength after a pattern coarsening period. For the furthest left data point only, we are certain that all these problems were avoided.

_'Venus' micro-dunes --_ This beautiful experiment gives the most dispersed data of the plot. Part of this may be due to the defects listed just above. The complex variation of the wavelength with the wind speed observed experimentally could be interpreted as a sub-dominant dependence of the saturation length on the Shields number. However, not only the size of the dunes changes but also their shape. This is definitely the signature of a second length scale at work.

_Fresh snow dunes --_ On the basis of existing photographs, we have clearly identified snow dune patterns resembling those of the aeolian sand destabilization. They form under strong wind, on an icy substrate. The obvious difficulty is that the precise state of the snow flakes during the dune formation involves complex thermodynamical processes that do not exist in the other cases.

_Aeolian dunes on Earth --_ Although the instability takes place in the field under varying wind conditions, the resulting wavelength is very robust from place to place and from time to time. We consider this point as the reference one in the plot.

_Martian dunes --_ The specific difficulty of the Martian case does not come from the wavelength measurements, as the available photographs are well resolved in space, but rather from the prior determination of the diameter \\(d\\) of the grains involved. The controversy comes from early determinations based on thermal diffusivity measurements, concluding that the dunes are covered by large (500 \\(\\mu\\)m) grains. Such large grains would lead to a significant shift of the data point towards the right of figure 5. A large part of this paper is consequently devoted to an independent determination of \\(d\\), based on the analysis of the Martian rover photographs. Our determination is very different: \\(87\\pm 25\\)\\(\\mu\\)m. We have computed the sand transport phase diagram with more subtlety than in the available literature (including in particular hysteresis and cohesion) and shown that this size corresponds to saltating grains. A directly related controversial point was the state of the Martian dunes (still active or fossil). We have shown that moderately large winds can transport the grains (the dynamical threshold is much lower than the static one) but that the characteristic time scale over which dunes form is five orders of magnitude larger than on Earth.

The Moroccan wavelength histogram has been obtained in collaboration with Hicham Elbelrhiti. We thank Eric Clement and Evelyne Kolb for useful advice on the way cohesion forces can be estimated. We thank Francois Charru and Douglas Jerolmack for discussions. We thank Brad Murray for a careful reading of the manuscript. Part of this work is based on lectures given during the granular session of the Institut Henri Poincare in 2005.
This study was financially supported by an 'ACI Jeunes Chercheurs' of the French ministry of research.

## Appendix A. A model for transport thresholds, including cohesion

We have directly measured the grain diameter (\\(87\\pm 25\\)\\(\\mu\\)m) in Martian ripples photographed by the rovers. We then made the double assumption that (i) the grains composing the martian dunes are of the same size and that (ii) these grains participate in saltation transport whenever the wind is sufficiently strong. In order to support hypothesis (ii), we have computed the transport thresholds in the Martian conditions. We find that the grains which move first for an increasing wind are around 65 \\(\\mu\\)m in diameter. The full discussion of the transport thresholds is however too lengthy to be incorporated into the body of the article, which is mainly devoted to the dune wavelength scaling law. We give below a short but self-sufficient derivation of the scaling laws for the static and dynamic transport thresholds.

### Definition of transport thresholds, hysteresis

We consider the generic case of a fluid boundary layer over a flat bed composed of identical sand grains. For given grains and surrounding fluid, the shear stress \\(\\tau\\) controls the sand transport. The dominant mechanism for grain erosion depends on the sand to fluid density ratio. In dense fluids grains are directly entrained by the flow, whereas in low density fluids grains are mostly splashed up by other grains impacting the sand bed. Two thresholds are associated with these two mechanisms: starting from a purely static sand bed, the first grain is dragged from the bed and brought into motion at the static threshold \\(\\tau_{\\rm sta}\\). Once sand transport is established, it can be sustained by the collision/ejection processes down to a second threshold \\(\\tau_{\\rm dyn}\\). Sand transport thus presents a hysteresis - responsible for instance for the formation of streamers. Finally, when the fluid velocity becomes sufficiently high, more and more grains remain in suspension in the flow, trapped in large velocity fluctuations. Although there is no precise threshold associated with suspension, one expects that these fluctuations become dominant for the grain trajectories when the shear velocity \\(u_{*}=\\sqrt{\\tau/\\rho_{f}}\\) is much larger than the grain sedimentation velocity \\(u_{\\rm fall}\\). The behaviour of these thresholds in the Martian atmosphere, as well as in water and air, with respect to the grain diameter is summarized in figures 4 and A.2, and we shall now discuss how to compute these curves.

### An analytic expression for the turbulent boundary layer velocity profile

In the dune context, the flow is generically turbulent far from the sand bed and the velocity profile is known to be logarithmic. However, our problem is more complicated as we have to relate the velocity \\(u\\) of the flow around the sand grains (i.e. the velocity at the altitude, say, \\(z=d/2\\), that we call \\(v\\) in the following) to the shear velocity \\(u_{*}\\). Depending on the grain-based Reynolds number, there either exists a viscous sub-layer between the soil and the fully turbulent zone, or the momentum transfer is directly due to the fluctuations induced by the soil roughness. The first step is thus to derive an expression for the turbulent boundary layer wind profile valid in the two regimes, far from or close to the bed.
To do so, we express the shear stress as the sum of a viscous and a turbulent part as \\[\\tau=\\rho_{f}\\nu\\partial_{z}u+\\kappa^{2}(z+rd)^{2}\\rho_{f}|\\partial_{z}u|\\partial_{z}u,\\] (A.1) where \\(\\nu\\) is the kinematic viscosity of the fluid, \\(\\kappa\\) the von Karman constant and \\(r\\) the aerodynamic roughness rescaled by the grain diameter \\(d\\). The shear stress is constant all through the turbulent boundary layer and equal to \\(\\rho_{f}u_{*}^{2}\\). Let us define the grain Reynolds number as: \\[Re_{0}=\\frac{2\\kappa u_{*}rd}{\\nu}\\,,\\] (A.2) and a non-dimensional distance to the bed: \\[Z=1+\\frac{z}{rd}.\\] (A.3) Note that \\(Z\\) is equal to 1 on the sand bed and that it is equal to \\(1+1/2r\\) at the center of the grain. With these notations, the differential equation (A.1) can be rewritten under the form: \\[Z^{2}\\left(\\frac{d(\\kappa u/u_{*})}{dZ}\\right)^{2}+\\frac{2}{Re_{0}}\\left(\\frac{d(\\kappa u/u_{*})}{dZ}\\right)-1=0,\\] (A.4) which is easily integrated into: \\[u=\\frac{u_{*}}{\\kappa}\\left[\\sinh^{-1}(Re_{0}\\bar{Z})+\\frac{1-\\sqrt{1+Re_{0}^{2}\\bar{Z}^{2}}}{Re_{0}\\bar{Z}}\\right]_{\\bar{Z}=1}^{\\bar{Z}=Z}.\\] (A.5) Performing expansions in the purely viscous and turbulent regimes, one can show that a good approximation of the relation between the shear velocity \\(u_{*}\\) and the typical velocity \\(v\\) around the grain is given by \\[u_{*}^{2}=\\frac{2\\nu}{d}v+\\frac{\\kappa^{2}}{\\ln^{2}(1+1/2r)}v^{2}.\\] (A.6)

### Static threshold: influence of Reynolds number

The bed load (traction) threshold is directly related to the fact that the grains are trapped in the potential holes created by the grains at the sand bed surface. To get the scaling laws, the simplest geometry to consider is a single spherical grain jammed between the two neighbouring (fixed) grains below it, see figure A.1a. Let us first discuss the situation in which the cohesive forces between the grains are negligible and the friction at the contacts is sufficient to prevent sliding. It can be inferred from figure A.1a that the loss of equilibrium occurs for a value of the driving force \\(F\\) proportional to the submerged weight of the grain: \\(F\\propto(\\rho_{s}-\\rho_{f})gd^{3}\\), where \\(\\rho_{s}\\) is the mass density of the sand grain, and \\(\\rho_{f}\\) is that of the fluid - which is negligible with respect to \\(\\rho_{s}\\) in the case of aeolian transport. As \\(F\\) is proportional to the shear force \\(\\tau d^{2}\\) exerted by the fluid, the non-dimensional parameter controlling the onset of motion is the Shields number, which characterises the ratio of the driving shear stress to the normal stress: \\[\\Theta=\\frac{\\tau}{(\\rho_{s}-\\rho_{f})gd}.\\] (A.7) The threshold value can be estimated from the geometry of the piling, and depends on whether rolling or lifting is the mechanism which makes the grain move. Finally, the local slope of the bed modifies the threshold value as traps between the grains are less deep when the bed is inclined. In particular, its value must vanish as the bed slope approaches the (tangent of the) avalanche angle. We here ignore these refinements, which can be incorporated into the values of \\(A\\) and \\(B\\) (see section 1).
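As a concreteness check, the following sketch (Python/NumPy; an illustration, not the authors' code) evaluates the wind profile (A.5) and inverts the approximate grain-scale relation (A.6). The von Karman constant \\(\\kappa=0.4\\), the air viscosity and the 165 \\(\\mu\\)m grain size are assumed illustrative values; \\(r=1/10\\) anticipates the calibrated roughness quoted at the end of this appendix.

```python
import numpy as np

KAPPA = 0.4   # von Karman constant (assumed value)

def profile_term(Zbar, Re0):
    """Bracketed term in equation (A.5), to be evaluated between Zbar = 1 and Zbar = Z."""
    return np.arcsinh(Re0 * Zbar) + (1.0 - np.sqrt(1.0 + (Re0 * Zbar)**2)) / (Re0 * Zbar)

def wind_profile(z, u_star, d, nu, r=0.1):
    """Velocity u(z) from equation (A.5); valid in both the viscous and rough regimes."""
    Re0 = 2.0 * KAPPA * u_star * r * d / nu        # grain Reynolds number, equation (A.2)
    Z = 1.0 + z / (r * d)                          # rescaled distance to the bed, equation (A.3)
    return (u_star / KAPPA) * (profile_term(Z, Re0) - profile_term(1.0, Re0))

def grain_scale_velocity(u_star, d, nu, r=0.1):
    """Invert the approximate relation (A.6) to get the velocity v felt by a surface grain."""
    a = KAPPA**2 / np.log(1.0 + 1.0 / (2.0 * r))**2
    b = 2.0 * nu / d
    return (-b + np.sqrt(b**2 + 4.0 * a * u_star**2)) / (2.0 * a)

# Illustrative terrestrial numbers (assumed): 165 micron sand, air viscosity 1.5e-5 m^2/s
print(grain_scale_velocity(u_star=0.2, d=165e-6, nu=1.5e-5))
print(wind_profile(z=1.0, u_star=0.2, d=165e-6, nu=1.5e-5))
```

With these inputs the profile recovers the usual logarithmic behaviour far from the bed (a few m/s at 1 m height for \\(u_{*}=0.2\\) m/s), while the grain-scale velocity remains of the order of \\(u_{*}\\) itself.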
At the threshold, the horizontal force balance on a grain of the bed reads \\[\\frac{\\pi}{6}\\mu(\\rho_{s}-\\rho_{f})gd^{3}=\\frac{\\pi}{8}C_{d}\\rho_{f}v_{\\rm th}^{2}d^{2},\\] (A.8) where \\(\\mu\\) is a friction coefficient, and \\(C_{d}\\) the drag coefficient, which is a function of the grain Reynolds number.

Figure A.1: **a** Grain packing geometry considered for the computation of the static transport thresholds. **b** Schematic drawing showing the contact between grains at the micron scale.

With a good accuracy, the drag law for natural grains can be put under the form: \\[C_{d}=\\left(C_{\\infty}^{1/2}+s\\sqrt{\\frac{\\nu}{vd}}\\right)^{2}\\] (A.9) with \\(C_{\\infty}\\simeq 1\\) and \\(s\\simeq 5\\) for natural sand grains [44]. At this stage, we introduce the viscous size \\(d_{\\nu}\\), defined as the diameter of grains whose free fall Reynolds number \\(u_{\\rm fall}d/\\nu\\) is unity: \\[d_{\\nu}=(\\rho_{s}/\\rho_{f}-1)^{-1/3}\\;\\nu^{2/3}\\;g^{-1/3}.\\] (A.10) It corresponds to a grain size at which viscous and gravity effects are of the same order of magnitude. We define \\(\\tilde{v}\\) as the fluid velocity at the scale of the grain normalised by \\((\\rho_{s}/\\rho_{f}-1)^{1/2}(gd)^{1/2}\\). From the three previous relations, we get the equation for \\(\\tilde{v}_{\\rm th}\\): \\[C_{\\infty}^{1/2}\\tilde{v}_{\\rm th}+s\\left(\\frac{d_{\\nu}}{d}\\right)^{3/4}\\tilde{v}_{\\rm th}^{1/2}-\\left(\\frac{4\\mu}{3}\\right)^{1/2}=0,\\] (A.11) which solves into \\[\\tilde{v}_{\\rm th}=\\frac{1}{4C_{\\infty}}\\left[\\left(s^{2}\\left(\\frac{d_{\\nu}}{d}\\right)^{3/2}+8\\left(\\frac{\\mu C_{\\infty}}{3}\\right)^{1/2}\\right)^{1/2}-s\\left(\\frac{d_{\\nu}}{d}\\right)^{3/4}\\right]^{2}.\\] (A.12) The expression of the static threshold Shields number is finally: \\[\\Theta_{\\rm th}^{\\infty}=2\\left(\\frac{d_{\\nu}}{d}\\right)^{3/2}\\tilde{v}_{\\rm th}+\\frac{\\kappa^{2}}{\\ln^{2}(1+1/2r)}\\tilde{v}_{\\rm th}^{2}.\\] (A.13)

### Static threshold: influence of cohesion

For small grains, the cohesion of the material strongly increases the static threshold shear stress. Evaluating the adhesion force between grains is a difficult problem in itself. We consider here two grains at the limit of separation and we assume that the multi-contact surface between the grains has been created with a maximum normal load \\(N_{\\rm max}\\), see figure A.1b. The adhesion force \\(N_{\\rm adh}\\) can be expressed as an effective surface tension \\(\\tilde{\\gamma}\\) times the radius of curvature of the contact, which is assumed to scale on the grain diameter: \\[N_{\\rm adh}\\propto\\tilde{\\gamma}d.\\] (A.14) This effective surface tension is much smaller than the actual one, \\(\\gamma\\), as the real area of contact \\(A_{\\rm real}\\) is much smaller than the apparent one \\(A_{\\rm Hertz}\\): \\[\\tilde{\\gamma}=\\gamma\\,\\frac{A_{\\rm real}}{A_{\\rm Hertz}}\\,.\\] (A.15) The apparent area of contact can be computed following the Hertz law for two spheres in contact under a load \\(N_{\\rm max}\\): \\[A_{\\rm Hertz}\\propto\\left(\\frac{N_{\\rm max}d}{E}\\right)^{2/3},\\] (A.16) where \\(E\\) is the Young modulus of the grain [45]. To express the real area of contact, we need to know whether the micro-contacts are in an elastic or a plastic regime. Within a good approximation, \\(A_{\\rm real}\\) can be expressed in both cases [46] as: \\[A_{\\rm real}\\propto\\frac{N_{\\rm max}}{M},\\] (A.17) where \\(M\\) is the Young modulus \\(E\\) (elastic regime) or the hardness \\(H\\) (plastic regime) of the material.
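Before cohesion is added in the next paragraphs, the cohesionless static threshold defined by equations (A.10)-(A.13) can already be evaluated numerically. The sketch below is a minimal illustration under assumed terrestrial values (quartz density, air viscosity, \\(\\mu=\\tan 32^{\\circ}\\), \\(r=1/10\\)); it is not the calibrated computation used for figures 4 and A.2.

```python
import numpy as np

KAPPA = 0.4          # von Karman constant (assumed value)
C_INF, S = 1.0, 5.0  # drag-law constants for natural grains, equation (A.9)

def static_shields(d, rho_s, rho_f, nu, g, mu=np.tan(np.radians(32.0)), r=0.1):
    """Cohesionless static threshold Shields number, equations (A.10)-(A.13)."""
    d_nu = (rho_s / rho_f - 1.0)**(-1.0 / 3.0) * nu**(2.0 / 3.0) * g**(-1.0 / 3.0)        # (A.10)
    x = (d_nu / d)**0.75
    v_th = (np.sqrt(S**2 * x**2 + 8.0 * np.sqrt(mu * C_INF / 3.0)) - S * x)**2 / (4.0 * C_INF)   # (A.12)
    return 2.0 * (d_nu / d)**1.5 * v_th + KAPPA**2 / np.log(1.0 + 1.0 / (2.0 * r))**2 * v_th**2  # (A.13)

def threshold_shear_velocity(d, rho_s, rho_f, nu, g):
    """Convert the Shields number back into a threshold shear velocity u_* (equation A.7)."""
    theta = static_shields(d, rho_s, rho_f, nu, g)
    return np.sqrt(theta * (rho_s - rho_f) * g * d / rho_f)

# Illustrative numbers (assumed): 165 micron quartz sand in terrestrial air
d = 165e-6
print(static_shields(d, 2650.0, 1.2, 1.5e-5, 9.81))
print(threshold_shear_velocity(d, 2650.0, 1.2, 1.5e-5, 9.81))
```

With these assumed values the cohesionless static threshold comes out at roughly 0.3-0.4 m/s, above the dynamical value of about 0.2 m/s quoted in the main text, consistent with the aeolian hysteresis discussed in this appendix. Note that, without the cohesion correction derived below, the small-grain increase of the threshold is absent.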
Altogether, we then get: \\[N_{\\rm adh}\\propto\\gamma\\,\\frac{E^{2/3}}{M}\\,(N_{\\rm max}d)^{1/3}.\\] (A.18) In order to bring into motion such grains, the shear must be large enough to overcome both weight and adhesion, so that, for \\(N_{\\rm max}\\sim\\rho_{s}gd^{3}\\) (i.e. the weight of one grain), the critical Shields number is the sum of two terms and takes the form \\[\\Theta_{\\rm th}=\\Theta_{\\rm th}^{\\infty}\\left[1+\\frac{3}{2}\\left(\\frac{d_{m}}{d}\\right)^{5/3}\\right],\\] (A.19) with \\[d_{m}\\propto\\left(\\frac{\\gamma}{M}\\right)^{3/5}\\left(\\frac{E}{\\rho_{s}g}\\right)^{2/5}.\\] (A.20) Note that, by contrast to references [31, 32, 33, 34, 35], we find that the adhesive force finally scales, for \\(d\\to 0\\), as \\(d^{4/3}\\) and the critical Shields number as \\(d^{-5/3}\\) (instead of exponents \\(1\\) and \\(-2\\), respectively).

### Dynamical threshold

The dynamical threshold can be defined as the value of the control parameter for which the saturated flux vanishes. Fitting the flux _vs_ shear velocity relation, one can measure the dynamical threshold in a very precise way (see for instance the analysis in [20] of the data obtained by Rasmussen _et al._ [22]). As shown in [20], the wind velocity profile is almost undisturbed at the dynamical threshold. The shear stress threshold is thus achieved when the velocity that a grain acquires after a single jump is just sufficient to eject on average one single grain out of the bed after a collision (unit replacement capacity criterion). The impact velocity at this dynamical threshold is thus proportional to the trapping velocity, i.e. the velocity needed for a grain to escape from its potential trapping (see Quartier _et al._ [47]): \\[v_{\\downarrow}=a\\sqrt{gd\\left[1+\\frac{3}{2}\\left(\\frac{d_{m}}{d}\\right)^{5/3}\\right]}.\\] (A.21) An analytical expression of the dynamical threshold has been derived in [20], but only in the limit of large Reynolds numbers. To generate the plots displayed in figures 4 and A.2, we use here a numerical integration of the equations of motion of a grain - with equations (A.5) and (A.9) for the wind profile and drag coefficient. The trajectories are computed for an ejection velocity \\(v_{\\uparrow}\\) equal to \\(e\\,v_{\\downarrow}\\) and a typical ejection angle \\(\\pi/4\\) (see [20]).

Figure A.2: Diagram showing the mode of transport in the aeolian **(a)** and underwater **(b)** cases, as a function of the grain diameter \\(d\\) and of the turbulent shear velocity \\(u_{\\rm th}\\) (left) or of the wind speed \\(U_{\\rm th}\\) at 2 m above the soil (right). The dynamical threshold (dashed line) is below the static threshold (solid line) in the aeolian case but much above it underwater. The dark gray is the zone where no transport is possible. Above, the background color codes for the ratio \\(u_{*}/u_{\\rm fall}\\): white corresponds to negligible fluctuations and gray to suspension. The experimental points are taken from (\\(\\circ\\)) Chepil [48] and (\\(\\square\\)) Rasmussen [22, 20] in the aeolian case and from (\\(\\circ\\)) Yalin and Karahan [49] in the underwater case.

### From Earth to Mars

In order to get a reasonable transport diagram on Mars, we have first tuned the different parameters controlling the dynamic and static thresholds to reproduce _within the same model_ the underwater and aeolian data (figure A.2). For the friction coefficient \\(\\mu\\), we have simply taken the avalanche slope for typical aeolian grains, \\(\\tan(32^{\\circ})\\).
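The trajectory integration described above can be sketched as follows (an illustration, not the authors' code): a grain is launched at \\(\\pi/4\\) with speed \\(e\\,v_{\\downarrow}\\), the wind profile (A.5) and drag law (A.9) give the fluid force, and the impact speed is compared with \\(v_{\\downarrow}\\) to locate the dynamical threshold. The parameter values \\(a=15\\), \\(e=2/3\\), \\(d_{m}=25\\,\\mu\\)m and \\(r=1/10\\) are the calibrated ones quoted in the following paragraph; the launch height, the time step and the use of the instantaneous relative speed in \\(C_{d}\\) are additional assumptions of this sketch.

```python
import numpy as np

KAPPA, C_INF, S = 0.4, 1.0, 5.0

def wind(z, u_star, d, nu, r=0.1):
    # Equation (A.5), evaluated between Zbar = 1 and Zbar = Z
    Re0 = 2.0 * KAPPA * u_star * r * d / nu
    def term(Zb):
        return np.arcsinh(Re0 * Zb) + (1.0 - np.sqrt(1.0 + (Re0 * Zb)**2)) / (Re0 * Zb)
    Z = 1.0 + max(z, 0.0) / (r * d)
    return (u_star / KAPPA) * (term(Z) - term(1.0))

def drag_coefficient(speed, d, nu):
    # Equation (A.9), written with the grain-to-fluid relative speed
    return (np.sqrt(C_INF) + S * np.sqrt(nu / (max(speed, 1e-9) * d)))**2

def saltation_impact_speed(u_star, d, rho_s, rho_f, nu, g,
                           a=15.0, e=2.0 / 3.0, d_m=25e-6, dt=1e-4):
    """Integrate one grain trajectory launched at 45 degrees with speed e * v_down."""
    v_down = a * np.sqrt(g * d * (1.0 + 1.5 * (d_m / d)**(5.0 / 3.0)))   # equation (A.21)
    m = rho_s * np.pi * d**3 / 6.0
    z = 0.5 * d                                   # assumed launch height
    vx = vz = e * v_down / np.sqrt(2.0)
    while z > 0.0 or vz > 0.0:
        ux = wind(z, u_star, d, nu)
        rel = np.hypot(ux - vx, -vz)
        f = 0.125 * np.pi * drag_coefficient(rel, d, nu) * rho_f * d**2 * rel   # drag / |u_rel|
        vx += dt * f * (ux - vx) / m
        vz += dt * (f * (-vz) / m - g)
        z += dt * vz
    return np.hypot(vx, vz), v_down

# Illustrative terrestrial numbers (assumed): 165 micron quartz grains in air
impact, v_down = saltation_impact_speed(0.3, 165e-6, 2650.0, 1.2, 1.5e-5, 9.81)
print(impact, v_down)   # the dynamical threshold is the u_* for which impact ~ v_down
```

A simple bisection over \\(u_{*}\\) around this routine then gives the dynamical threshold curve as a function of \\(d\\), in the spirit of the calibration described next.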
As already found in [20], the rescaled soil roughness \\(r\\) is higher than the value found by Bagnold. It is here adjusted to \\(r=1/10\\). The value of \\(a\\), the impact velocity needed to eject statistically one grain rescaled by the trapping velocity, is adjusted to 15. The restitution coefficient was adjusted to \\(e=2/3\\). For obvious reasons it is possible to obtain almost the same curves with different pairs \\((a,\\,e)\\). Finally the cohesion length \\(d_{m}\\) was tuned to 25 \\(\\mu\\)m, which is consistent with the above calculation. We see in figure A.2 that the agreement is excellent both in the aeolian and underwater cases. Due to the low density ratio in water, the collision process is completely inefficient. As a consequence, the dynamical threshold is well above the static one: the only erosion mechanism is the direct entrainment of grains by the fluid (traction). On the other hand, the static threshold - computed with the same formula as that obtained in water - is well above the experimental data in the aeolian case: there is a very important hysteresis in that case. With both sets of data, we have thus a complete calibration of the dynamic and static threshold parameters, which allows the transport diagram on Mars (figure 4) to be computed with a good degree of confidence.

## References

* [1] S.G. Fryberger and G. Dean, in _A Study of Global Sand Seas_ (McKee editor), Geol. Surv. Prof. Pap. 1052, Washington, 137 (1979). * [2] K. Pye and H. Tsoar, _Aeolian sand and sand dunes_, Unwin Hyman, London (1990). * [3] See photos on the web site [http://barsoom.msss.com/moc_gallery/](http://barsoom.msss.com/moc_gallery/). * [4] M.S. Yalin, _River mechanics_, Pergamon press, Oxford (1992). * [5] S.E. Coleman and B.W. Melville, J. Hydraulic Eng. **122**, 301 (1996). * [6] S.E. Coleman and B. Eling, J. Hydraulic Res. **38**, 331 (2001). * [7] P. Hersen, S. Douady and B. Andreotti, Phys. Rev. Lett. **89**, 264301 (2002). * [8] P. Hersen, J. Geophys. Res. **110**, F04S07 (2005). * [9] R. Greeley, J.R. Marshall and R.N. Leach, Icarus **60**, 152 (1984). * [10] R.A. Massom, V.I. Lytle, A.P. Worby and I. Allison, J. Geophys. Res. **103**, NO. C11, 24837 (1998). * [11] R.A. Massom _et al._, Rev. Geophys. **39**, 413 (2001). * [12] K.S. Edgett and P.R. Christensen, J. Geophys. Res. **96**, 22765 (1991). * [13] R. Sullivan _et al._, Nature **436**, 58 (2005). * [14] B. Andreotti, P. Claudin and S. Douady, Eur. Phys. J. B **28**, 341 (2002). * [15] H. Elbelrhiti, P. Claudin and B. Andreotti, Nature **437**, 720 (2005). * [16] P.S. Jackson and J.C.R. Hunt, Quart. J. R. Met. Soc. **101**, 929 (1975). * [17] K. Kroy, G. Sauermann and H.J. Herrmann, Phys. Rev. E **66**, 031302 (2002). * [18] J.E. Ungar and P.K. Haff, Sedimentology **34**, 289 (1987). * [19] R.S. Anderson and P.K. Haff, Science **241**, 820 (1980). * [20] B. Andreotti, J. Fluid Mech. **510**, 47 (2004). * [21] R.S. Anderson and P.K. Haff, Acta Mechanica (suppl.) **1**, 21 (1991). * [22] K.R. Rasmussen, J.D. Iversen and P. Rautaheimo, Geomorphology **17**, 19 (1996). * [23] R.A. Bagnold, _The physics of blown sand and desert dunes_, Methuen, London (1941). * [24] G. Sauermann, K. Kroy and H.J. Herrmann, Phys. Rev. E **64**, 031305 (2001). * [25] F. Charru, _Selection of the ripple length on a granular bed sheared by a liquid flow_, preprint (2006). * [26] R.S. Anderson, Earth Science Rev. **29**, 77 (1990). * [27] R. Cooke, A. Warren and A. Goudie, _Desert geomorphology_, UCL Press, London (1993). * [28] R.
Greeley, R. Leach, B.R. White, J.D Iversen and J.B. Pollack, Geophys. Res. Lett. **7**, 121 (1980). * [29] C. Sagan and R.A. Bagnold, Icarus **26**, 209 (1975). * [30] M.C. Miller and P.D. Komar, Sedimentology **24**, 709 (1977). * [31] J.D. Iversen, J.B. Pollack, R. Greeley and B.R. White, Icarus **29**, 381 (1976). * [32] J.B. Pollack, R. Haberle, R. Greeley and J.D. Iversen, Icarus **29**, 395 (1976). * [33] J.D. Iversen and B.R. White, Sedimentology **29**, 111 (1982). * [34] Y. Shao and H. Lu, J. Geophys. Res. **105**, 22437 (2000). * [35] W.M. Cornelis, D. Gabriels and R. Hartman, Geomorphology **59**, 43 (2004). * [36] D.J. Jerolmack, D. Mohrig, J.P. Grotzinger, D.A. Fike and W.A. Watters, to appear in J. Geophys. Res. * [37] B. Hakansson, private comm. * [38] C.S. Breed, M.J. Grolier and J.F. MacCauley, J. Geophys. Res. **84**, 8183 (1979). * [39] K. Kroy, S. Fischer and B. Obermayer, J. Phys. Cond. Matt. **17**, S1229 (2005). * [40] P. Hersen, H. Elbelrhiti, K.H. Andersen, B. Andreotti, P. Claudin and S. Douady, Phys. Rev. E **69**, 011304 (2004). * [41] H. Elbelrhiti, B. Andreotti and P. Claudin, _Field characterization of barchan dune corridors_, preprint (2006). * [42] R.D. Lorenz _et al._ Science **312**, 724 (2006). * [43] R.D. Lorenz, J.I. Lunine, J.A. Grier and M.A. Fisher, J. Geophys. Res. **100**, 26377 (1995). * [44] R.I. Ferguson and M. Church, J. Sedim. Res. **74**, 933 (2004). * [45] K.L. Johnson, _Contact mechanics_, Cambridge University Press, Cambridge (1985). * [46] J.-M. Georges, _Frottement, usure et lubrification_, Eyrolles, Paris (2000). * [47] L. Quartier, B. Andreotti, A. Daerr and S. Douady, Phys. Rev. E, **62**, 8299 (2000). * [48] W.S. Chepil, Soil Science. **60**, 397 (1945). * [49] M.S. Yalin, E.J. Karahan, Hydraul. Div., Am. Soc. Civil Eng. **105**, 1433 (1979).
The linear stability analysis of the equations governing the evolution of a flat sand bed submitted to a turbulent shear flow predicts that the wavelength \\(\\lambda\\) at which the bed destabilises to form dunes should scale with the drag length \\(L_{\\rm drag}=\\frac{\\rho_{s}}{\\rho_{f}}\\,d\\). This scaling law is tested using existing and new measurements performed in water (subaqueous ripples), in air (aeolian dunes and fresh snow dunes), in a high pressure CO\\({}_{2}\\) wind tunnel reproducing conditions close to the Venus atmosphere and in the low pressure CO\\({}_{2}\\) martian atmosphere (martian dunes). A difficulty is to determine the diameter of saltating grains on Mars. A first estimate comes from photographs of aeolian ripples taken by the rovers Opportunity and Spirit, showing grains whose diameters are smaller than on Earth dunes. In addition we calculate the effect of cohesion and viscosity on the dynamic and static transport thresholds. It confirms that the small grains visualised by the rovers should be grains experiencing saltation. Finally, we show that, within error bars, the scaling of \\(\\lambda\\) with \\(L_{\\rm drag}\\) holds over almost five decades. We conclude with a discussion on the time scales and velocities at which these bed instabilities develop and propagate on Mars.
Provide a brief summary of the text.
arxiv-format/0604013v2.md
# Is it possible to observationally distinguish adiabatic Quartessence from \\(\\Lambda\\)CDM?

L. Amendola, INAF/Osservatorio Astronomico di Roma, Via Frascati 33, I-00040 Monte Porzio Catone, RM, Italy

M. Makler, Centro Brasileiro de Pesquisas Fisicas, CEP 22290-180, Rio de Janeiro, RJ, Brasil

R. R. Reis, Universidade Federal do Rio de Janeiro, Instituto de Fisica, CEP 21941-972, Rio de Janeiro, RJ, Brazil

I. Waga, Universidade Federal do Rio de Janeiro, Instituto de Fisica, CEP 21941-972, Rio de Janeiro, RJ, Brazil

November 3, 2021

## I Introduction

In the current standard cosmological model, two unknown components govern the dynamics of the Universe: dark matter (DM), responsible for structure formation, and dark energy (DE), that drives cosmic acceleration. Recently, an alternative point of view has started to attract considerable interest. According to it, DM and DE are simply different manifestations of a single unifying dark-matter/energy (UDM) component. Since it is assumed that there is only one dark component in the Universe, besides ordinary matter, photons and neutrinos, UDM is also referred to as quartessence [1]. A prototype candidate for such unification is the quartessence Chaplygin model (QCM) [2]. Although this model is compatible with the background data [3], problems appear when one considers (adiabatic) perturbations. For instance, the CMB anisotropy is strongly suppressed when compared with the \\(\\Lambda\\)CDM model [4]. Further, it was shown that the matter power spectrum presents oscillations and instabilities that reduce the parameter space of the model to a region very close to the \\(\\Lambda\\)CDM limit [5]. However, these oscillations and instabilities in the matter power spectrum and the CMB constraints can be circumvented by assuming silent perturbations [6; 7], i.e., intrinsic entropy perturbations with a specific initial condition (\\(\\delta p=0\\)). In fact, silent perturbations solve the matter power spectrum problem for more generic quartessence [8]. Efforts to solve the matter power spectrum problem have also been put forward in [9] and [10]. However, we understand that these works are not, strictly speaking, quartessence models. In fact, [9] introduces what seems to be a particular splitting of the Chaplygin model. It is a two component system, although only one component is perturbed. A way to implement silent perturbations is presented in [10], but they use additional fields that can be interpreted as new matter components. In this work we present a possible alternative to solve the above mentioned problems in the context of the more standard adiabatic perturbations scenario. We shall discuss a model in which the quartessence EOS changed its concavity at some instant in the past. We focus our investigation on models with a step-like EOS. We show that, in order to be in accordance with observations, the EOS concavity change would have occurred at high redshifts. Similarly to what happens in the Chaplygin case, observations constrain one of the parameters of the model to such a low value that, at least at zero and first orders, the step-like model cannot be observationally distinguished from the \\(\\Lambda\\)CDM model.

## II A new type of quartessence

In the quartessence models explicitly analyzed up to now, the EOS is convex, i.e., is such that \\[\\frac{d^{2}p}{d\\rho^{2}}=\\frac{dc_{s}^{2}}{d\\rho}<0. \\tag{1}\\] Stability for adiabatic perturbations and an adiabatic sound speed less than \\(c\\) imply \\[0\\leq c_{s}^{2}\\leq 1. \\tag{2}\\]
Condition (2) and the fact that \\(p<0\\) immediately imply the existence of a minimum energy density \\(\\rho_{\\min}\\), once the energy conservation equation is used. This is a generic result for any uncoupled fluid model in which \\(w=w\\left(\\rho\\right)\\). It implies that the \\(p=-\\rho\\) line cannot be crossed and that in any such quartessence model the minimum value of the EOS parameter is \\(w_{\\min}=-1\\). The convexity condition (1) implies that \\(c_{s\\,\\,\\max}^{2}\\) occurs at \\(\\rho=\\rho_{\\min}\\). This last result is only a consequence of the convexity of the EOS. In this case, the epoch of accelerated expansion is also a period of high adiabatic sound speed, causing the oscillations and suppressions in the power spectrum. However, this property is not mandatory for quartessence. Models with concavity-changing equations of state may have \\(c_{s}^{2}\\) negligibly small at \\(\\rho\\simeq\\rho_{\\min}\\). As we shall show, it is possible to build models in which a non-negligible \\(c_{s}^{2}\\) is a transient phenomenon, relevant only at a very early epoch, such that only perturbations with relatively large wave numbers (outside the range of current linear power spectrum measurements) are affected. The step-like quartessence, given by a sigmoid, is an example of UDM with a concavity-changing EOS (see figure 1, left panel), \\[p=-M^{4}\\left\\{\\frac{1}{1+\\exp\\left[\\beta\\left(\\frac{\\rho}{M^{4}}-\\frac{1}{\\sigma}\\right)\\right]}\\right\\}. \\tag{3}\\] For this model, the adiabatic sound speed has the following expression, \\[c_{s}^{2}=\\beta\\frac{\\exp\\left[\\beta\\left(\\frac{\\rho}{M^{4}}-\\frac{1}{\\sigma}\\right)\\right]}{\\left\\{1+\\exp\\left[\\beta\\left(\\frac{\\rho}{M^{4}}-\\frac{1}{\\sigma}\\right)\\right]\\right\\}^{2}}. \\tag{4}\\] There are three free parameters in the model. The parameter \\(M\\) is related to the minimum value of the energy density, i.e., the value of \\(\\rho\\) when the asymptotic EOS, \\(p_{\\min}=-\\rho_{\\min}\\), is reached. The parameter \\(\\sigma\\) is related to the value of the energy density at the transition from the \\(p\\simeq 0\\) regime to the \\(p\\simeq-M^{4}\\) one (\\(\\rho_{\\rm trans}=M^{4}/\\sigma\\)). Notice that if \\(\\sigma\\ll 1\\), the transition takes place well before the minimum density is reached. The parameter \\(\\beta\\) controls the maximum sound velocity \\(c_{s\\,\\max}^{2}\\) as well as the redshift width of the transition region (higher values of \\(\\beta\\) implying faster transitions). For the sigmoid EOS the maximum adiabatic sound speed is given by \\(c_{s\\,\\,\\max}^{2}=\\beta/4\\), and therefore we require \\(0\\leq\\beta\\leq 4\\). In the present model, the \\(\\Lambda\\)CDM limit is not necessarily associated with the maximum sound speed, in contrast to what is found in the convex EOS case. The \\(\\Lambda\\)CDM limit is reached when \\(\\sigma\\to 0\\), which implies \\(p=-\\rho=-M^{4}\\). Another possibility is to take \\(\\beta\\to 0\\). In this case \\(c_{s\\,\\,\\max}^{2}\\to 0\\) and we also have a \\(\\Lambda\\)CDM limit, but now with \\(p=-\\rho=-M^{4}/2\\). Since \\(\\beta\\) strongly affects the redshift width of the transition, these two limits have different characteristics. The case of a nonvanishing \\(\\beta\\ll 1\\) has a drastic effect on the matter power spectrum. In fact, although the maximum sound speed will be small, it will be non-negligible over a long redshift range and/or time, practically ruling out these models.
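For illustration, the background evolution implied by the sigmoid EOS (3)-(4) can be obtained by integrating the standard energy conservation equation \\(d\\rho/d\\ln a=-3(\\rho+p)\\). The sketch below (Python/NumPy, not the code used for the analysis in this paper) uses parameter values of the order of those favoured later in the paper; the clipping of the exponent is purely a numerical safeguard, and the simple Euler stepping is only meant to illustrate the behaviour of \\(w(z)\\).

```python
import numpy as np

def pressure(rho, M4, sigma, beta):
    """Sigmoid (step-like) EOS of equation (3); rho and M4 in the same units."""
    x = np.clip(beta * (rho / M4 - 1.0 / sigma), -60.0, 60.0)   # clipped to avoid overflow
    return -M4 / (1.0 + np.exp(x))

def sound_speed2(rho, M4, sigma, beta):
    """Adiabatic sound speed c_s^2 = dp/drho, equation (4)."""
    x = np.clip(beta * (rho / M4 - 1.0 / sigma), -60.0, 60.0)
    return beta * np.exp(x) / (1.0 + np.exp(x))**2

def background(M4_over_rho0=0.7, sigma=4e-5, beta=2.0, lna_steps=20000):
    """Integrate d rho / d ln a = -3 (rho + p) from a = 1 back to a = 1e-3 (simple Euler)."""
    rho, w_of_z = 1.0, []                    # rho in units of the present quartessence density
    lna = np.linspace(0.0, -np.log(1000.0), lna_steps)
    dlna = lna[1] - lna[0]                   # negative: we step towards the past
    for _ in lna:
        p = pressure(rho, M4_over_rho0, sigma, beta)
        w_of_z.append(p / rho)
        rho += -3.0 * (rho + p) * dlna       # rho grows towards the past, as it should
    return np.exp(-lna) - 1.0, np.array(w_of_z)

z, w = background()
print("w today:", w[0], "   w at z ~ 100:", w[np.argmin(np.abs(z - 100.0))])
```

With \\(M^{4}/\\rho_{0}=0.7\\) and \\(\\sigma=4\\times 10^{-5}\\), the EOS parameter switches from \\(w\\simeq 0\\) to \\(w\\simeq-0.7\\) at a redshift of a few tens, in line with the transition redshift derived in the next section.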
We note that a step-like quartessence may be represented by the more generic expression, \\[p=M^{4}f\\left[\\beta\\left(\\frac{\\rho}{M^{4}}-\\frac{1}{\\sigma}\\right)\\right], \\tag{5}\\] where \\(f\\) is a step-like function, with \\(f(+\\infty)=0\\) and \\(f(-\\infty)=-1\\). The maximum adiabatic sound speed is \\(c_{s\\,\\,\\max}^{2}=\\beta f_{max}^{\\prime}\\). For \\(\\sigma\\ll 1\\), \\(p_{\\min}=-M^{4}\\).

## III Observational constraints

The zeroth order quantities (such as the luminosity and angular diameter distances) depend only on integrals of the Hubble parameter. Therefore, they are not very sensitive to local features of the function \\(\\rho\\left(a\\right)\\). In particular, they should not depend on the specific form of the transition from \\(p=0\\) to \\(p=-M^{4}\\). For example, for small values of \\(\\sigma\\), the observational data (from SNIa, for instance) constrain only \\(M^{4}\\), not \\(\\sigma\\) or \\(\\beta\\). Thus we expect the background observational constraints to be highly degenerate for small \\(\\sigma\\) (\\(\\sigma\\lesssim 0.1\\)). Further, as will be shown, first order tests, such as cosmic microwave background fluctuations or large-scale structure data, constrain the value of \\(\\sigma\\) to be very small (\\(\\sigma\\ll 1\\)). Therefore, a real step function is a good model independent approximation for the background evolution in the type of quartessence we are dealing with in this paper.

Figure 1: left panel: pressure (\\(p\\)) as a function of the energy density (\\(\\rho\\)) for the sigmoid EOS. Also shown is the \\(p=-\\rho\\) line. right panel: typical behavior of the EOS parameter (\\(w\\)) and the adiabatic sound speed (\\(c_{s}^{2}\\)) as a function of the redshift (\\(z\\)).

Figure 3: Constant confidence contours (68% and 95%) in the (\\(M^{4}/\\rho_{0},\\sigma\\)) plane allowed by CMB (WMAP) (left panel) and matter power spectrum (SDSS) [14] (right panel).

Figure 2: Constant confidence contours (68% and 95%) in the (\\(M^{4}/\\rho_{0},\\arctan\\sigma\\)) plane allowed by SNeIa (left panel) and X-ray galaxy clusters (right panel).

In the following we derive constraints on the parameters \\(\\sigma\\) and \\(M^{4}/\\rho_{0}\\) from four data sets: SNIa, X-ray cluster gas mass fraction, galaxy power spectrum and CMB fluctuations. Here, \\(\\rho_{0}\\) is the present value of the quartessence energy density. For the sake of simplicity, in our computations we fixed the parameter \\(\\beta\\) to the intermediate value \\(\\beta=2\\). We remark that, for small values of \\(\\sigma\\), \\(-M^{4}/\\rho_{0}\\simeq w_{0}\\), where \\(w_{0}\\) is the present equation of state parameter. It is worth pointing out that \\(w_{0}\\) should not be compared to the usual dark energy EOS \\(w_{DE}\\) but with \\(w_{\\rm eff}\\equiv w_{tot}\\Omega_{tot}\\). In a flat Universe and neglecting the small amount of baryons, \\(w_{\\rm eff}\\equiv w_{DE}\\Omega_{DE}\\). Values around \\(M^{4}/\\rho_{0}\\approx 0.7\\) are therefore to be expected. In our SNIa analysis we use the "gold" data set of Riess _et al._ [11]. To determine the likelihood of the parameters we follow the same procedure described in [7], assuming flat priors when marginalizing over the baryon density parameter \\(\\Omega_{0b}h^{2}\\) and the Hubble parameter \\(h\\). For the galaxy cluster analysis, we use the _Chandra_ measurements of the X-ray gas mass fraction data from Allen _et al._ [12].
Again, we follow the same procedure described in [7] to determine the confidence regions of the parameters of the model. We first marginalize analytically over the bias \\(b\\), using a Gaussian prior with \\(b=0.824\\pm 0.089\\), and then, as in the SNIa analysis, we marginalize over \\(\\Omega_{b0}h^{2}\\) and \\(h\\) assuming flat priors. In figure 2 we show constant \\(68\\,\\%\\) and \\(95\\,\\%\\) confidence level contours on the parameters \\(M^{4}/\\rho_{0}\\) and \\(\\sigma\\) for SNIa and X-ray galaxy clusters. From the figure it is clear that, as expected, background tests impose only weak constraints on the parameter \\(\\sigma\\). In order to obtain constraints on \\(M^{4}/\\rho_{0}\\) and \\(\\sigma\\) from CMB data [13] we follow the procedure described in [7], fixing \\(T_{CMB}=2.726\\) K, \\(Y_{He}=0.24\\) and \\(N_{\\nu}=3.04\\), and marginalizing over the other parameters, namely, \\(\\Omega_{b0}h^{2}\\), \\(h\\), the spectral index \\(n_{s}\\), the optical depth \\(\\tau\\) and the normalization \\(N\\). In figure 3 (left panel) we show the confidence region on the parameters for the CMB. Note that \\(\\sigma\\) plays a decisive role in the evolution of perturbations; now the data constrain this parameter to be \\(\\sigma\\lesssim 3\\times 10^{-3}\\). We next consider the matter power spectrum, comparing the baryon spectrum with data from SDSS [14]. To compute the likelihood, we used a version of the code provided by M. Tegmark [15], cutting at \\(k=0.20\\)\\(h\\)Mpc\\({}^{-1}\\) (19 bands) and marginalizing over \\(\\Omega_{b0}h^{2}\\), \\(h\\), \\(n_{s}\\) and the amplitude. In figure 3 (right panel) we show the \\(68\\,\\%\\) and \\(95\\,\\%\\) confidence levels on \\(\\sigma\\) and \\(M^{4}/\\rho_{0}\\) from the SDSS power spectrum. This is the most restrictive test we have considered in this work, implying that \\(\\sigma\\lesssim 7\\times 10^{-5}\\). In figure 4 we display the constant (\\(68\\,\\%\\) and \\(95\\,\\%\\)) confidence contours for the combined analysis SNIa + X-ray galaxy clusters + matter power spectrum + CMB data. Our final result (\\(95\\,\\%\\)) is \\(0.68\\lesssim M^{4}/\\rho_{0}\\lesssim 0.78\\) and \\(0<\\sigma\\lesssim 4\\times 10^{-5}\\). It is straightforward to show that the transition redshift from a pressureless epoch to a constant negative pressure period is given by \\(z_{t}\\simeq[(M^{4}/\\rho_{0})(1-\\sigma)/((1-M^{4}/\\rho_{0})\\sigma)]^{1/3}\\). Therefore, assuming \\(M^{4}/\\rho_{0}\\sim 0.7\\) and since \\(\\sigma\\lesssim 4\\times 10^{-5}\\), the transition from \\(p=0\\) to \\(p=-M^{4}\\) would have occurred at \\(z_{t}\\gtrsim 38\\).

Figure 4: \\(68\\,\\%\\) and \\(95\\,\\%\\) confidence contours in the \\((M^{4}/\\rho_{0},\\sigma)\\) plane for the combined analysis SNIa + galaxy clusters + matter power spectrum + CMB.

## IV Conclusion

In this work we presented a new adiabatic quartessence model characterized by a change of concavity in the EOS. We obtained the constraints on the model parameters from SNIa, the X-ray gas mass fraction in galaxy clusters, the CMB and the matter power spectrum, and showed that the model is viable if \\(\\sigma\\lesssim 4\\times 10^{-5}\\). The redshift of the transition from the regime \\(p\\simeq 0\\) to \\(p\\simeq const.<0\\) is, therefore, \\(z_{t}\\gtrsim 38\\). On the other hand, the inclusion of matter power spectrum data for smaller scales (\\(k\\gtrsim 0.2\\) h Mpc\\({}^{-1}\\)) could impose stronger constraints upon \\(\\sigma\\), pushing the minimum redshift of the transition to higher values.
We checked that this is, in fact, the case by considering data from the matter power spectrum from the Lyman-alpha forest [17]. However, since there are still systematic uncertainties in this data we did not include them in our analysis. Although differences between quartessence models and \\(\\Lambda\\)CDM may exist in the nonlinear regime [16], the results of the present work, in combination with the results of [5] and [7], indicate that, at zero and first orders, any (convex or not) successful adiabatic quartessence model cannot be observationally distinguished from \\(\\Lambda\\)CDM. ###### Acknowledgements. We thank Roberto Colistete and Miguel Quartin for useful discussions. The CMB computations have been performed at CINECA (Italy) under the agreement INAF@CINECA. We thank the staff for support. RRRR and IW are partially supported by the Brazilian research agencies CAPES and CNPq, respectively. LA thanks the Gunma Nat. Coll. of Techn. (Japan) for hospitality during the later stages of this work and JSPS for financial support. ## References * (1) M. Makler, S.Q. Oliveira, and I. Waga, Phys. Lett. B **555**, 1, 0209486 (2003). * (2) A. Kamenshchik, U. Moschella, and V. Pasquier, Phys. Lett. B **511**, 265 (2001); M. Makler, _Gravitational Dynamics of Structure Formation in the Universe_, PhD Thesis, Brazilian Center for Research in Physics (2001); N. Bilic, G.B. Tupper, and R.D. Viollier, Phys. Lett. B **535**, 17 (2002); M.C. Bento, O. Bertolami, and A.A. Sen, Phys. Rev. D **66**, 043507 (2002). * (3) M. Makler, S.Q. Oliveira, and I. Waga, Phys. Rev. D **68**, 123521 (2003); A. Dev, D. Jain, and J.S. Alcaniz, Astron. Astrophys. **417**, 847 (2004); Z.-H. Zhu, Astron. Astrophys. **423**, 412 (2004); R. Colistete Jr. and J.C. Fabris, Class. Quant. Grav. **22**, 2813 (2005); M.C. Bento, O. Bertolami, N.M.C. Santos, and A.A. Sen, Phys. Rev. D **71**, 063501 (2005). * (4) D. Carturan and F. Finelli, Phys. Rev. D **68**, 103501 (2003); L. Amendola, F. Finelli, C. Burigana and D. Carturan, JCAP **07**, 005 (2003). * (5) H.B. Sandvik, M. Tegmark, M. Zaldarriaga, and I. Waga, Phys. Rev. D **69**, 123524 (2004). * (6) R.R. Reis, I. Waga, M.O. Calvao, and S.E. Joras, Phys. Rev. D **68**, 061302(R) [2003]. * (7) L. Amendola, I. Waga, and F. Finelli, JCAP **11**, 009 (2005). * (8) R.R.R. Reis, M. Makler, and I. Waga, Class. Quantum Grav. **22**, 353 (2005). * (9) M.C. Bento, O. Bertolami and A.A. Sen, Phys. Rev. D **70**, 083519 (2004). * (10) N. Bilic, G.B. Tupper and R.D. Viollier, hep-th/0504082. * (11) A.G. Riess _et al._, Astrophys. J. **607**, 665 (2004). * (12) S.W. Allen _et al._, Monthly Notices of the Royal Astron. Society **353**, 457 (2004). * (13) G. Hinshaw _et al._ [the WMAP collaboration], Astrophys. J. Suppl. **148**, 135 (2003); L. Verde _et al._ [the WMAP collaboration], Astrophys. J. Suppl. **148**, 195 (2003). * (14) M. Tegmark, _et al._ [the SDSS collaboration], Phys. Rev. D **69**, 103501 (2004); M. Tegmark, _et al._ [the SDSS collaboration], Astrophys. J. **606**, 702 (2004). * (15)[http://space.mit.edu/home/tegmark/sdss.html](http://space.mit.edu/home/tegmark/sdss.html) * (16) D. Giannakis and W. Hu, Phys. Rev. D **72**, 063502 (2005). * (17) R.A.C. Croft _et al._, Astrophys. J. **520**, 1 (1999); N.Y. Gnedin and A.J.S. Hamilton, Monthly Notices of the Royal Astron. Society **334**, 107 (2002).
The equation of state (EOS) in quartessence models interpolates between two stages: \\(p\\simeq 0\\) at high energy densities and \\(p\\approx-\\rho\\) at small ones. In the quartessence models analyzed up to now, the EOS is convex, implying an increasing adiabatic sound speed (\\(c_{s}^{2}\\)) as the energy density decreases in an expanding Universe. A non-negligible \\(c_{s}^{2}\\) at recent times is the source of the matter power spectrum problem that plagued all convex (non-silent) quartessence models. Viability for these cosmologies is only possible in the limit of almost perfect mimicry of \\(\\Lambda\\)CDM. In this work we investigate if similarity to \\(\\Lambda\\)CDM is also required in the class of quartessence models whose EOS changes concavity as the Universe evolves. We focus our analysis on the simple case in which the EOS has a step-like shape, such that at very early times \\(p\\simeq 0\\), and at late times \\(p\\simeq const<0\\). For this class of models a non-negligible \\(c_{s}^{2}\\) is a transient phenomenon, and could be relevant only at an earlier epoch. We show that agreement with a large set of cosmological data requires that the transition between these two asymptotic states would have occurred at high redshift (\\(z_{t}\\gtrsim 38\\)). This leads us to conjecture that the cosmic expansion history of any successful non-silent quartessence is (practically) identical to the \\(\\Lambda\\)CDM one.
Summarize the following text.
arxiv-format/0604022v1.md
# A Qualitative Description of Boundary Layer Wind Speed Records

Rajesh G. Kavasseri \\(\\dagger\\) and Radhakrishnan Nagarajan \\(\\ddagger\\)

\\(\\dagger\\) Department of Electrical and Computer Engineering, North Dakota State University, Fargo, ND 58105 - 5285, email: [email protected]

\\(\\ddagger\\) 629 Jack Stephens Drive, # 3105, University of Arkansas for Medical Sciences, Little Rock, Arkansas 72212

## 1 Introduction

Atmospheric phenomena are accompanied by variations at spatial and temporal scales. In the present study, qualitative aspects of temporal wind speed data recorded at an altitude of 10 ft from the earth's surface are discussed. Such recordings fall within the atmospheric boundary layer (ABL), which is the region 1-2 km from the earth's surface [1]. Flows in the ABL, which are generally known to be turbulent, are influenced by a number of factors including shearing stresses, convective instabilities, surface friction and topography [1, 2]. The study of laboratory scale turbulent velocity fields has received a lot of attention in the past (see [3] for a summary). A. N. Kolmogorov [4, 5] (K41) proposed a similarity theory where energy in the inertial sub-range is cascaded from the larger to smaller eddies under the assumption of local isotropy. For this reason, K41 statistics is also termed small-scale turbulence statistics. The seminal work of Kolmogorov encouraged researchers to investigate scaling properties of turbulent velocity fields using the concepts of fractals [6]. Subsequent works of Parisi and Frisch [7] and Meneveau and Sreenivasan [8, 9, 10] provided a multifractal description of turbulent velocity fields. While there has been a precedent for scaling behavior in turbulence at microscopic scales [4, 5, 6, 7, 8, 9, 10, 11], it is not necessary that such a scaling manifest itself at macroscopic scales, although there have been indications of "unified scaling" models of atmospheric dynamics [12]. Several factors can significantly affect the behavior of a complex system such as the ABL [2, 1]. Thus, an extension of these earlier findings [4, 5, 6, 7, 8, 9, 10, 11] to the present study is neither immediate nor obvious. Attempts have also been made to simulate the behavior of the ABL [13, 14]. However, there are implicit assumptions made in these studies and often there can be significant discrepancies between simulated flows and the actual phenomenon when these assumptions are violated [3]. On the other hand, knowledge about the nature of wind speed in the ABL has far reaching impact on several fields of research. In particular, the need to obtain accurate statistical descriptions of flows in the ABL from actual site recordings is both urgent and important, given its utility in the planning, design and efficient operation of wind turbines [15]. Therefore, analysis of wind speed records based on numerical techniques has been gaining importance in recent years. In [16], long term daily records of wind speed and direction were represented as a two dimensional random walk, and the results reinforce the important role that memory effects have on the dynamics of complex systems. In [17], the short term memory of recorded surface wind speed records is utilized to build \\(m\\)'th order Markov chain models, from which probabilistic forecasts of short time wind gusts are made. In [18], the authors study the correlations in wind speed data sets over a span of 24 hours, using detrended fluctuation analysis (DFA) [19] and its extension, multifractal DFA (MF-DFA) [20].
Their studies show that the records display long range correlations with a fluctuation exponent of \\(\\alpha\\sim 1.1\\) along with a broad multifractal spectrum. In addition, they also suggest the need for detailed analysis of data sets from several other stations to ascertain whether such features are characteristic of wind speed records. In [21], it is shown that rare events such as wind gusts in wind speed data sets that are long range correlated are themselves long range correlated. In [22], it is shown that surface layer wind speed records can be characterized by multiplicative cascade models with different scaling relations in the microscale inertial range and the mesoscale. Our previous studies [23] suggest that at short time scales, hourly average wind speed records are characterized by a scaling exponent \\(\\alpha\\sim 1.4\\) and, at large time scales, by an exponent of \\(\\alpha\\sim 0.7\\). A deeper examination of the data sets in [26] using MF-DFA indicated that the records also admitted a broad multifractal spectrum under the assumption of a binomial multiplicative cascade model. Interestingly, scaling phenomena have also been found in fluctuations of meteorological variables that influence wind speed, such as air humidity [24], temperature records and precipitation [25]. In [25], it is observed that while temperature records display long range correlations (\\(\\alpha\\sim 0.7\\)), they do not display a broad multifractal spectrum. On the other hand, precipitation records display a very weak degree of correlation (\\(\\alpha\\sim 0.5\\)) [25]. While it is difficult to directly relate the scaling results of these variables to those of wind speed, greater insight can be gained by analyzing data sets that are recorded over long spans from different meteorological stations. Motivated by these findings, we chose to investigate the temporal aspects of wind speed records dispersed over a wide geographical area. In the present study, we follow a systematic approach in determining the nature of the scaling of wind speed records recorded at an altitude of 10 ft across 28 spatially separated locations spanning an area of approximately 70,000 sq. mi and recorded over a period of nearly 8 years in the state of North Dakota. As noted earlier, convective instabilities and topography can have a prominent impact on the flow in the ABL. The air motion over North Dakota is governed by the flow of three distinct air masses with distinct qualities, namely from: (i) the polar regions to the north (cold and dry), (ii) the Gulf of Mexico to the south (warm and moist) and (iii) the northern Pacific region (mild and dry). The rapid progression and interaction of these air masses results in the region being subject to considerable climatic variability. These in turn can have a significant impact on the convective instabilities which govern the flow in the ABL. On the other hand, the topography of the region shows sharp contrasts between the eastern and western parts of the state because of their approximate separation by the boundary of continental glaciation. The eastern regions have a soft topography compared to the western region, which consists mostly of rugged bedrock. In the present study, we show that the qualitative characteristics of the wind speed records do not change across the spatially separated locations despite the contrasting topography.
This leads us to hypothesize that the confluence of the air masses, as opposed to the topography, plays a primary role in governing the wind dynamics over ND.

## 2 Methods

Spectral analysis of a stationary process is related to the correlations in it by the Wiener-Khinchin theorem [27] and has been used for detecting possible long-range correlations. In the present study, we observed broad-band power-law decay superimposed with peaks. This spectral signature was consistent across all the 28 stations (Fig. 1(b)). Such power-law processes lack well-defined scales and have been attributed to self-organized criticality, intermittency, self-similarity [28, 29] and multiscale randomness [30]. Superimposed on the power law spectrum were two high frequency peaks which occur at \\(t_{1}=24\\) and \\(t_{2}=12\\) hours, corresponding to diurnal and semi-diurnal cycles respectively. Power-law decay of the power spectrum (Fig. 1(b)) can provide cues to possible long-range correlations, but it is susceptible to trends and non-stationarities which are ubiquitous in recordings of natural phenomena. While several estimators [31, 32] have been proposed in the past for determining the scaling exponents from the given data, detrended fluctuation analysis (DFA) [19] and its extension, generalized multifractal DFA (MF-DFA) [20], have been widely used to determine the nature of the scaling in data obtained from diverse settings [20, 33, 34, 35, 36]. In DFA, the scaling exponent for the given monofractal data is determined from a least-squares fit to the log-log plot of the second-order fluctuation function versus the time scale, i.e. \\(F_{q}(s)\\) vs \\(s\\) with \\(q=2\\). For MF-DFA, the variation of the fluctuation function with time scale is determined for varying \\(q\\) (\\(q\\neq 0\\)).

Figure 1: (a) Temporal trace of hourly average wind speed record (miles/hour) at one of the representative stations (Baker 1N, refer to Table 1 for details) over a period of nearly 8 years. (b) The corresponding power spectrum exhibits a power law decay of the form \\(S(f)\\sim 1/f^{\\beta}\\). Superimposed on the power spectrum are prominent peaks which correspond to multiple sinusoidal trends. (c) Log-log plots of the fluctuation function versus time scale, \\(F_{q}(s)\\) vs \\(s\\), for the moments \\(q=-10\\) (*), -6 (triangle), -0.2 (\\(\\times\\)), 2 (.), 6 (\\(\\circ\\)) and \\(q=10\\) (+). (d) Multifractal spectrum of the record determined under the assumption of a binomial multiplicative cascade model.

The superiority of DFA and MF-DFA to other estimators, along with a complete description, is discussed elsewhere [33]. DFA and MF-DFA procedures follow a differencing approach that can be useful in eliminating local trends [19]. However, recent studies have indicated the susceptibility of DFA and MF-DFA to higher order polynomial trends. Subsequently, DFA-\\(n\\) [37] was proposed to eliminate polynomial trends up to order \\(n-1\\). In the present study, we have used polynomial detrending of order four. However, such an approach might not be sufficient to remove sinusoidal trends, which can be periodic [39] or quasiperiodic (see discussion in Appendix A). Data sets spanning a number of years, as discussed here, are susceptible to seasonal trends that can be periodic or quasiperiodic in nature.
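As a concrete reference for the procedure described above, the following minimal sketch (Python/NumPy, an illustration rather than the code used in this study) computes the MF-DFA fluctuation functions \\(F_{q}(s)\\) with fourth order detrending, following the standard prescription of [20]. It is exercised on synthetic white noise, for which \\(h(q)\\approx 0.5\\) is expected for all \\(q\\).

```python
import numpy as np

def mfdfa(x, scales, q_values, order=4):
    """Minimal MF-DFA: returns F_q(s) for each q (rows) and scale s (columns)."""
    y = np.cumsum(x - np.mean(x))                 # profile of the series
    F = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(y) // s
        # non-overlapping segments taken from the start and from the end, as in [20]
        segments = [y[i * s:(i + 1) * s] for i in range(n_seg)]
        segments += [y[len(y) - (i + 1) * s:len(y) - i * s] for i in range(n_seg)]
        t = np.arange(s)
        var = np.array([np.mean((seg - np.polyval(np.polyfit(t, seg, order), t))**2)
                        for seg in segments])     # detrended variance per segment
        for i, q in enumerate(q_values):
            F[i, j] = np.mean(var**(q / 2.0))**(1.0 / q) if q != 0 else np.exp(0.5 * np.mean(np.log(var)))
    return F

def generalized_hurst(scales, F):
    """Slope h(q) of log F_q(s) vs log s for each q."""
    return np.array([np.polyfit(np.log10(scales), np.log10(Fq), 1)[0] for Fq in F])

# Illustrative use on white noise (expected h(q) ~ 0.5 for all q):
x = np.random.randn(2**14)
scales = np.unique(np.logspace(1.5, 3.5, 20).astype(int))
q_values = [-10, -6, -2, 2, 6, 10]
h = generalized_hurst(scales, mfdfa(x, scales, q_values))
print(dict(zip(q_values, np.round(h, 2))))
```

For multifractal data, the estimated slopes would instead decrease systematically with increasing \\(q\\), which is the diagnostic used in the Results section below.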
These trends can also introduce spurious crossovers, as reflected by the log-log plot of \\(F_{q}(s)\\) vs \\(s\\), and prevent reliable estimation of the scaling exponent. Such crossovers indicate the spurious existence of multiple scaling exponents at different scales and shift towards higher time scales with increasing order of polynomial detrending [37]. Thus, it is important to discern correlations that are an outcome of trends from those of the power-law noise. In a recent study [38], a singular-value decomposition (SVD) based approach was proposed to minimize the effect of the various types of trends superimposed on long-range correlated noise. However, SVD is a linear transform and may be susceptible when linearity assumptions are violated. Therefore, we provide a qualitative argument to identify multifractality in wind speed records superimposed with periodic and/or quasiperiodic trends. Multifractality is reflected by a change in the slope of the log-log fluctuation plots with varying \\(q\\) (\\(q\\neq 0\\)) [20]. For a fixed \\(q\\), one observes spurious crossovers in monofractal as well as multifractal data sets superimposed with sinusoidal trends. Thus, nonlinearity or a crossover of the log-log plot for a fixed \\(q\\) might be due to trends as opposed to the existence of multiple scaling exponents at different scales. However, we show (see discussion in Appendix A) that the nature of the log-log plots of \\(F_{q}(s)\\) vs \\(s\\) does not change with varying \\(q\\) for monofractal data superimposed with sinusoidal trends. In contrast, a marked change in the nature of the log-log plots of \\(F_{q}(s)\\) vs \\(s\\) with varying \\(q\\) is observed for multifractal data superimposed with trends. Therefore, in the present study, the log-log plot of \\(F_{q}(s)\\) vs \\(s\\) with varying \\(q\\) is used as a qualitative description of the multifractal structure in the given data. For the wind-speed records considered here, we found the peaks in the power spectrum to be consistent across all the 28 stations; thus any effects of the trends on the multifractal structure would, we believe, be consistent across the 28 stations.

## 3 Results

MF-DFA was applied to the data sets recorded at the 28 stations. The log-log plots of the fluctuation functions (\\(F_{q}(s)\\) vs \\(s\\)) with varying moments (\\(q\\) = -10, -6, -0.2, 2, 6, 10), using fourth order polynomial detrending, for one of the representative records are shown in Fig. 1(c). From Fig. 1(c), it can be observed that the data sets exhibit different scaling behavior with varying \\(q\\), characteristic of a multifractal process. This has to be contrasted with monofractal data, whose scaling behavior is indifferent to the choice of \\(q\\) in the presence or absence of sinusoidal trends. To compute the \\(q\\) dependence of the scaling exponent \\(h(q)\\), we select the time scale in the range [2.2 - 3.7] where the scaling was more or less constant for a given \\(q\\). Note that this corresponds to variations over a time span of \\([10^{2.2}-10^{3.7}]\\sim[158-5012]\\) **hours**, for a given \\(q\\). In this range, the slope of the fluctuation curves \\(h(q)\\) was calculated for every \\(q\\) and for every station. The mean of the generalized exponents \\(h(q)\\) over all the twenty eight stations along with the standard deviation bars are shown in Fig. 2(a). It can be noted from Fig.
2(a) that the slopes \\(h(q)\\) decrease as the moment \\(q\\) varies from negative to positive values, which signifies that wind speed fluctuations are heterogeneous and thus a range of exponents is necessary to completely describe their scaling properties. To capture this notion of multifractality, we estimate the classical Renyi exponents \\(\\tau(q)\\) and the singularity spectrum [40] under the assumption of a binomial multiplicative process [41, 40, 20] (see Appendix A for details).

Figure 2: (a) The mean (circle) and the standard deviation (vertical lines) of the generalized Hurst exponent, \\(h(q)\\) vs \\(q\\), across the 28 stations. (b) The mean (circle) and the standard deviation (vertical lines) of the multifractal spectrum \\(f(\\alpha)\\) across the 28 stations. (c) Histogram of the multifractal widths (\\(\\Delta\\alpha\\)) across the 28 stations. (d) Histogram of the Hurst exponent \\(h(2)\\).

The singularity spectrum of one of the representative stations (Baker 1N) is shown in Fig. 1(d) and its variation across the 28 stations is shown in Fig. 2(b). The fitting parameters \\(a,b\\) for the cascade model, the Hurst exponent \\(h(2)\\) and the multifractal width \\(\\Delta\\alpha\\) for all the stations are summarized in Table 1. These results indicate multifractal scaling consistent across the stations. Earlier studies [42, 43, 20] have suggested the use of randomly shuffled surrogates to determine whether the observed fractal structure is due to correlations as opposed to a broad probability distribution function. The wind speeds in the present study follow a two-parameter asymmetric Weibull distribution whose parameters were also similar across the 28 stations. MF-DFA on the random shuffle surrogates of the original records, Fig. 2(d), indicates a scaling of the form \\(F_{q}(s)\\sim s^{0.5}\\) with varying \\(q\\), characteristic of random noise and loss of multifractal structure. The width of the multifractal spectrum was used to characterize the strength of multifractality. The histogram of the multifractal widths obtained across the 28 stations, Fig. 2(c), was narrow with mean and standard deviation \\(\\Delta(\\alpha)=(0.4866\\pm 0.0599)\\). The multifractal widths and the Hurst exponent \\(h(2)\\) across the twenty eight stations are also shown in Fig. 3.

Figure 3: The multifractal width for each of the 28 stations is indicated by circles. The Hurst exponent \\(h(2)\\) is indicated by upright triangles. The x-y plane represents the x and y coordinates in the state of North Dakota.

In the present study a systematic approach was used to determine possible scaling in the temporal wind-speed records over 28 spatially separated stations in the state of North Dakota. Despite the spatial expanse and contrasting topography, the multifractal qualitative characteristics of the wind speed records, as
\\begin{table} \\begin{tabular}{|c|c|c|c|c|c|} \\hline Station Number & Name & \\(a\\) & \\(b\\) & \\(h(2)\\) & \\(\\Delta\\alpha\\) \\\\ \\hline 1 & Baker 1N & 0.513 & 0.710 & 0.702 & 0.469 \\\\ \\hline 2 & Beach 9S & 0.523 & 0.76 & 0.643 & 0.539 \\\\ \\hline 3 & Bottineau 14W & 0.525 & 0.720 & 0.699 & 0.456 \\\\ \\hline 4 & Carrington 4N & 0.521 & 0.721 & 0.638 & 0.468 \\\\ \\hline 5 & Dazey 2E & 0.551 & 0.722 & 0.609 & 0.390 \\\\ \\hline 6 & Dickinson 1NW & 0.535 & 0.715 & 0.670 & 0.418 \\\\ \\hline 7 & Edgeley 4SW & 0.531 & 0.752 & 0.573 & 0.510 \\\\ \\hline 8 & Fargo 1NW & 0.505 & 0.719 & 0.641 & 0.510 \\\\ \\hline 9 & Forest River 7WSW & 0.556 & 0.739 & 0.670 & 0.411 \\\\ \\hline 10 & Grand Forks 3S & 0.527 & 0.757 & 0.646 & 0.522 \\\\ \\hline 11 & Hazen 2W & 0.544 & 0.733 & 0.672 & 0.430 \\\\ \\hline 12 & Hettinger 1NW & 0.523 & 0.748 & 0.610 & 0.516 \\\\ \\hline 13 & Hillsboro 7SE & 0.522 & 0.751 & 0.620 & 0.526 \\\\ \\hline 14 & Jamestown 10 W & 0.547 & 0.739 & 0.636 & 0.434 \\\\ \\hline 15 & Langdon 1E & 0.545 & 0.710 & 0.644 & 0.381 \\\\ \\hline 16 & Linton 5N & 0.531 & 0.711 & 0.623 & 0.422 \\\\ \\hline 17 & Minot 4S & 0.550 & 0.706 & 0.629 & 0.459 \\\\ \\hline 18 & Oakes 4S & 0.507 & 0.755 & 0.657 & 0.574 \\\\ \\hline 19 & Prosper 5NW & 0.504 & 0.720 & 0.671 & 0.513 \\\\ \\hline 20 & Mohall 1W & 0.513 & 0.753 & 0.654 & 0.554 \\\\ \\hline 21 & Streeter 6NW & 0.521 & 0.742 & 0.675 & 0.509 \\\\ \\hline 22 & Turtle Lake 4N & 0.546 & 0.729 & 0.693 & 0.418 \\\\ \\hline 23 & Watford City 1W & 0.524 & 0.768 & 0.631 & 0.551 \\\\ \\hline 24 & St. Thomas 2WSW & 0.550 & 0.744 & 0.633 & 0.434 \\\\ \\hline 25 & Sidney 1NW & 0.518 & 0.772 & 0.635 & 0.593 \\\\ \\hline 26 & Cavalier 5W & 0.506 & 0.736 & 0.651 & 0.539 \\\\ \\hline 27 & Williston 5SW & 0.514 & 0.759 & 0.635 & 0.562 \\\\ \\hline 28 & Robinson 3NNW & 0.514 & 0.739 & 0.620 & 0.525 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Names and locations of the 28 recording stations. The fitting parameters \\((a,b)\\) of the cascade model, the Hurst exponent \\(h(2)\\) and the multifractal width \\((\\Delta\\alpha)\\) are also indicted. reflected by singularity spectrum, were found to be similar. Thus multifractality may be an invariant feature in describing the dynamics long-term motion of wind speed records in ABL over North Dakota. We also believe that the irregular recurrence and confluence of the air masses from Polar, Gulf of Mexico and the northern Pacific may play an important role in explaining the observed multifractal structure. ## Acknowledgments The financial support from ND-EPSCOR through NSF grant EPS 0132289 is gratefully acknowledged. ## References * [1] J. R. Garratt, _The Atmospheric Boundary Layer_, _Cambridge Univ. Press_, (1994). * [2] A. A. Monin and A. M. Obukhov, _Basic laws of turbulent mixing in the ground layer of the atmosphere_, _Trans. Geophys. Inst. Akad. Nauk. USSR_**151**, 163-187, (1954). * [3] Z. Warhaft, _Turbulence in nature and in the laboratory_, _PNAS_**99**, 2481-2486, (2002). * [4] A. N. Kolmogorov, _The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers_, _Dokl. Acad. Nauk. SSSR_**30**, 301-305, (1941). * [5] A. N. Kolmogorov, _Dissipation of energy in the locally isotropic turbulence_, _Dokl. Akad. Nauk. SSSR_**31**, 538-540, (1941). * [6] B. Mandelbrot, _Intermittent turbulence in self-similar cascades : divergence of high moments and dimension of the carrier_, _J. Fluid Mech._**62**, 331-358, (1974). * [7] G. Parisi and U. 
Frisch, _On the singularity structure of fully developed turbulence_, _Turbulence and Predictability in Geophysical Fluid Dynamics and Climate Dynamics_. (eds: M. Ghil, R. Benzi and G. Parisi) 71, (1985). * [8] C. Meneveau and K. R. Sreenivasan, _Simple multifractal cascade model for fully developed turbulence_, _Phy. Rev. Lett_**59**, 1424-1427, (1987). * [9] C. Meneveau and K. R. Sreenivasan, _The multifractal spectrum of the dissipation field in turbulent flows_, _Nuclear Physics B (Proc. Suppl.)_**2**, 49-76, (1987). * [10] C. Meneveau and K. R. Sreenivasan, _The multifractal nature of turbulent energy dissipation_, _Journal of Fluid Mechanics_**224**, 429-484, (1991) * [11] F. Argoul, _Wavelet analysis of turbulence reveals the multifractal nature of the Richardson cascade_, _Nature_**338**, 51-53, (1989). * [12] S. Lovejoy, D. Schertzer and J. D. Stanway, _Direct evidence of multifractal atmospheric cascades from planetary scales down to 1 km_, _Phy. Rev. Lett_**86**, 5200-5203, (2001). * [13] F. Ding, S. Pal Arya and Y. L. Lin, _Large eddy simulations of atmospheric boundary layer using a new sub-grid scale model_, _Environmental Fluid Mechanics_**1**, 49-69, (2001). * [14] C-H. Moeg, _A large eddy simulation model for the study of planetary boundary layer_, _J. Atmos. Sci._**41**, 2052-2062, (1984). * [15] J. Peinke, S. Barth, F. Bottcher, D. Heinemann and B. Lange, _Turbulence, a challenging problem for wind energy_, _Physica A_, **338**, 187-193, (2004). * [16] B. M. Schulz, M. Schulz and S. Trimper, _Wind direction and strength as a two dimensional random walk_, _Physics Letters A_, **291**, 87-91, (2001). * [17] H. Kantz, D. Holstein, M. Ragwitz and N. K. Vitanov, _Markov chain model for turbulent wind speed data_, _Physica A_, **342**, 315-321, (2004). * [18] R. B. Govindan and H. Kantz, _Long term correlations and multifractality in surface wind_, _Europhysics Letters_, **68**, 184-190, (2004). * [19] C. K. Peng _et.al._, _Mosaic organization of DNA nucleotides_, _Phys. Rev. E_**49**, 1685-1689, (1994) * [20] J. W. Kantelhardt, S. A. Zschiegner, S. Havlin, A. Bunde and H. E. Stanley, _Multifractal detrended fluctuation analysis of nonstationary time series_, _Physica A_**316**, 87-114, (2002). * [21] M. S. Santhanam and H. Kantz, _Long range correlations and rare events in boundary layer wind fields_, _Physica A_, **345**, 713-721, (2005). * [22] M. K. Lauren, M. Menabde and G. L. Austin, _Analysis and simulation of surface layer winds using multiplicative cascaded models with self similar probability densities_, _Boundary Layer Meteorology_, **100**, 263-286, (2001). * 2262, (2004). * [24] G. Vattay and A. Harnos, _Physical Review Letters_, _Scaling behavior in daily air humidity fluctuations_, **73**(5), 768-771, 1994. - Interdisciplinary Applications_ - edited by M. Gell-Mann and C. Tsallis, New York Oxford University Press, (2003). * 173, (2005). * [27] A. Papoulis, _Random Variables and Stochastic Processes_, _Mc Graw Hill_ (1994). * [28] P. Bak, C. Tang and K. Wiesenfeld, _Self-organized criticality: an explanation of 1/f Noise_, _Phys. Rev. Lett._**59**, 381-384, (1987). * [29] P. Manneville, _Intermittency, self-similarity and 1/f spectrum in dissipative dynamical systems_, _Journal de Physique_**41**, 1235-1243, (1980). * [30] J. M. Haussdorf and C-K. Peng, _Multiscaled randomness: A possible source of 1/f scaling in biology_, _Physical Review E_**54**, 2154-2157, (1994). * [31] H. E. Hurst, _Long-term storage capacity of reservoirs_, _Trans. Amer. Soc. Civ. 
Engrs._**116**, 770-808, (1951). * [32] P. Abry and D. Veitch, _Wavelet analysis of long-range-dependent traffic_, _IEEE Trans. on Information Theory_**44**, 2-15, (1998). * [33] J. W. Kantelhardt _et al._, _Multifractality of river runoff and precipitation: Comparison of fluctuation analysis and wavelet methods_, _Physica A_**33**, 240-245, (2003). * [34] V. Livina _et al._, _Physica A_**330**, 283-290, (2003). * [35] S. Havlin _et al._, _Physica A_**274**, 99-110, (1999). * [36] Y. Ashkenazy _et al._, _Phys. Rev. Lett._**86**, 1900-1903, (2001). * [37] J. W. Kantelhardt _et al._, _Detecting long-range correlations with detrended fluctuation analysis_, _Physica A_**295**, 441-454, (2001). * [38] R. Nagarajan and R. G. Kavasseri, _Physica A_**354**, 182-198, (2005). * [39] K. Hu _et al._, _Effects of trends on detrended fluctuation analysis_, _Phys. Rev. E_**64**, 011114:1-19, (2001). * [40] J. Feder, _Fractals_, Plenum Press, New York (1988). * [41] A. Barabasi and T. Vicsek, _Multifractality of self-affine fractals_, _Phys. Rev. A_**44**, 2730-2733, (1991). * [42] B. Mandelbrot and J. Wallis, _Noah, Joseph and operational hydrology_, _Water Resources Research_**4**, 909-918, (1968). * [43] P. Ch. Ivanov _et al._, _Multifractality in human heartbeat dynamics_, _Nature_**399**, 461-465, (1999). * [44] J. Levy Vehel and R. Reidi, _Fractional Brownian motion and data traffic modeling: The other end of the spectrum_, _Fractals in Engineering_, (Eds. J. Levy Vehel, E. Lutton and C. Tricot), Springer Verlag, (1996).

## Appendix A: Data Acquisition, MF-DFA algorithm and Discussion

### Data Acquisition

The wind speed records at the 28 stations, spanning nearly 8 years, were obtained from the climatological archives of the state of North Dakota. Stations were selected to represent the general climate of the surrounding area. Wind speeds were recorded by means of conventional cup-type anemometers located at a height of 10 ft. The anemometers have a range of 0 to 100 mph with an accuracy of \(\pm 0.25\) mph. Wind speeds acquired every five seconds are averaged over a 10 minute interval to compute the 10 minute average wind speed. The 10 minute average wind speeds are further averaged over a period of one hour to obtain the hourly average wind speed.

### Multifractal Detrended Fluctuation Analysis (MF-DFA)

MF-DFA [20], a generalization of DFA, has been shown to reliably extract more than one scaling exponent from a time series. A brief description of the algorithm is provided here for completeness; a detailed explanation can be found elsewhere [20]. Consider a time series \(\{x_{k}\},k=1\ldots N\). The MF-DFA algorithm consists of the following steps. 1. The series \(\{x_{k}\}\) is integrated to form the integrated series \(\{y_{k}\}\) given by \[y(k)=\sum_{i=1}^{k}[x(i)-\bar{x}]\ \ \ k=1,\ldots N\] (1) where \(\bar{x}\) represents the average value. 2. The series \(\{y_{k}\}\) is divided into \(n_{s}\) non-overlapping boxes of equal length \(s\), where \(n_{s}=int(N/s)\). To accommodate the fact that some of the data points may be left out, the procedure is repeated from the other end of the data set [20], giving \(2n_{s}\) segments in total. 3. A local polynomial trend \(y_{v}\) of order \(m\) is fit to the data in each box \(v\); the corresponding variance is given by \[F^{2}(v,s)=\frac{1}{s}\sum_{i=1}^{s}\left\{y[(v-1)s+i]-y_{v}(i)\right\}^{2}\] (2) for \(v=1,\ldots n_{s}\), and by the analogous expression \(F^{2}(v,s)=\frac{1}{s}\sum_{i=1}^{s}\left\{y[N-(v-n_{s})s+i]-y_{v}(i)\right\}^{2}\) for the segments \(v=n_{s}+1,\ldots,2n_{s}\) obtained from the other end of the record. Polynomial detrending of order \(m\) is capable of eliminating trends up to order \(m-1\) [20].
The \\(qth\\) order fluctuation function is calculated from averaging over all segments. \\[F_{q}(s)=\\left\\{\\frac{1}{2n_{s}}\\sum_{i=1}^{i=2n_{s}}[F^{2}(v,s)]^{q/2}\\right\\}^{ 1/q}\\] (3) In general, the index \\(q\\) can take any real value except zero. 5. Step 3 is repeated over various time scales \\(s\\). The scaling of the fluctuation functions \\(F_{q}(s)\\) versus the time scale \\(s\\) is revealed by the log-log plot. 6. The scaling behavior of the fluctuation functions is determined by analyzing the log-log plots \\(F_{q}(s)\\) versus \\(s\\) for _each_\\(q\\). If the original series \\(\\{x_{k}\\}\\) is power-law correlated, the fluctuation function will vary as \\[F_{q}(s)\\sim s^{h(q)}\\] (4) The MF-DFA algorithm [20] was used to compute the multifractal fluctuation functions. The slopes of the fluctuation functions (\\(h(q)\\)) for each \\(q=(-10,-6,-0.2,2,6,10)\\) was estimated by linear regression over the time scale range \\(s=[2.2,3.7]\\). The generalized Hurst exponents (\\(h(q)\\)) are related to the Renyi exponents \\(\\tau(q)\\) by \\(qh(q)=\\tau(q)+1\\). The multifractal spectrum \\(f(\\alpha_{h})\\) defined by, [40]\\(\\alpha_{h}=\\frac{d\\tau(q)}{dq},\\ \\ f(\\alpha_{h})=q\\alpha_{h}-\\tau(q)\\). Under the assumption of a binomial multiplicative cascade model [40] the generalized exponents \\(h(q)\\) can be determined from \\(h(q)=\\frac{1}{q}-\\frac{ln(a^{q}+b^{q})}{qln2}\\). The parameters \\(a\\) and \\(b\\) for each station was determined using a nonlinear least squares fit of the preceding formula with those calculated numerically. Finally, the multifractal width was calculated using \\(\\Delta\\alpha=\\Delta\\alpha_{h}=h(-\\infty)-h(\\infty)=\\frac{(ln(b)-ln(a))}{ln2}\\), [20]. ### Discussion The power spectrum of the wind speed records considered in the present study exhibited a power law decay of the form (\\(S(f)\\sim 1/f^{\\beta}\\)) superimposed with prominent peaks which corresponds to multiple sinusoidal trends. Such a behavior is to expected on data sets spanning several years. The nature of the power spectral signature was consistent across all stations. This enables us to compare the nature of scaling across the 28 stations. Recent studies had indicated the susceptibility of MF-DFA to sinusoidal trends in the given data, [39]. Sinusoidal trends can give rise to spurious crossovers and nonlinearity in the log-log plot that indicate the existence of more than one scaling exponent. In a recent study [38], we had successfully used the singular-value decomposition to minimize the effect of offset, power-law, periodic and quasi-periodic trends. However, SVD is a linear transform and may be susceptible when linearity assumptions are violated. Estimating the SVD for large-embedding matrices is computationally challenging. Therefore, in the present study we opted for a qualitative description of multifractal structure by inspecting the nature of the log-log plots of the fluctuation function versus time scale \\(F_{q}(s)\\) vs \\(s\\) with varying moments \\(q\\). We show that the nature of the log-log plot does not show appreciable change with varying moments \\(q\\) for monofractal data superimposed with sinusoidal trends. However, a marked change in the nature of the log-log plot is observed for multifractal data superimposed with sinusoidal trends. Moreover, the nature of the trends as reflected by the power spectrum is consistent across the 28 stations. This enables us to compare the multifractal description obtained across the stations. 
The effectiveness of the qualitative description is demonstrated with synthetic monofractal and multifractal data sets superimposed with sinusoidal trends.

#### a.3.1 MF-DFA results of monofractal and multifractal data superimposed with sinusoidal trends

Consider a signal \(y_{1}(n)\) consisting of monofractal data \(s_{1}(n)\) superimposed with a sinusoidal trend \(t_{1}(n)\). Let \(y_{2}(n)\) be a signal consisting of multifractal data \(s_{2}(n)\) superimposed with a sinusoidal trend \(t_{2}(n)\). The trends are described by \[t_{1}(n)=A_{1}\sin(2\pi n/T_{1})+A_{2}\sin(2\pi n/T_{2})+A_{3}\sin(2\pi n/T_{3}),\ n=1\ldots N_{1}\] \[t_{2}(n)=B_{1}\sin(2\pi n/T_{1b})+B_{2}\sin(2\pi n/T_{2b}),\ n=1\ldots N_{2}\] The signals are given by \(y_{i}(n)=s_{i}(n)+t_{i}(n),\ i=1,2\), where \(s_{1}(n)\) is monofractal data with \(\alpha=0.9\) and \(s_{2}(n)\) is multifractal data taken from internet log traffic [44]. The dominant spectral peaks in Fig. 4(a) and Fig. 4(b) reflect the presence of these trends in signals \(y_{1}\) and \(y_{2}\) respectively. The MF-DFA plots of \(F_{q}(s)\) vs \(s\) with fourth-order detrending and \(q=-10,-8,-6,-4,-2,2,4,6,8,10\) for signals \(y_{1}\) and \(y_{2}\) are shown in Fig. 4(c) and Fig. 4(d) respectively. For the monofractal data, the trends introduce a spurious crossover at \(\log_{10}s\sim 2.2\) in the log-log plot of \(F_{q}(s)\) vs \(s\) for a given \(q\); however, the nature of the log-log plots fails to show appreciable change with varying \(q\) (Fig. 4(c)). For the multifractal data with trends, spurious crossovers are still noted at \(\log_{10}s\sim 2.2\) in the log-log plot of \(F_{q}(s)\) vs \(s\) for a given \(q\); in this case, however, the nature of the log-log plots shows a significant change with varying \(q\) (Fig. 4(d)), indicating multifractal scaling in the given data, unlike the case of monofractal data with trends.

**Parameters:** \(A_{1}=6,\ A_{2}=3,\ A_{3}=2,\ T_{1}=2^{6},\ T_{2}=2^{4},\ T_{3}=2^{2},\ N_{1}=2^{17},\ B_{1}=6000,\ B_{2}=3000,\ T_{1b}=2^{6},\ T_{2b}=2^{4},\ N_{2}=2^{15}\). The data sets are available from http://www.physionet.org/physiobank/database/synthetic/tns/.

Figure 4: MF-DFA studies of monofractal and multifractal data sets superimposed with multiple sinusoidal trends.
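The construction of these test signals can be sketched as follows, reusing the illustrative `mfdfa` helper from the previous sketch. The trend parameters are those quoted above; the monofractal (\(\alpha=0.9\)) and multifractal (network traffic) series themselves are assumed to have been downloaded from the PhysioNet link, and the file names used here are placeholders.

```python
# Sketch of the synthetic test in A.3.1: sinusoidal trends added to monofractal
# and multifractal series. Parameters follow the values quoted in the text; the
# fractal series are assumed to be loaded from the PhysioNet files (names below
# are hypothetical placeholders).
import numpy as np

N1, N2 = 2**17, 2**15
n1, n2 = np.arange(N1), np.arange(N2)

t1 = 6*np.sin(2*np.pi*n1/2**6) + 3*np.sin(2*np.pi*n1/2**4) + 2*np.sin(2*np.pi*n1/2**2)
t2 = 6000*np.sin(2*np.pi*n2/2**6) + 3000*np.sin(2*np.pi*n2/2**4)

s1 = np.loadtxt("mono_alpha09.txt")[:N1]            # placeholder file names for the
s2 = np.loadtxt("multifractal_traffic.txt")[:N2]    # downloaded PhysioNet series

y1, y2 = s1 + t1, s2 + t2

q_values = np.array([-10, -8, -6, -4, -2, 2, 4, 6, 8, 10])
scales = np.unique(np.logspace(1.0, 4.0, 30).astype(int))
Fq_mono = mfdfa(y1, scales, q_values)    # log-log slopes barely change with q
Fq_multi = mfdfa(y2, scales, q_values)   # log-log slopes change markedly with q
```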
The complexity of the atmosphere endows it with the property of turbulence, by virtue of which wind speed variations in the atmospheric boundary layer (ABL) exhibit highly irregular fluctuations that persist over a wide range of temporal and spatial scales. Despite the large and significant body of work on microscale turbulence, understanding the statistics of atmospheric wind speed variations has proved to be elusive and challenging. Knowledge about the nature of wind speed in the ABL has a far-reaching impact on several fields of research such as meteorology, hydrology, agriculture, pollutant dispersion, and, more importantly, wind energy generation. In the present study, temporal wind speed records from twenty-eight stations distributed throughout the state of North Dakota (ND, USA; \(\sim\) 70,000 square miles) and spanning a period of nearly eight years are analyzed. We show that these records exhibit a characteristic broad multifractal spectrum irrespective of the geographical location and topography. The rapid progression of air masses with distinct qualitative characteristics, originating from the Polar regions, the Gulf of Mexico and the Northern Pacific, accounts for irregular changes in the local weather system in ND. We hypothesize that one of the primary reasons for the observed multifractal structure could be the irregular recurrence and confluence of these three air masses. **Keywords:** wind speed, self-similarity, multifractal scaling, atmosphere, boundary layer
# Rainfall Advection using Velocimetry by Multiresolution Viscous Alignment 1 Footnote 1: This material is supported in part by NSF ITR 0121182 and DDDAS 0540259. Sai Ravela Earth, Atmospheric and Planetary Sciences Virat Chatdaarong Civil and Environmental Engineering Massachusetts Institute of Technology [email protected] April 10, 2006 ## 1 Introduction Environmental data assimilation is the methodology for combining imperfect model predictions with uncertain data in a way that acknowledges their respective uncertainties. The proper framework for state estimation includes sequential [15], ensemble-based [14] and variational [20, 5] methods. The difficulties created by improperly represented error are particularly apparent in mesoscale meteorological phenomena such as thunderstorms, squall-lines, hurricanes, precipitation, and fronts. We are particularly interested in rainfall data-assimilation, where rainfall measurements from satellite data, radar data, or in-situ measurements are used to condition a rainfall model. Such conditional simulations are valuable both for producing estimates at the current time (nowcasting), as well as for short-term forecasting. There are a countless number of models developed to simulate the rainfall process. In general, there are two types of models that can deal with spatial and temporal characteristics of rainfall. The first category is the meteorological model or the quantitative precipitation forecasting model. It involves a large, complex set of differential equations seeking to represent complete physical processes controlling rainfall and other weather related variables. Examples of these models include the fifth-generation Mesoscale Model (MM5) [3, 4, 16], the step-mountain Eta coordinate model [1, 2, 13], and the Regional Atmospheric Modeling System (RAMS) [7, 12], etc. The second type is the spatiotemporal stochastic rainfall model. It aims to summarize the spatial and temporal characteristics of rainfall by a small set of parameters [6, 18, 11, 8, 22, 25]. This type of model usually simulates the birth and decay of rain-cells and evolve them through space and time using simple physical descriptions. Despite significant differences among these rainfall models, the concept of propagating rainfall through space and time are relatively similar. The major ingredient required to advect rainfall is a velocity field. Large spatial-scale (synoptic) winds are inappropriate for this purpose for a variety of reasons. Ironically, synoptic observations can be sparse to be used directly and although synoptic-scale wind analyses produced from them (and models) do produce dense spatial estimates, such estimates often do not contain variability at the meso-scales of interest. The motion of mesoscale convective activity is a natural source for velocimetry. Indeed, there exist products that deduce \"winds\" by estimating the motion of temperature, vapor and other fields evolving in time [9, 10]. In this paper, we present an algorithm for velocimetry from observed motion from satellite observations such as GOES, AMSU, TRMM, or radar data such as NOWRAD. This algorithm follows from a Bayesian formulation of the motion estimation problem, where a dense displacement field is estimated from two images of cloud-top temperature of rain-cells separated in time. Ordinarily, the motion estimation problem is ill-posed, because the displacement field has far too many degrees of freedom than the motion. 
Therefore, some form of regularization becomes necessary and by imposing smoothness and non-divergence as desirable properties of the estimated displacement vector field solutions can be obtained. This approach provides marked improvement over other methods in conventional use. In contrast to correlation based approaches used for deriving velocity from GOES imagery, the displacement fields are dense, quality control is implicit, and higher-order and small-scale deformations can be easily handled. In contrast with optic-flow algorithms [21, 17], we can produce solutions at large separations of mesoscale features between large time-steps or where the deformation is rapidly evolving. After formulating the motion estimation problem and providing a solution, we extend the algorithm using a multi-resolution procedure. The primary advantage of a multi-resolution approach is to produce displacement fields quickly. The secondary advantage is to structure the estimation homotopically; coarse or low-frequency information is used first to produce velocity estimates over which deformation adjustments from finer-scale structures is superposed. The result is a powerful algorithm for velocimetry by alignment. As such, it is useful in a variety of situations including, for example, (a) estimating winds, (b) estimating transport of tracers, (c) Particle Image Velocimetry, (d) Advecting Rainfall models etc. ## 2 Related Work There are two dominant approaches to computing flow from observations directly. The first is correlation-based and the second is based on optic flow. In correlation based approaches [19], a region of interest (or patch) is identified in the first image and correlated within a search window in the second image. The location of the best match is then used to compute a displacement vector. When the input image or field is tiled, possibly overlapping, and regions of interest are extracted from each tile location, the result is velocimetry at regular intervals and is most commonly used for Particle Image Velocimetry (PIV). In certain instances it is useful to define interest-points or salient features around which to extract regions of interest. In particular, if the field has many areas with negligible spatial variability, then matches are undefined. As a quality control measure then, matching is restricted only to those regions of interest that have interesting variability, or interest points. There are several disadvantages to correlation-based approaches. First, by construction it is assumed that the entire ROI purely translates from one image to the other. This is not always the case, but is a reasonable approximation when the right length scale can be found. However, when higher-order deformations (shears for example) are present, correlation based approaches cannot be expected to work well. Second, correlation based approaches assume that a unique match can be found in a way that is substantially better than correlation elsewhere. This is only true if the features are well-defined and identified. Third, there is no implicit consistency across regions of interest in correlation-based flow. Neighboring regions of interest can and often do match at wildly different and inconsistent locations. This calls for a significant overhead in terms of quality control. Fourth, it is not clear how the search window size (that is the area over which a region of interest is matched in the subsequent frame) is determined. 
This window size varies both in space (as the velocity varies spatially) and time (as velocity varies with time). A larger search window portends a larger probability to miss the real target, and a smaller search window can lead to false negatives or false positives. Finally, where interest points are used as a preprocessing step to correlation, the velocity field produced is necessarily sparse, and therefore, leaves hanging the question of how to produce dense flow fields. Our proposed algorithm handles all these issues in a simple and direct way. More closely related to the proposed approach is optic flow [21, 17]. This method arises from what is known as the brightness constraint equation, which is a statement of conservation of brightness (intensity) mass, expressed by the continuity equation evaluated at each pixel or grid node of \\(X\\). \\[\\frac{\\partial X}{\\partial t}+\\mathbf{q}\\cdot\ abla X=0 \\tag{1}\\] Here \\(X\\) is the brightness or intensity scalar field and \\(\\mathbf{q}\\) a displacement vector-field. Solutions to the optic flow equation can be formulated using the well-known method by [21], which can be stated as a solution to the following system of equations: \\[(\ abla X)(\ abla X)^{T}\\mathbf{q}=-(\ abla X)\\frac{\\partial X}{\\partial t} \\tag{2}\\] The right-hand side is completely determined from a pair of images and the coefficient or stiffness matrix on the left-hand side is the second-derivative of the auto correlation matrix, also known as the windowed second-moment matrix, or Harris interest operator, which is sensitive to \"corners\" in an image. This formulation arises directly from a quadratic formulation, which can in turn be synthesized from a Bayesian formulation under a Gaussian assumption. Thus, we can write that we seek to minimize \\[J(\\mathbf{q})=\\left|\\left|X(\\mathbf{r}-\\mathbf{q})-Y\\right|\\right| \\tag{3}\\]Then solve this problem via the Euler-Lagrange equation: \\[\\frac{\\partial J(\\mathbf{q})}{\\partial\\mathbf{q}} = \ abla X|_{\\mathbf{r}-\\mathbf{q}}(X(\\mathbf{r}-\\mathbf{q})-Y)=0 \\tag{4}\\] The solution is obtained by _linearizing_ (4), that is, \\[\ abla X|_{\\mathbf{r}-\\mathbf{q}}(X(\\mathbf{r})-\ abla X\\cdot \\mathbf{q}-Y) = 0\\] \\[\ abla X(\ abla X)^{T}\\mathbf{q} = -\ abla X(Y-(X(\\mathbf{r})) \\tag{5}\\] There are several disadvantages to this algorithm. First, much like correlation with feature detection, equation 5 is evaluated at pixels where the second-moment matrix is full-rank, which corresponds to locations where features are present. There is no clear way of propagating information obtained at sparse locations to locations where direct computation of displacement is not possible due to poor conditioning of the second-moment matrix. For the same reason, it cannot handle tangential flows. The brightness constraint equation can only represent flows along brightness streamlines. When tangential motion is present, detected motion at extreme ends a moving curve cannot be propagated easily into the interior. Our method provides some degree of spatial smoothness common in geophysical fluid transport, and uses regularization constraints to propagate flow information to nodes where feature strengths are weak. Second, the linearization implicit in (5) precludes large displacements; structures must be closely overlapping in successive images, which can also be seen from the continuity equation (1). 
Therefore, this method is very useful for densely sampled motion, such as ego-motion resulting from a moving, jittering camera, but is not as useful for sparsely sampled flow arising from structures moving in a scene. In the latter case, to ameliorate the effects of large expected displacement, multi-resolution approaches have been proposed. Even so, much like determining the size of the search window in correlation, determining the number of resolutions is an ad-hoc procedure. Our method can handle large displacements and we also propose a multi-resolution approach, but the primary motivation there is improved computational speed. ## 3 Velocimetry by Field Alignment The main approach consists of solving a nonlinear quadratic estimation problem for a field of displacements. Solutions to this problem are obtained by regularizing the an ill-posed inverse problem. The material presented in this section is derived directly from work by Ravela [24], and Ravela et al. [23]. Here we reformulate their original formulation to allow only position adjustments. To make this framework more explicit it is useful to introduce some notation. Let \\(X=X(\\mathbf{r})=\\{X[\\underline{r}_{1}^{T}]\\ldots X[\\underline{r}_{m}^{T}]\\}\\) be the first image, written as a vector, defined over a spatially discretized computational grid \\(\\Omega\\), and \\(\\mathbf{r}^{\\mathbf{T}}=\\{\\underline{r}_{i}=(x_{i},y_{i})^{T},i\\in\\Omega\\}\\) be the position indices. Let \\(\\mathbf{q}\\) be a _vector_ of displacements, that is \\(\\mathbf{q^{T}}=\\{\\underline{q}_{i}=(\\Delta x_{i},\\Delta y_{i})^{T},i\\in\\Omega\\}\\). Then the notation \\(X(\\mathbf{r}-\\mathbf{q})\\) represents _displacement_ of \\(X\\) by \\(\\mathbf{q}\\). The displacement field \\(\\mathbf{q}\\) is real-valued, so \\(X(\\mathbf{r}-\\mathbf{q})\\) must be evaluated by interpolation if necessary. It is important to understand that this displacement field represents a warping of the underlying grid, whose effect is to move structures in the image around, see Figure 1. In a probabilistic sense, we may suppose that finding \\(\\mathbf{q}\\) that has the maximum a posteriori probability in the distribution \\(P(\\mathbf{q}|\\mathcal{X},\\mathcal{Y})\\) is appropriate. Without loss of generality, \\(\\mathcal{X}\\) is a random variable corresponding to the image or field at a given time and \\(\\mathcal{Y}\\) is random variable for a field at a future time. Using Bayes rule we Figure 1: A graphical illustration of field alignment. State vector on a discretized grid is moved by deforming its grid (\\(\\mathbf{r}\\)) by a displacement (\\(\\mathbf{q}\\)). obtain \\(P(Q={\\bf q}|{\\cal X}=X,{\\cal Y}=Y)\\propto P({\\cal Y}=Y,{\\cal X}=X|{\\bf q})P({\\bf q})\\). If we make a Gaussian assumption of the component densities, we can write: \\[P(X,Y|{\\bf q})=\\frac{1}{(2\\pi)^{\\frac{n}{2}}\\left|R\\right|^{\\frac{1}{2}}}e^{- \\frac{1}{2}(Y-X({\\bf r}-{\\bf q}))^{T}R^{-1}(Y-X({\\bf r}-{\\bf q}))} \\tag{6}\\] This equation says that the observations separated in time can be related using a Gaussian model to the displaced state X(**r**- **q**), where X(**r**) is defined on the original grid, and **q** is a displacement field. We use the linear observation model here, and therefore, \\(Y=HX({\\bf r}-{\\bf q})+\\eta,\\eta\\sim N(0,R)..\\) We should emphasize here that the observation vector is fixed. It's elements are always defined from the original grid. 
In fully observed fields, H is an identity matrix, and for many applications R, reflecting the noise in the field, can also be modeled as an identity matrix. \\[P({\\bf q})=\\frac{1}{C}e^{-L({\\bf q})} \\tag{7}\\] This equation specifies a _displacement prior_. This prior is constructed from an energy function \\(L({\\bf q})\\) which expresses constraints on the displacement field. The proposed method for constructing \\(L\\) is drawn from the nature of the expected displacement field. Displacements can be represented as smooth flow fields in many fluid flows and smoothness naturally leads to a Tikhonov type formulation [26] and, in particular, \\(L(\\mathbf{q})\\) is designed as a gradient and a divergence penalty term. These constraints, expressed in quadratic form are: \\[L(\\mathbf{q})=\\frac{w_{1}}{2}\\sum_{j\\in\\Omega}\\mathbf{tr}\\{[\ abla\\underline{q}_{ j}][\ abla\\underline{q}_{j}]^{T}\\}+\\frac{w_{2}}{2}\\sum_{j\\in\\Omega}[\ abla \\cdot\\underline{q}_{j}]^{2} \\tag{8}\\] In Equation 8, \\(\\mathbf{q}_{j}\\) refers to the \\(j^{th}\\) grid index and \\(\\mathbf{tr}\\) is the trace. Equation 8 is a _weak constraint_, weighted by the corresponding weights \\(w_{1}\\) and \\(w_{2}\\). Note that the constant C can be defined to make Equation 7 a proper probability density. In particular, define \\(Z(\\mathbf{q})=e^{-L(\\mathbf{q})}\\) and define \\(C=\\int\\limits_{\\mathbf{q}}Z(\\mathbf{q})d\\mathbf{q}\\). This integral exists and converges. With these definitions of probabilities, we are in a position to construct an objective by evaluating the log probability. We propose a solution using Euler-Lagrange equations. Defining \\(\\mathbf{p}=\\mathbf{r}-\\mathbf{q}\\)These can be written as: \\[\\frac{\\partial J}{\\partial\\mathbf{q}} = \ abla X|_{\\mathbf{p}}H^{T}R^{-1}\\left(H\\ X\\left(\\mathbf{p} \\right)-Y\\right)+\\frac{\\partial L}{\\partial\\mathbf{q}}=0 \\tag{9}\\] Using the regularization constraints ( 9) at a node \\(i\\) now becomes: \\[w_{1}\ abla^{2}\\underline{q}_{i}+w_{2}\ abla(\ abla\\cdot\\underline{q}_{i})+ \\left[\ abla X^{fT}|_{\\mathbf{p}}H^{T}R^{-1}\\left(H\\left[X^{f}\\left(\\mathbf{p }\\right)\\right]-Y\\right)\\right]_{i}=0 \\tag{10}\\] Equation 10 is the field alignment formulation. It introduces a forcing based on the residual between the model- and observation-fields. The constraints on the displacement field allow the forcing to propagate to a consistent solution. Equation 10 is also non-linear, and is solved iteratively, as a Poisson equation. During each iteration \\(\\mathbf{q}\\) is computed by holding the forcing term constant. The estimate of displacement at each iteration is then used to deform a copy of the original forecast model-field using bi-cubic interpolation for the next iteration. The process is repeated until a small displacement residual is obtained, the misfit with observations does not improve, or an iteration limit is reached. Upon convergence, we have an aligned image \\(X(\\mathbf{\\hat{p}})\\), and a displacement field \\(\\mathbf{\\hat{q}}=\\sum\\limits_{d=1}^{N}q^{(d)}\\), for individual displacements \\(q^{(d)}\\) at iterations \\(d=1\\ldots D\\) ### Multi-resolution Alignment and Velocimetry The convergence of solution to the alignment equation is super-linearly dependent on the expected displacement between the two fields. Therefore, it is desirable to solve it in a coarse-to-fine manner, which serves two principal advantages. 
The first, as the following construction will show, is to substantially speed up the time to alignment, because decimated (or coarse-resolution) representations of a pair of fields have a smaller expected displacement than a pair at finer resolution. Second, decimation or resolution reduction also implies that finer structure or higher spatial frequencies will be attenuated. This smoothness in the coarsened-field intensities directly translates to smoothness in the flow fields via (9). Thus, a coarse-to-fine method for alignment can incrementally add velocity contributions from higher frequencies; that is, it incrementally incorporates higher-order variability in the displacement field. Many of the advantages of a multi-resolution approach have been previously explored in the context of visual motion estimation, including the famous pyramid algorithm and architecture for matching and flow, and our implementation borrows from this central idea. The multi-resolution algorithm is depicted in Figure 2 for two levels.

Figure 2: The multi-resolution algorithm is shown for two levels and requires five steps, labeled (1) through (5). See text for explanation.

The input images \(X\) and \(Y\) are decimated to generate coarse-resolution images \(X_{1}\) and \(Y_{1}\) respectively (step 1). Let us suppose that this scaling is by a factor of \(0<s<1\) (most commonly \(s=0.5\)). Displacement is computed for this level first; let us call this \(\mathbf{\hat{q}_{1}}\) (step 2). This displacement field is rescaled to level 0, using simple (bicubic) interpolation, to produce a prior estimate of displacement at level 0, written \(\mathbf{\hat{q}_{10}}=s^{-1}\mathbf{\hat{q}_{1}}(s^{-1}\mathbf{r})\) (step 3). The source image at level 0, that is, \(X_{0}=X\), is displaced by \(\mathbf{\hat{q}_{10}}\) (step 4), and thus \(X(\mathbf{r}-\mathbf{\hat{q}_{10}})\) is aligned with \(Y_{0}\) to produce a displacement estimate \(\mathbf{\hat{q}_{0}}\) (step 5). The total displacement relating the source image \(X\) to the target field \(Y\) is simply \(\mathbf{\hat{q}_{0}}+\mathbf{\hat{q}_{10}}\). Multiple levels of resolution can be implemented from this framework recursively.

## 4 Example

Figure 3: CIMSS Winds derived from GOES data at 2006-04-06-06Z (left) and pressure (right). The velocity vectors are sparse and contain significant divergence.

The performance of this algorithm is illustrated in a velocimetry computation. For comparison, we use CIMSS satellite wind data [10], depicted in Figure 3 and Figure 4, obtained from the CIMSS analysis on 2006-06-04 at 06Z and 09Z respectively. The CIMSS wind data are shown over the US Great Plains and were obtained from the 'sounder.' The red dots indicate the original locations of the data. The left subplots show wind speed (in degrees/hr). The right ones show pressure, and the locations of raw measurements in red. It can be seen in the maps shown in Figure 3 and Figure 4 that the current method used to produce winds generates sparse vectors and, further, has substantial divergence. Whilst this can be thought of as accurately representing turbulence, in reality these vectors are more likely the result of weak quality control.

Figure 4: CIMSS Winds derived from GOES data at 2006-04-06-09Z (left) and pressure (right). The velocity vectors are sparse and contain significant divergence.

The primary methodology used here is to identify features in an image, extract regions of interest around them and search for them in subsequent frames.
This, by definition produces sparse velocity estimates (features are sparse), leaving unanswered how to systematically incorporate appropriate spatial interpolation functions for the velocity. Since regions of interest are essentially treated as being statistically independent, mismatches can produce widely varying displacement vectors. Such mis-matches can easily occur in correlation based approaches when the features are not distinguishing or substantial deformations occur from one time to another in a region of interest. A more detailed discussion is presented in Section 2. In contrast, our method produces dense flow fields, and quality control is implicit from regularization constraints. Figure 5(a,b) shows a pair of NOWRAD images at 2006-06-01-0800Z and 2006-06-01-0900Z respectively, and the computed flow field in Figure 5(c). Similarly, Figure 5(d,e,f) show the GOES images and velocity from the same time frame over the deep convective rainfall region in the Great Plains example. The velocities are in good agreement with CIMSS derived winds where magnitudes are concerned, but the flow-fields are smooth and visual confirmation of the alignment provides convincing evidence that they are correct. ## 5 Conclusions Our method is a Bayesian perspective of the velocimetry problem. It has several distinct advantages: (a) It is useful for a wide range of observation modalities. (b) Our approach does not require features to be identified for computing velocity. This is a significant advantage because features cannot often be clearly delineated, and are by definition sparse. (c) Our approach implicitly uses quality control in terms of smoothness, and produces dense flow-fields. (d) our approach can be integrated easily with current operational implementations, thereby making this effort more likely to have a real impact. Finally, it should be noted that the regularization constraint in field alignment is a weak constraint and the weights determine how strongly the constraints influence the flow field. The constraint in \\(L\\) is modeled as such because we expect the fluid flow to be smooth. From a regularization point of view, there can be other choices [27] as well. The proposed method can be used for a variety of velocimetry applications including PIV, velocity from tracer-transport, and velocity from GOES and other satellite data, and an application of this is to advect rain-cells produced by a rainfall model, with realistic wind-forcing. ## References * [1] T. L. Black. The new nmc moesoscale eta model: Description and forecast examples. _Weather and Forecasting_, 9(2):265-278, 1994. * [2] T. L. Black, D. Deaven, and G. DiMego. The step-mountain eta coordinate model: 80 km early version and objective verifications. _NWS/NOAA Tech. Procedures Bull., 1993. 412: p. 31._, 412:31, 1993. * [3] F. Chen and J. Dudhia. Coupling an advanced land surface-hydrology model with the penn state-ncar mm5 modeling system. part i: Model implementation and sensitivity. _Monthly Weather Review_, 129(4):569-585, 2001. * [4] F. Chen and J. Dudhia. Coupling an advanced land surface-hydrology model with the penn state-ncar mm5 modeling system. part ii: Preliminary model validation. _Monthly Weather Review_, 129(4):587-604, 2001. * [5] P. Courtier. Variational methods. _J. Meteor. Soc. Japan_, 75, 1997. * [6] P. Cowpertwait. Further developments of the neyman-scott clustered point process for modeling rainfall. _Water Resource Research_, 27(7), 1991. * [7] A. Orlandi et al. 
Rainfall assimilation in rams by means of the Kuo parameterisation inversion: Method and preliminary results. _Journal of Hydrology_, 288(1-2):20-35, 2004. * [8] C. Onof et al. Rainfall modelling using poisson-cluster processes: A review of developments. _Stochastic Environmental Research and Risk Assessment_, 2000. * [9] C. S. Velden et al. Upper-tropospheric winds derived from geostationary satellite water vapor observations. _Bulletin of the American Meteorological Society_, 78(2):173-195, 1997. * [10] C. Velden et al. Recent innovations in deriving tropospheric winds from meteorological satellites. _Bulletin of the American Meteorological Society_, 86(2):205-223, 2005. * [11] H. Moradkhani et al. Dual state-parameter estimation of hydrological models using ensemble kalman filter. _Advances in Water Resources_, 28(2):135-147, 2005. * [12] R. A. Pielke et al. A comprehensive meteorological modeling system rams. _Meteorology and Atmospheric Physics_, 49(1-4):69-91, 1992. * [13] R. Rogers et al. Changes to the operational \"early\" eta analysis forecast system at the national centers for environmental prediction. _Weather and Forecasting_, 11(3):391-413, 1996. * [14] G. Evensen. The ensemble kalman filter: Theoretical formulation and practical implementation. _Ocean Dynamics_, 53:342-367, 2003. * [15] A. Gelb. _Applied Optimal Estimation_. MIT Press, 1974. * [16] G. Grell, J. Dudhia, and D.R. Stauffer. A description of the fifth generation penn state/ncar mesoscale model (mm5). Technical Report TN-398+IA, NCAR, 1993. * [17] D. J. Heeger. Optical flow from spatiotemporal filters. _International Journal of Computer Vision_, pages 279-302, 1988. * [18] M. N. Khaliq and C. Cunnane. Modelling point rainfall occurrences with the modified brattlett-lewis rectangular pulses model. _Journal of Hydrology_, 180(1):109-138, 1996. * [19] D. T. Lawton. Processing translational motion sequences. _Computer Vision, Graphics and Image Processing_, 22:116-144, 1983. * [20] A. C. Lorenc. Analysis method for numerical weather predictin. _Q. J. R. Meteorol. Soc._, 112:1177-1194, 1986. * [21] H.-H Nagel. Displacement vectors derived from second order intensity variations in image sequences. _Computer Vision, Graphics and Image Processing_, 21:85-117, 1983. * [22] T. M. Over and V. K. Gupta. A space-time theory of mesoscale rainfall using random cascades. _Journal of Geophysical Research_, 101(D21):319-332, 1996. * [23] S. Ravela. Amplitude-position formulation of data assimilation. In _ICCS 2006, Lecture Notes in Computer Science_, number 3993 in Part III, pages 497-505, 2006. * to appear_, 2006. * [25] I. Rodriguez-Iturbe, D.R. Cox, and V. Isham. A point process model for rainfall: Further developments. _Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences_, 417(1853):283-298, 1988. * [26] A.N. Tikhonov and V. Y. Arsenin. _Solutions of Ill-Posed Problems_. Wiley, New York, 1977. * [27] G. Wabha and J. Wendelberger. Some new mathematical methods for variational objective analysis using splines and cross-validation. _Monthly Weather Review_, 108, 1980. Figure 5: Deriving velocimetry information from satellite observations, Nexrad (top), GOES (bottom). See text for more information.
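To make the procedure of Section 3 concrete, a minimal numerical sketch of the alignment iteration of Eq. (10) is given below. It is illustrative only: it assumes identity \(H\) and \(R\), drops the divergence penalty (\(w_{2}=0\)), assumes periodic boundaries so that each update reduces to a spectral Poisson solve, and uses bilinear rather than bicubic warping; all function and variable names are ours, not the operational implementation.

```python
# Schematic sketch of the field-alignment iteration (Eq. 10) with H = R = I and w2 = 0.
import numpy as np
from scipy.ndimage import map_coordinates

def poisson_solve(g):
    """Spectral solution of  laplacian(u) = g  with periodic boundaries."""
    ny, nx = g.shape
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    k2[0, 0] = 1.0                      # avoid 0/0; the mean mode is zeroed below
    u_hat = np.fft.fft2(g) / (-k2)
    u_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(u_hat))

def align(X, Y, w1=1.0, n_iter=20):
    """Estimate a dense displacement q such that X(r - q) approximately matches Y."""
    ny, nx = X.shape
    I, J = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    qy = np.zeros((ny, nx))
    qx = np.zeros((ny, nx))
    for _ in range(n_iter):
        # warp a copy of the original field by the accumulated displacement: X(r - q)
        Xw = map_coordinates(X.astype(float), [I - qy, J - qx], order=1, mode="wrap")
        gy, gx = np.gradient(Xw)
        resid = Xw - Y
        # Eq. (10) with w2 = 0:  w1 * laplacian(dq) = -grad(X)|_p * (X(p) - Y)
        qy += poisson_solve(-gy * resid / w1)
        qx += poisson_solve(-gx * resid / w1)
    return qy, qx
```

A coarse-to-fine version along the lines of Section 3.1 would simply call this routine on decimated copies of \(X\) and \(Y\) and rescale the resulting displacement before refining it at the next finer level.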
An algorithm to estimate motion from satellite imagery is presented. Dense displacement fields are computed from time-separated images of significant convective activity using a Bayesian formulation of the motion estimation problem. Ordinarily this motion estimation problem is ill-posed; there are far more degrees of freedom than necessary to represent the motion. Therefore, some form of regularization becomes necessary, and by imposing smoothness and non-divergence as desirable properties of the estimated displacement vector field, excellent solutions are obtained. Our approach provides a marked improvement over other methods in conventional use. In contrast to correlation-based approaches, the displacement fields produced by our method are dense, spatial consistency of the displacement vector field is implicit, and higher-order and small-scale deformations can be easily handled. In contrast with optic-flow algorithms, we can produce solutions at large separations of mesoscale features between large time-steps or where the deformation is rapidly evolving.
# Quintessence models with an oscillating equation of state and their potentials Wen Zhao Department of Physics, Zhejiang University of Technology, Hangzhou, 310014, People's Republic of China ###### Introduction Recent observations on the Type Ia Supernova (SNIa)[1], Cosmic Microwave Background Radiation (CMB)[2] and Large Scale Structure (LSS)[3] all suggest that the universe mainly consists of dark energy (73%), dark matter (23%) and baryon matter (4%). How to understand the physics of the dark energy is an important issue, which has the EoS of \\(\\omega<-1/3\\), and leads to the recent accelerating expansion of the universe. Several scenarios have been put forward as a possible explanation of it. A positive cosmological constant is the simplest candidate, however it needs the extreme fine tuning to account for the observations. As the alternative to the cosmological constant, a number of dynamic models have been proposed[4; 5; 6; 7]. Among them, the quintessence is the most natural model[8], in which the dark energy is described by a scalar field \\(\\phi\\) with lagrangian density \\(\\mathrm{L}_{\\phi}=\\frac{1}{2}\\dot{\\phi}^{2}-V(\\phi)\\). These models can naturally give the EoS with \\(-1\\leq\\omega_{\\phi}\\leq 1\\). Usually, people discuss these models with monotonic potential functions i.e. the models with the exponential potentials and invert power law potentials. These models have some interesting characters, such as some models have late time attractor solutions with \\(\\omega_{\\phi}<0\\)[9], and some have the track solutions, which can naturally answer the cosmic \"coincidence problem\"[10]. Recently, a number of authors have considered the dark energy with oscillating EoS in the quintessence models[11], in the quintom models[12], in the ideal-liquid models and in the scalar-tensor dark energy models[13]. They discussed that this kind of dark energy may give a naturally answer for the \"coincidence problem\" and \"fine-tuning problem\". And in some models, it is a naturally way to relate the very early inflation and recent accelerating expansion. The most interesting is that these models are likely to be marginally suggested by some observations[14]. In this paper, we will mainly discuss the quintessence models with the oscillating EoS. First, we construct the potentials from the parametrization \\(\\omega_{\\phi}=\\omega_{0}+\\omega_{1}\\sin z\\). We find these potentials are all the oscillating functions, and the oscillating amplitudes are increasing (decreasing) with the field \\(\\phi\\). This character can be analyzed from the evolutive equation of \\(\\phi\\). This suggests the way to build the potential functions which can follow the oscillating EoS. Then we discuss three kinds of potentials, which are the combinations of the invert power law functions and the oscillating functions, and find that they indeed give the oscillating EoS. The plan of this paper is as follows: in Section 2, using the parametrized EoS \\(\\omega_{\\phi}=\\omega_{0}+\\omega_{1}\\sin z\\), we build their potentials, and investigate their general characters by discussing the kinetic equation of the quintessence field; then we build three kinds of models, and discuss the evolutions of their potentials, EoS and energy densities in Section 3; at last we will give a conclusion in Section 4. We use the units \\(\\hbar=c=1\\) and adopt the metric convention as \\((+,-,-,-)\\) throughout this paper. 
## II Construction of the potentials First, we will study the general characters of the potentials, which can follow the oscillating EoS. We note that many periodic or nonmonotonic potentials have been put forward for dark energy, but rarely give rise to the periodic \\(\\omega_{\\phi}(z)\\). As one well-studied example, the potential for a pseudo-Nambu Goldstone boson (PNGB) field[15] can be written as \\(V(\\phi)=V_{0}[\\cos(\\phi/f)+1]\\), clearly periodic, where \\(f\\) is a (axion) symmetry energy scale. However, unless the field has already rolls through the minimum, the relation \\(\\omega_{\\phi}(z)\\) is monotonic and indeed can well described by the usual \\(\\omega_{\\phi}(a)=\\omega_{0}+\\omega_{1}(1-a)\\). Then what kind of potentials can naturally give the oscillating EoS? In the Ref.[11], the authors built an example quintessence model, which has the potential \\(V(\\phi)=V_{0}\\exp(-\\lambda\\phi\\sqrt{8\\pi G})[1+A\\sin(\ u\\phi\\sqrt{8\\pi G})]\\), where \\(\\lambda\\), \\(A\\) and \\(\ u\\) are all the constant numbers. They found that this model can indeed give an oscillating EoS, if choosing appropriate parameters. In this part, we will study the general characters of these models by constructing potential functions from the parametrized oscillating EoS. This method has been advised by Guo et al. in Ref.[16]. First, we will give a simple review of this method. The lagrangian density of the quintessence is \\[\\text{L}_{\\phi}=\\frac{1}{2}\\dot{\\phi}^{2}-V(\\phi). \\tag{1}\\] and the pressure, energy density and EoS are \\[p_{\\phi}=\\frac{1}{2}\\dot{\\phi}^{2}-V(\\phi),\\ \\ \\ \\ \\ \\rho_{\\phi}=\\frac{1}{2} \\dot{\\phi}^{2}+V(\\phi), \\tag{2}\\] \\[\\omega_{\\phi}\\equiv\\frac{p_{\\phi}}{\\rho_{\\phi}}=\\frac{\\dot{\\phi}^{2}-2V(\\phi) }{\\dot{\\phi}^{2}+2V(\\phi)} \\tag{3}\\] respectively. When the energy transformation is from kinetic energy to potential energy, the value of \\(\\omega_{\\phi}\\) is damping, and on the contrary, when the energy transformation is from potential energy to kinetic energy, the value of \\(\\omega_{\\phi}\\) is raising. So the evolution of \\(\\omega_{\\phi}\\) reflects the energy transformation relation of the quintessence field. This suggests the fact that it is impossible to get an oscillating EoS from the monotonic potentials, where the quintessence fields trend to run to the minimum of their potentials. Consider the Flat-Robertson-Walker (FRW) universe, which is dominated by the non-relativistic matter and a spatially homogeneous quintessence field \\(\\phi\\). From the expression of the pressure and energy density of the quintessence field, we have \\[V(\\phi)=\\frac{1}{2}(1-\\omega_{\\phi})\\rho_{\\phi}, \\tag{4}\\] \\[\\frac{1}{2}\\dot{\\phi}^{2}=\\frac{1}{2}(1+\\omega_{\\phi})\\rho_{\\phi}. \\tag{5}\\] These two equations relate the potential \\(V\\) and field \\(\\phi\\) to the only function \\(\\rho_{\\phi}\\). So the main task below is to build the function form \\(\\rho_{\\phi}(z)\\) from the parametrized EoS. This can be realized by the energy conservation equation of the quintessence field \\[\\dot{\\rho_{\\phi}}+3H(\\rho_{\\phi}+p_{\\phi})=0, \\tag{6}\\] where \\(H\\) is the Hubble parameter, which yields \\[\\rho_{\\phi}(z)=\\rho_{\\phi 0}\\exp\\left[3\\int_{0}^{z}(1+\\omega_{\\phi})d\\ln(1+z) \\right]\\equiv\\rho_{\\phi 0}E(z), \\tag{7}\\] where \\(z\\) is the redshift which is given by \\(1+z=a_{0}/a\\) and subscript \\(0\\) denotes the value of a quantity at the redshift \\(z=0\\) (present). 
In term of \\(\\omega_{\\phi}(z)\\), the potential can be written as a function of the redshift \\(z\\): \\[V[\\phi(z)]=\\frac{1}{2}(1-\\omega_{\\phi})\\rho_{\\phi 0}E(z). \\tag{8}\\] With the help of the Friedmann equation \\[H^{2}=\\frac{\\kappa^{2}}{3}(\\rho_{m}+\\rho_{\\phi}), \\tag{9}\\] where \\(\\kappa^{2}=8\\pi G\\) and \\(\\rho_{m}\\) is the matter density, one can get \\[\\tilde{V}[\\phi]=\\frac{1}{2}(1-\\omega_{\\phi})E(z), \\tag{10}\\] \\[\\frac{d\\tilde{\\phi}}{dz}=\\mp\\sqrt{3}\\frac{1}{(1+z)}\\left[\\frac{(1+\\omega)E(z) }{r_{0}(1+z)^{3}+E(z)}\\right]^{1/2}, \\tag{11}\\] where we have defined the dimensionless quantities \\(\\tilde{\\phi}\\) and \\(\\tilde{V}\\) as \\[\\tilde{\\phi}\\equiv\\kappa\\phi,~{}~{}~{}~{}~{}\\tilde{V}\\equiv V/\\rho_{\\phi 0}, \\tag{12}\\] and \\(r_{0}\\equiv\\Omega_{m0}/\\Omega_{\\phi 0}\\) is the energy density ratio of matter to quintessence at present time. The upper (lower) sign in Eq.(11) applies if \\(\\dot{\\phi}>0(\\dot{\\phi}<0)\\). These two equations relate the quintessence potential \\(V(\\phi)\\) to the equation of state function \\(\\omega_{\\phi}(z)\\). Given an effective equation of state function \\(\\omega_{\\phi}(z)\\), the construction Eqs.(10) and (11) will allow us to construct the quintessence potential \\(V(\\phi)\\). Here we consider a most general oscillating EoS as \\[\\omega_{\\phi}=\\omega_{0}+\\omega_{1}\\sin z, \\tag{13}\\] where \\(|\\omega_{0}|+|\\omega_{1}|\\leq 1\\) must be satisfied for quintessence field. We choose the cosmological parameters as \\(\\Omega_{\\phi 0}=0.7\\), and \\(\\Omega_{m0}=0.3\\). For the initial condition, we choose two different sets of parameters: case 1 with \\(\\omega_{0}=-0.7\\), \\(\\omega_{1}=0.2\\) and \\(\\tilde{\\phi}_{0}=1.0\\); case 2 with \\(\\omega_{0}=-0.4\\), \\(\\omega_{1}=0.5\\) and \\(\\tilde{\\phi}_{0}=1.0\\). We plot them in Fig.[1]. But how to fix the \\(``\\mp\"\\) sign in Eq.(11)? We choose the initial condition with \\(d\\tilde{\\phi}_{0}/dz<0\\), assuming the variety of this sign from \\(``-\"\\) to \\(``+\"\\) exists, then on the transformation point, (for the continue evolution of the field \\(\\phi\\)) we have \\(\\dot{\\phi}=d\\tilde{\\phi}/dz=0\\), which follows that \\(\\omega_{\\phi}=-1\\) at this condition. Since \\(\\omega_{\\phi}>-1\\) is always satisfied in these two models we consider, there is no transformation of the sign in Eq.(11). So the negative sign is held for all time. In Fig.[2], we have plotted the evolution of the potentials of the quintessence models with redshift, and in Fig.[3], we have plotted the constructed potentials. From these figures, one finds that although the potential functions are oscillating, but their amplitudes are altering with field. The field always runs from the potential with higher amplitudes to which with lower ones. Now let's analyze the reason of the these strange potential forms. The evolutive equation of the quintessence field is \\[\\ddot{\\phi}+3H\\dot{\\phi}+V_{,\\phi}=0, \\tag{14}\\] where \\(V_{,\\phi}\\) denotes \\(dV/d\\phi\\). This equation can be rewritten as \\[\\ddot{\\phi}+V_{,\\phi}=-3H\\dot{\\phi}. \\tag{15}\\] If the right-hand is absent, this is an equation which describes the motion of field \\(\\phi\\) in the potential \\(V(\\phi)\\) in the flat space-time. The right-hand of this equation is the effect of the expansion of the universe. 
Let us now analyze the origin of these unusual potential shapes. The equation of motion of the quintessence field is \\[\\ddot{\\phi}+3H\\dot{\\phi}+V_{,\\phi}=0, \\tag{14}\\] where \\(V_{,\\phi}\\) denotes \\(dV/d\\phi\\). This equation can be rewritten as \\[\\ddot{\\phi}+V_{,\\phi}=-3H\\dot{\\phi}. \\tag{15}\\] If the right-hand side were absent, this would be the equation describing the motion of a field \\(\\phi\\) in the potential \\(V(\\phi)\\) in flat space-time; the right-hand side encodes the effect of the expansion of the universe. To show this effect clearly, consider the simplest case in which \\(V(\\phi)\\) is a constant. Without the right-hand side we would have \\(\\ddot{\\phi}=0\\), so \\(\\dot{\\phi}\\) stays constant and the field moves freely. Including the right-hand side, the solution becomes \\(|\\dot{\\phi}|\\propto e^{\\int-3Hdt}\\), so the velocity of the field decreases rapidly with time. The effect of the cosmic expansion is therefore a kind of friction acting on the field, with a force directly proportional to the field velocity \\(\\dot{\\phi}\\). To overcome this friction and keep the kinetic energy from vanishing, the field must roll from regions of higher potential amplitude to regions of lower amplitude. This is why the potentials take the unusual forms shown in Figs.[2] and [3]. Since the field always runs toward relatively smaller values of its potential, while the potential cannot become negative, it is very difficult to build a potential whose EoS oscillates forever without extreme fine-tuning.

## III Three quintessence models

In the previous section we identified the general characteristics of potentials that can follow an oscillating EoS. According to these characteristics, the potential proposed in Ref.[11] indeed satisfies the condition. However, the authors of that reference found that a mild fine-tuning is required in their model to satisfy the constraint from BBN observations, and the model also noticeably alters the CMB anisotropy power spectrum compared with the standard \\(\\Lambda\\)CDM model. Both problems arise because in that model the EoS already oscillates during the radiation-dominated stage. Here we build three other kinds of potential functions which can also generate an oscillating EoS. First we simplify the evolution equations of the quintessence field by introducing the dimensionless variables \\[x\\equiv\\frac{\\kappa\\dot{\\phi}}{\\sqrt{6}H},\\ \\ y\\equiv\\frac{\\kappa\\sqrt{V}}{\\sqrt{3}H},\\ \\ z\\equiv\\frac{\\kappa\\sqrt{\\rho_{m}}}{\\sqrt{3}H},\\ \\ u\\equiv\\frac{\\sqrt{6}}{\\kappa\\phi}, \\tag{16}\\] in terms of which the evolution equations of the matter and quintessence can be rewritten as[9] \\[x^{\\prime} = 3x(x^{2}+z^{2}/2-1)-f(y,u); \\tag{17}\\] \\[y^{\\prime} = 3y(x^{2}+z^{2}/2)+f(y,u)x/y;\\] (18) \\[z^{\\prime} = 3z(x^{2}+z^{2}/2-1/2);\\] (19) \\[u^{\\prime} = -xu^{2}, \\tag{20}\\] where a prime denotes the derivative with respect to the e-folding time \\(N\\equiv\\ln a\\), and the function \\(f(y,u)=\\frac{\\kappa V_{,\\phi}}{\\sqrt{6}H^{2}}\\) takes a different form for each potential. In this section we mainly discuss three simple models, whose potentials are similar to those in Fig.[3]:

Model 1: \\(V(\\phi)=V_{0}(\\kappa\\phi)^{-2}[\\cos(\\phi/\\phi_{c})+2]\\) with \\(\\kappa\\phi_{c}=0.1\\) and \\[f(y,u)=-uy^{2}-5\\sqrt{6}y^{2}\\sin(10\\sqrt{6}/u)/[\\cos(10\\sqrt{6}/u)+2]; \\tag{21}\\]

Model 2: \\(V(\\phi)=V_{0}(\\kappa\\phi)^{-1}[\\cos(\\phi/\\phi_{c})+2]\\) with \\(\\kappa\\phi_{c}=0.1\\) and \\[f(y,u)=-uy^{2}/2-5\\sqrt{6}y^{2}\\sin(10\\sqrt{6}/u)/[\\cos(10\\sqrt{6}/u)+2]; \\tag{22}\\]

Model 3: \\(V(\\phi)=V_{0}[(\\kappa\\phi)^{-1}+\\cos(\\phi/\\phi_{c})+1]\\) with \\(\\kappa\\phi_{c}=0.1\\) and \\[f(y,u)=-3y^{2}[u^{2}/6+10\\sin(10\\sqrt{6}/u)]/[u+\\sqrt{6}+\\sqrt{6}\\cos(10\\sqrt{6}/u)]. \\tag{23}\\]

These models are shown in Fig.[4]; they are all combinations of an inverse power-law function and a PNGB-type potential.
In all three cases \\(V(\\phi)>0\\) is satisfied at all times. When \\(\\phi/\\phi_{c}\\ll 1\\) they behave like an inverse power-law potential with \\(n=-1\\) (or \\(-2\\)), and they begin to oscillate when \\(\\phi>\\phi_{c}\\). For the first two potentials the oscillation amplitude decreases everywhere, while for the last potential the oscillation amplitude is nearly constant at \\(\\phi\\gg\\phi_{c}\\). It is interesting that these potentials can be viewed as the inverse power-law potentials \\(3V_{0}(\\kappa\\phi)^{-1}\\) (\\(3V_{0}(\\kappa\\phi)^{-2}\\), \\(V_{0}[(\\kappa\\phi)^{-1}+2]\\)) with an oscillating correction term at \\(\\phi>\\phi_{c}\\). Here we choose the initial conditions (present values) \\(\\kappa\\phi_{0}=0.6\\), \\(\\omega_{\\phi 0}=-0.9\\), \\(\\Omega_{\\phi 0}=0.7\\) and \\(\\Omega_{m0}=0.3\\). At the early stage the potentials are therefore monotonic functions, so the EoS does not oscillate during the early (radiation-dominated) stage, which naturally overcomes the shortcoming of the model in Ref.[11]. In Figs.[5] and [6] we plot the evolution of the EoS and of the field \\(\\phi\\) in the range \\(\\ln a/a_{0}=[0,4]\\). The solid lines show the model with the first potential, whose EoS has a relatively steady oscillation amplitude. This is because the amplitude of its potential decreases rapidly with \\(\\phi\\): when the field rolls down into a valley it has enough kinetic energy to climb the following hill and then roll down again. In every period of the potential, while the field is rolling down the kinetic energy increases and the potential energy decreases, so the EoS increases; conversely, while the field is climbing up the kinetic energy decreases and the potential energy increases, so the EoS decreases. The minimum values of the EoS never reach \\(-1\\), because the kinetic energy of the field never vanishes. This process continues until \\(\\ln a/a_{0}\\simeq 1.7\\) (\\(\\kappa\\phi\\simeq 2.2\\)), when the field reaches the state with \\(\\dot{\\phi}=0\\) (\\(\\omega_{\\phi}=-1\\)) and must turn around and roll back down into the previous valley (\\(\\dot{\\phi}<0\\)). This can be seen clearly in Fig.[6]. After this point the EoS rapidly approaches a steady state with \\(\\omega_{\\phi}=-1\\). The behavior is different for models 2 and 3, shown by the dashed and dotted lines in these figures. When these fields roll down into the valley at \\(\\kappa\\phi\\simeq 1\\), they try to climb their first hills but cannot reach the peaks, because of the large values of their potentials there. When the fields reach the state with \\(\\dot{\\phi}=0\\) (the corresponding EoS is \\(\\omega_{\\phi}=-1\\)), they must roll back down into the same valley again. This process repeats until the kinetic energy becomes negligible and the fields settle in the valley with \\(\\omega_{\\phi}=-1\\). The evolution of these fields can be seen clearly in Fig.[6]. In Fig.[7] we plot the evolution of \\(\\Omega_{\\phi}\\): although the quintessence eventually dominates the universe, the value of \\(\\Omega_{\\phi}\\) oscillates during the evolution in all three models, following the evolution of \\(\\omega_{\\phi}\\). When \\(\\omega_{\\phi}>0\\) the value of \\(\\Omega_{\\phi}\\) decreases, and when \\(\\omega_{\\phi}<0\\) it increases.
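The evolution shown in Figs.[5]-[7] can be reproduced by integrating the autonomous system (17)-(20) forward in the e-folding time \\(N=\\ln a/a_{0}\\) from the present-day values quoted above. The sketch below does this for Model 1, with \\(f(y,u)\\) taken from Eq. (21); the assumption \\(x_{0}>0\\) (i.e. \\(\\dot{\\phi}>0\\) today), the step size and the hand-rolled Runge-Kutta integrator are our own choices. Models 2 and 3 differ only by their \\(f(y,u)\\) of Eqs. (22) and (23).

```python
import numpy as np

SQ6 = np.sqrt(6.0)

def f_model1(y, u):
    # Eq. (21): f(y,u) for Model 1, V = V0 (kappa*phi)^-2 [cos(phi/phi_c)+2]
    arg = 10.0 * SQ6 / u
    return -u*y**2 - 5.0*SQ6*y**2*np.sin(arg)/(np.cos(arg) + 2.0)

def rhs(state):
    x, y, zm, u = state          # zm is the matter variable z of Eq. (16), not redshift
    f = f_model1(y, u)
    dx = 3.0*x*(x**2 + zm**2/2.0 - 1.0) - f          # Eq. (17)
    dy = 3.0*y*(x**2 + zm**2/2.0) + f*x/y            # Eq. (18)
    dz = 3.0*zm*(x**2 + zm**2/2.0 - 0.5)             # Eq. (19)
    du = -x*u**2                                     # Eq. (20)
    return np.array([dx, dy, dz, du])

def evolve(N_end=4.0, dN=2e-4, w0=-0.9, Ophi0=0.7, Om0=0.3, kphi0=0.6):
    # present-day initial conditions; x0 > 0 (phidot > 0 today) is our assumption
    state = np.array([np.sqrt(0.5*Ophi0*(1.0 + w0)),
                      np.sqrt(0.5*Ophi0*(1.0 - w0)),
                      np.sqrt(Om0),
                      SQ6 / kphi0])
    N_vals, w_vals = [0.0], [w0]
    N = 0.0
    while N < N_end:
        # one 4th-order Runge-Kutta step in N = ln a
        k1 = rhs(state); k2 = rhs(state + 0.5*dN*k1)
        k3 = rhs(state + 0.5*dN*k2); k4 = rhs(state + dN*k3)
        state = state + dN*(k1 + 2*k2 + 2*k3 + k4)/6.0
        N += dN
        x, y = state[0], state[1]
        N_vals.append(N); w_vals.append((x**2 - y**2)/(x**2 + y**2))
    return np.array(N_vals), np.array(w_vals)

N, w = evolve()   # w(N) traces the EoS evolution of the first model (cf. Fig. 5)
```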
## IV Conclusion

Understanding the physics of dark energy is one of the most important tasks of modern cosmology. At present, the most effective approach is to measure its EoS and its evolution through observations of SNIa, CMB, LSS and so on. There is mild evidence that the EoS of dark energy is an oscillating function, which makes the building of dark energy models difficult. For quintessence dark energy models, such an EoS obviously cannot be realized with a monotonic potential, but it is also difficult to realize with a simple oscillating potential. In this paper we have discussed the general features of potentials that can follow an oscillating EoS by constructing the potentials from the oscillating EoS, and found that they are oscillating functions whose amplitudes increase (or decrease) with the field \\(\\phi\\); moreover, the field must roll from regions of larger amplitude toward regions of smaller amplitude if the EoS is to oscillate. Potentials of this kind are not very difficult to construct. However, because the field must keep rolling toward regions of smaller amplitude while the constraint \\(V(\\phi)\\geq 0\\) must be satisfied at all times, building a quintessence model whose EoS oscillates forever is very difficult. In this paper we have studied three models: \\(V(\\phi)=V_{0}(\\kappa\\phi)^{-2}[\\cos(\\phi/\\phi_{c})+2]\\), \\(V(\\phi)=V_{0}(\\kappa\\phi)^{-1}[\\cos(\\phi/\\phi_{c})+2]\\) and \\(V(\\phi)=V_{0}[(\\kappa\\phi)^{-1}+\\cos(\\phi/\\phi_{c})+1]\\). They are all combinations of inverse power-law functions and oscillating functions, and they can indeed follow an oscillating EoS, although the oscillating behavior lasts only for a finite period in all three models.

## References

* (1) Riess A.G. _et al._, Astron.J. **116** (1998) 1009; Perlmutter S. _et al._, Astrophys.J. **517** (1999) 565; Tonry J.L. _et al._, Astrophys.J. **594** (2003) 1; Knop R.A. _et al._, Astrophys.J. **598** (2003) 102; * (2) Bennett C.L. _et al._, Astrophys.J.Suppl. **148** (2003) 1; Spergel D.N. _et al._, Astrophys.J.Suppl. **148** (2003) 175; Peiris H.V. _et al._, Astrophys.J.Suppl. **148** (2003) 213; Spergel D.N. _et al._, astro-ph/0603449; * (3) Tegmark M. _et al._, Astrophys.J. **606** (2004) 702, Phys.Rev.D **69** (2004) 103501; Pope A.C. _et al._, Astrophys.J. **607** (2004) 655; Percival W.J. _et al._, MNRAS **327** (2001) 1297; * (4) Caldwell R.R., Phys.Lett.B **545** (2002) 23; Carroll S.M., Hoffman M. and Trodden M., Phys.Rev.D **68** (2003) 023509; Caldwell R.R., Kamionkowski M. and Weinberg N.N., Phys.Rev.Lett. **91** (2003) 071301; * (5) Armendariz C., Damour T. and Mukhanov V., Phys.Lett.B **458** (1999) 209; Chiba T., Okabe T. and Yamaguchi M., Phys.Rev.D **62** (2000) 023511; Armendariz C., Mukhanov V. and Steinhardt P.J., Phys.Rev.D **63** (2001) 103510; * (6) Feng B., Wang X.L. and Zhang X.M., Phys.Lett.B **607** (2005) 35; Guo Z.K., Piao Y.S., Zhang X.M. and Zhang Y.Z., Phys.Lett.B **608** (2005) 177; Hao W. and Cai R.G., Phys.Rev.D **72** (2005) 123507; Hu W., Phys.Rev.D **71** (2005) 047301; * (7) Zhang Y., Chin.Phys.Lett. **20** (2003) 1899; Zhao W. and Zhang Y., Class.Quant.Grav. **23** (2006) 3405; Phys.Lett.B **640** (2006) 69; astro-ph/0508010; Zhang Y., Xia T.Y. and Zhao W., gr-qc/0609115; * (8) Wetterich C., Nucl.Phys.B **302** (1988) 668; Astron.Astrophys. **301** (1995) 321; Ratra B.
and Peebles P.J., Phys.Rev.D **37** (1988) 3406; Caldwell R.R., Dave R. and Steinhardt P.J., Phys.Rev.Lett. **80** (1998) 1582; Zhai X.H. and Zhao Y.B., Chin.Phys. **15** (2006) 1009; * (9) Copeland E.J., Liddle A.R. and Wands D., Phys.Rev.D **57** (1998) 4686; Amendola L., Phys.Rev.D **60** (1999) 043501; Amendola L., Phys.Rev.D **62** (2000) 043511; * (10) Zlatev I., Wang L. and Steinhardt P.J., Phys.Rev.Lett. **82** (1999) 896; Steinhardt P.J., Wang L. and Zlatev I., Phys.Rev.D **59** (1999) 123504; * (11) Dodelson S., Kaplinghat M. and Stewart E., Phys.Rev.Lett. **85** (2000) 5276; * (12) Feng B., Li M.Z. and Zhang X.M., Phys.Lett.B **634** (2006) 101; Xia J.Q., Feng B. and Zhang X.M., Mod.Phys.Lett.A **20** (2005) 2409; Barenboim G., Mena O. and Quigg C., Phys.Rev.D **71** (2005) 063533; Barenboim G. and Lykken J., Phys.Lett.B **633** (2006) 453; Linder E.V., Astropart.Phys. **25** (2006) 167; Zhao W. and Zhang Y., Phys.Rev.D **73** (2006) 123509; * (13) Nojiri S. and Odintsov S.D., Phys.Lett.B **637** (2006) 139; * (14) Huterer D. and Cooray A., Phys.Rev.D **71** (2005) 023506; Lazkoz R., Nesseris S. and Perivolaropoulos L., JCAP **0511** (2005) 010; Xia J.Q., Zhao G.B., Li H., Feng B. and Zhang X.M., Phys.Rev.D **74** (2006) 083521; * (15) Freese K., Frieman J.A. and Olinto A.V., Phys.Rev.Lett. **65** (1990) 3233; Adams F.C., Bond J.R., Freese K., Frieman J.A. and Olinto A.V., Phys.Rev.D **47** (1993) 426; Frieman J., Hill C., Stebbins A. and Waga I., Phys.Rev.Lett. **75** (1995) 2077; Copeland E.J., Sami M. and Tsujikawa S., hep-th/0603057; * (16) Guo Z.K., Ohta N. and Zhang Y.Z., Phys.Rev.D **72** (2005) 023504; Li Hui, Guo Z.K. and Zhang Y.Z., Mod.Phys.Lett.A **21** (2006) 1683; Cao H.M., Xin X.B., Wang L. and Zhao W., Journal of Science and Technology of China, accepted;

Figure 2: The evolution of the potentials of the quintessence models with redshift \\(z\\).

Figure 3: Constructed potential functions.

Figure 4: Three kinds of quintessence models.

Figure 5: The evolution of the EoS of the quintessence models.

Figure 6: The evolution of field \\(\\phi\\) of the quintessence models.

Figure 7: The evolution of the energy density \\(\\Omega_{\\phi}\\) of the quintessence models.
In this paper we investigate quintessence models with an oscillating equation of state (EoS) and their potentials. From the potentials constructed for the EoS \\(\\omega_{\\phi}=\\omega_{0}+\\omega_{1}\\sin z\\), we find that they are all oscillating functions of the field \\(\\phi\\) whose amplitudes decrease (or increase) with \\(\\phi\\). From the evolution equation of the field \\(\\phi\\), we find that this behavior is caused by the expansion of the universe, which also makes it very difficult to build a model whose EoS oscillates forever; one can, however, build a model whose EoS oscillates for a finite period. We then discuss three quintessence models, which are combinations of inverse power-law functions and oscillating functions of the field \\(\\phi\\), and find that they all follow an oscillating EoS.
Summarize the following text.
arxiv-format/0605010v3.md
# Dark viscous fluid described by a unified equation of state in cosmology Jie Ren\\({}^{1}\\) [email protected] Xin-He Meng\\({}^{2,3}\\) [email protected] Theoretical Physics Division, Chern Institute of Mathematics, Nankai University, Tianjin 300071, China \\({}^{2}\\)Department of physics, Nankai University, Tianjin 300071, China \\({}^{3}\\)Department of physics, Hanyang University, Seoul 133-791, Korea November 3, 2021 ###### pacs: 98.80.-k,95.36.+x,95.35.+d _Introduction._ The cosmological observations have provided increasingly convincing evidence that our Universe is undergoing a late-time accelerating expansion [1; 2; 3; 4], and we live in a favored spatially flat Universe composed of approximately 4% baryonic matter, 22% dark matter and 74% dark energy. The simplest candidate for dark energy is the cosmological constant. Recently, a great number of ideas have been proposed to explain the current accelerating Universe, partly such as scalar field model, exotic equation of state (EOS), modified gravity, and the inhomogeneous cosmology model. However, the available data sets in cosmology, especially the SNe Ia data [5; 6; 7], the SDSS data [8], and the three year WMAP data [4] all indicate that the \\(\\Lambda\\)CDM model, which serves as a standard model in cosmology, is an excellent model to describe the cosmological evolution. Therefore, we suggest that a new cosmological model should be based on or can be reduced to the \\(\\Lambda\\)CDM model naturally. Time-dependent bulk viscosity [9], a linear EOS [10; 11], and the Hubble parameter dependent EOS [12] are considered in the study of the dark energy physics. The EOS approach is intensely studied in cosmology, partly such as Refs. [13; 14; 15; 16; 17; 18; 19; 20; 21]. The equivalence between the modified EOS, the scalar field model, and the modified gravity is demonstrated in Refs. [22; 23; 24], with a general method to calculate the potential of the corresponding scalar field for a given EOS. We attempt to investigate the properties of cosmological models starting from the EOS of the Universe contents directly, which is suggested by the authors in Ref. [25]. Our goal is to find more physical meanings in the right hand side of the Einstein equation to explore the currently accelerating universe. We find that a generalized EOS unifies several issues in cosmology. This EOS can be regarded as the generalization of the constant EOS \\(p=-p_{0}\\), which can reproduce the \\(\\Lambda\\)CDM model exactly. In this paper, we build up a general model called the extended \\(\\Lambda\\)CDM model, which can be exactly solved to describe the cosmological evolutions by introducing a unified EOS. This paper is a complement of our previous work [26; 27], in which the physical meanings of this EOS are very limited. We also develop a completely numerical method to perform a \\(\\chi^{2}\\) minimization to constrain the parameters of a cosmological model directly from the Friedmann equations. We consider the Friedmann-Robertson-Walker metric in the flat space geometry (\\(k\\)=0) as the case favored by observational data \\[ds^{2}=-dt^{2}+a(t)^{2}(dr^{2}+r^{2}d\\Omega^{2}), \\tag{1}\\] and assume that the cosmic fluid possesses a bulk viscosity \\(\\zeta\\). The energy-momentum tensor is \\[T_{\\mu\ u}=\\rho U_{\\mu}U_{\ u}+(p-\\zeta\\theta)h_{\\mu\ u}, \\tag{2}\\] where in comoving coordinates \\(U^{\\mu}=(1,0,0,0)\\), \\(h_{\\mu\ u}=g_{\\mu\ u}+U_{\\mu}U_{\ u}\\), and \\(\\theta=3\\dot{a}/a\\)[28]. 
By defining the effective pressure as \\(\\tilde{p}=p-\\zeta\\theta\\) and using the Einstein equation \\(R_{\\mu\\nu}-\\frac{1}{2}g_{\\mu\\nu}R=\\kappa^{2}T_{\\mu\\nu}\\), where \\(\\kappa^{2}=8\\pi G\\), we obtain the Friedmann equations \\[\\frac{\\dot{a}^{2}}{a^{2}}=\\frac{\\kappa^{2}}{3}\\rho,\\ \\ \\frac{\\ddot{a}}{a}=-\\frac{\\kappa^{2}}{6}(\\rho+3\\tilde{p}). \\tag{3}\\] The conservation equation for energy, \\(T_{;\\nu}^{0\\nu}=0\\), yields \\[\\dot{\\rho}+3H(\\rho+\\tilde{p})=0, \\tag{4}\\] where \\(H=\\dot{a}/a\\) is the Hubble parameter. _Physical meaning of each term._ The EOS proposed in our previous work [26] is given by \\[p=(\\tilde{\\gamma}-1)\\rho-\\frac{2}{\\sqrt{3}\\kappa T_{1}}\\sqrt{\\rho}-\\frac{2}{3\\kappa^{2}T_{2}^{2}}. \\tag{5}\\] The first term is the perfect fluid EOS, the second term describes the dissipative effect, and the third term corresponds to the cosmological constant. The dynamical equation of the scale factor \\(a(t)\\) can be written as \\[\\frac{\\ddot{a}}{a}=-\\frac{3\\tilde{\\gamma}-2}{2}\\frac{\\dot{a}^{2}}{a^{2}}+\\frac{1}{T_{1}}\\frac{\\dot{a}}{a}+\\frac{1}{T_{2}^{2}}. \\tag{6}\\] The dimension of both \\(T_{1}\\) and \\(T_{2}\\) is [Time]. With the initial conditions \\(a(t_{0})=a_{0}\\) and \\(\\theta(t_{0})=\\theta_{0}\\), the analytical solution for \\(a(t)\\) is given in Ref. [26]. An intriguing feature of the extended \\(\\Lambda\\)CDM model is that it possesses both physical significance and exact mathematical solutions. From the physical point of view, Eq. (6) naturally contains the dissipative processes in the cosmological evolution. If we set the EOS as \\(p=p_{0}\\) and take the bulk viscosity coefficient \\(\\zeta\\) to be constant, the first term describes the dark matter, the last term (\\(T_{2}\\) term) describes the dark energy, and the middle term (\\(T_{1}\\) term) describes dissipative effects, possibly caused by the interaction between the dark matter and the dark energy. In Refs. [29; 30; 31; 32; 33; 34], viscosity in cosmology has been studied in various respects. A qualitative analysis of Eq. (6) is easily obtained if we assume that \\(H\\) is always decreasing during the evolution of the Universe. The three terms on the right-hand side of Eq. (6) are proportional to \\(H^{2}\\), \\(H^{1}\\), and \\(H^{0}\\), respectively; therefore the three terms dominate in turn during the cosmological evolution, which finally approaches a de Sitter Universe. In fact, each term on the right-hand side of Eq. (6) can account for a time-dependent bulk viscosity or a variable cosmological constant. _Unified description of dark matter and dark energy._ The \\(\\Lambda\\)CDM model is based on the \\(H\\)-\\(z\\) relation \\[H(z)^{2}=H_{0}^{2}[\\Omega_{m}(1+z)^{3}+1-\\Omega_{m}], \\tag{7}\\] where \\(z=a_{0}/a-1\\) is the redshift. We find that for a single constant EOS \\(p=-p_{0}\\) (\\(p_{0}>0\\)), the \\(H\\)-\\(z\\) solution of the Friedmann equations without viscosity is \\[H(z)^{2}=H_{0}^{2}\\left[\\left(1-\\frac{\\kappa^{2}p_{0}}{3H_{0}^{2}}\\right)(1+z)^{3}+\\frac{\\kappa^{2}p_{0}}{3H_{0}^{2}}\\right], \\tag{8}\\] which has exactly the same form as Eq. (7), with \\(\\Omega_{m}=1-\\frac{\\kappa^{2}p_{0}}{3H_{0}^{2}}\\). In the \\(\\Lambda\\)CDM model, the Universe contains two fluids, i.e., the dark matter and dark energy, whose EOS are \\(p=0\\) and \\(p=-\\rho\\), respectively.
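This reduction can be checked directly: integrating the energy conservation equation for a fluid with the single constant EOS \\(p=-p_{0}\\) reproduces the \\(\\Lambda\\)CDM form of Eq. (7) term by term. A minimal numerical sketch, in units of our own choosing (\\(\\kappa^{2}/3=1\\), \\(\\rho(z=0)=H_{0}^{2}=1\\)):

```python
import numpy as np

# Check that a single constant EOS p = -p0 reproduces the LambdaCDM form of Eq. (8).
Om = 0.27                      # illustrative value; then p0 = 1 - Omega_m in these units
p0 = 1.0 - Om

z = np.linspace(0.0, 3.0, 301)
# integrate the conservation equation d(rho)/d ln(1+z) = 3*(rho + p) = 3*(rho - p0)
rho = np.empty_like(z)
rho[0] = 1.0
for i in range(1, z.size):
    h = np.log(1.0 + z[i]) - np.log(1.0 + z[i-1])
    k1 = 3.0*(rho[i-1] - p0)                       # simple midpoint (RK2) step
    k2 = 3.0*(rho[i-1] + 0.5*h*k1 - p0)
    rho[i] = rho[i-1] + h*k2

H2_lcdm = Om*(1.0 + z)**3 + (1.0 - Om)             # Eq. (7) in the same units
print(np.max(np.abs(rho - H2_lcdm)))               # small: agreement to integration accuracy
```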
In our case, a single EOS unifies the dark matter and dark energy, modeled as a dark viscous fluid, which is consistent with the cosmological principle. However, this does not necessarily mean that the nature of the dark matter and the dark energy is the same. The Chaplygin gas model \\(p=-A/\\rho\\)[35] also serves as a unified model of dark matter and dark energy, but it cannot reduce to Eq. (7) exactly. As a special case of Eq. (5), a linear EOS of the dark fluid is studied in Ref. [10], and the dark fluid has also been studied by other approaches, such as Refs. [36; 37]. Our motivation is to find a more general EOS that carries as much physical meaning as possible while keeping the Friedmann equations exactly solvable, unifying the two components of the \\(\\Lambda\\)CDM model, \\(p=0\\) (CDM) and \\(p=-\\rho\\) (\\(\\Lambda\\)), into a single equation of state. We obtain one such EOS, which is just Eq. (5), and it gives rise to the four unifications summarized at the end of this article. Based on this EOS, we establish a cosmological model, called the extended \\(\\Lambda\\)CDM model. _Variable cosmological constant model._ It turns out that the Friedmann equations combined with the renormalization equation which determines the variable cosmological constant [38; 39] can be reduced to the same form as Eq. (6) [27]. _Scalar field model._ The authors of Refs. [22; 24] give a general method to obtain the potential of an equivalent scalar field model. We have found that the potential of the corresponding scalar model is \\[V(\\varphi)=C_{1}e^{\\alpha\\varphi}+C_{2}e^{\\alpha\\varphi/2}+C_{3} \\tag{9}\\] if \\(\\tilde{\\gamma}\\neq 0\\)[27]. However, we missed an important case, \\(\\tilde{\\gamma}=0\\). In this case, the EOS is \\[p=-\\rho-\\frac{2}{\\sqrt{3}\\kappa T_{1}}\\sqrt{\\rho}-\\frac{2}{3\\kappa^{2}T_{2}^{2}}. \\tag{10}\\] Using the same method, which is also outlined in Ref. [27], we obtain the potential of the corresponding scalar field \\[V(\\varphi) = \\frac{3\\kappa^{2}}{64T_{1}^{2}}\\varphi^{4}-\\frac{3\\kappa}{4\\sqrt{2}T_{1}T_{2}}\\varphi^{3}+\\left(\\frac{3}{2T_{2}^{2}}-\\frac{1}{8T_{1}^{2}}\\right)\\varphi^{2}+\\frac{1}{\\sqrt{2}\\kappa T_{1}T_{2}}\\varphi-\\frac{1}{\\kappa^{2}T_{2}^{2}}. \\tag{11}\\] As a special case, if the bulk viscosity vanishes, \\(p=-\\rho-p_{0}\\), and the potential of the corresponding scalar field is \\(V(\\varphi)=\\kappa^{2}p_{0}\\varphi^{2}\\), neglecting the constant term. In general, Eq. (9) is a non-renormalizable potential; however, if the coefficient of \\(\\rho\\) is precisely equal to \\(-1\\), we obtain a renormalizable field. Moreover, the \\(\\sqrt{\\rho}\\) term in the EOS contributes a \\(\\varphi^{4}\\) term to the scalar potential. This property of the scalar field was missed in our previous work. We think there is a profound relation between the renormalizability of the scalar field and the fact that the EOS parameter of the vacuum is precisely equal to \\(-1\\). _Mathematical features._ On the mathematical side, the transformation [27] \\(y=a^{3\\tilde{\\gamma}/2}\\) reduces Eq. (6) to a linear differential equation for \\(y(t)\\), \\[\\ddot{y}-\\frac{1}{T_{1}}\\dot{y}-\\frac{3\\tilde{\\gamma}}{2T_{2}^{2}}y=0, \\tag{12}\\] which can be solved easily. The variable \\(y\\) serves as a rescaled scale factor and behaves like the amplitude of a damped harmonic oscillator, with the \\(T_{1}\\) term playing the role of the damping term.
The equation which determines the evolution of the Hubble parameter, \\(\\dot{H}=-\\frac{3\\tilde{\\gamma}}{2}H^{2}+\\frac{1}{T_{1}}H+\\frac{1}{T_{2}^{2}}\\), possesses a form invariance under \\(H\\to H+\\delta H\\). _Supernovae constraints._ Observations of SNe Ia provide direct evidence for the accelerating expansion of the current Universe, and any model attempting to explain the acceleration mechanism should, as a basic requirement, be consistent with the SNe Ia data. We have found that viscosity without a cosmological constant contributes a \\((1+z)^{3/2}\\) term [27], which can be seen as an interpolation between the matter term \\((1+z)^{3}\\) and the \\(\\Lambda\\)-term \\((1+z)^{0}\\). The method of data fitting is illustrated in Refs. [40; 41], in which the explicit solution \\(H(z)\\) is required. We develop a completely numerical method to perform a \\(\\chi^{2}\\) minimization and obtain the best-fit values of the parameters of a cosmological model directly from the Friedmann equations, without knowing the \\(H\\)-\\(z\\) relation. Define a new function \\[F(z)=\\int_{0}^{z}\\frac{dz}{E(z)}, \\tag{13}\\] where \\(E(z)=H(z)/H_{0}\\) is the dimensionless Hubble parameter. The relations implied by Eq. (13), \\[E(z)=F^{\\prime}(z)^{-1},\\ \\ E^{\\prime}(z)=-F^{\\prime\\prime}(z)F^{\\prime}(z)^{-2}, \\tag{14}\\] can transform an equation for \\(H(z)\\) into one for \\(F(z)\\); one then solves for \\(F(z)\\) numerically and obtains the luminosity distance \\(d_{L}=(c/H_{0})(1+z)F(z)\\). This is a general numerical method and can be applied whenever the dynamical equations determining the scale factor are known. The \\(\\chi^{2}\\) is calculated from \\[\\chi^{2}=\\sum_{i=1}^{n}\\left[\\frac{\\mu_{obs}(z_{i})-\\mathcal{M}^{\\prime}-5\\log_{10}D_{Lth}(z_{i};c_{\\alpha})}{\\sigma_{obs}(z_{i})}\\right]^{2}+\\left(\\frac{\\mathcal{A}-0.469}{0.017}\\right)^{2}, \\tag{15}\\] where \\(\\mathcal{M}^{\\prime}\\) is a free parameter related to the Hubble constant and \\(D_{Lth}(z_{i};c_{\\alpha})\\) is the theoretical prediction for the dimensionless luminosity distance of a SNe Ia at redshift \\(z_{i}\\), for a given model with parameters \\(c_{\\alpha}\\). The parameter \\(\\mathcal{A}\\) is defined in Ref. [8]. Here \\(\\Omega_{m}=1-\\frac{2}{3T_{2}^{2}H_{0}^{2}}\\) is used in our model and we take \\(\\tilde{\\gamma}=1\\) as in the \\(\\Lambda\\)CDM model. We consider the \\(\\Lambda\\)CDM model for comparison and perform a best-fit analysis by minimizing the \\(\\chi^{2}\\) with respect to \\(\\mathcal{M}^{\\prime}\\), \\(T_{1}H_{0}\\), and \\(T_{2}H_{0}\\). We employ the 157 gold data, the SNLS data, and the 182 SNe compiled recently by Riess _et al._, combined with the parameter \\(\\mathcal{A}\\), to constrain the parameters, and we plot the \\(T_{1}\\)-\\(T_{2}\\) relation in Fig. 1 and Fig. 2. From the results we see that the \\(T_{1}\\) term contributes less than 10% of the \\(T_{2}\\) term at the \\(2\\sigma\\) C.L. If we adopt the viscosity interpretation of our model, the fitting result shows that the dissipative effect is rather small, as expected, since the additional term is a small correction to the \\(\\Lambda\\)CDM model.
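The numerical procedure can be sketched as follows: rewrite the Hubble evolution equation \\(\\dot{H}=-\\frac{3\\tilde{\\gamma}}{2}H^{2}+\\frac{1}{T_{1}}H+\\frac{1}{T_{2}^{2}}\\) as an ODE in redshift using \\(dt=-dz/[(1+z)H]\\), solve for \\(E(z)\\) on a grid, and then integrate \\(F(z)\\) of Eq. (13) to obtain the luminosity distance. The parameter values below are illustrative rather than the fitted ones, and the uniform grid with a midpoint integrator is our own simplification of the method described in the text.

```python
import numpy as np

def dEdz(z, E, gamma, tau1, tau2):
    # from Hdot = -(3*gamma/2)H^2 + H/T1 + 1/T2^2 and dt = -dz/[(1+z)H]
    return (1.5*gamma*E**2 - E/tau1 - 1.0/tau2**2) / ((1.0 + z)*E)

def dimensionless_dL(zmax=1.7, n=2000, gamma=1.0, tau1=50.0, tau2=0.96):
    """H0*d_L/c from the dynamical equation, without an explicit H(z) relation.
    tau1 = T1*H0 and tau2 = T2*H0 are illustrative values, not fitted ones."""
    z = np.linspace(0.0, zmax, n)
    dz = z[1] - z[0]
    E = np.empty_like(z)
    E[0] = 1.0
    for i in range(1, n):
        k1 = dEdz(z[i-1], E[i-1], gamma, tau1, tau2)
        k2 = dEdz(z[i-1] + 0.5*dz, E[i-1] + 0.5*dz*k1, gamma, tau1, tau2)
        E[i] = E[i-1] + dz*k2
    # F(z) of Eq. (13) by the trapezoidal rule; d_L = (c/H0)(1+z)F(z)
    F = np.concatenate(([0.0], np.cumsum(0.5*(1.0/E[1:] + 1.0/E[:-1])*dz)))
    return z, (1.0 + z)*F

z, DL = dimensionless_dL()
mu_offset = 5.0*np.log10(DL[1:])   # distance modulus up to the additive constant M'
```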
_Discussion._ The approach of the unified EOS considered in this paper has enabled us to describe the Universe contents related to several fundamental issues in cosmological evolution from a unified viewpoint. We have extended the \\(\\Lambda\\)CDM model into a more general framework by introducing this unified EOS. (i) This EOS describes the perfect fluid term, the dissipative effect and the cosmological constant in a single equation. (ii) This general EOS unifies the dark matter and the dark energy as a single dark viscous fluid and can be exactly reduced to the \\(\\Lambda\\)CDM model as a special case. (iii) The variable cosmological constant model is mathematically equivalent to the form obtained by using this EOS. (iv) We also find a scalar field that is equivalent to this EOS; moreover, the renormalizability condition of the scalar field requires the coefficient of \\(\\rho\\) to be precisely equal to \\(-1\\). Thus, it is very interesting that the bulk viscosity, the modified EOS, the variable cosmological constant model, and the scalar field model can all be described by one general dynamical equation for the scale factor. In this sense, our model has unified the exact solutions of several models. The viewpoint of a modified EOS is rather phenomenological; however, we have shown that it is strongly related to some fundamental concepts of cosmology. Forthcoming data sets will place further constraints on the modified EOS approach in cosmology.

Figure 1: (color online). The \\(1\\sigma\\) (solid line), \\(2\\sigma\\) (dashed line), and \\(3\\sigma\\) (dotted line) contour plots of the \\(T_{1}\\)-\\(T_{2}\\) relation in the extended \\(\\Lambda\\)CDM model. The vertical lines show the result of \\(T_{2}\\) if it reduces to the \\(\\Lambda\\)CDM model (\\(T_{1}\\rightarrow\\infty\\)).

Figure 2: The \\(1\\sigma\\) (solid line), \\(2\\sigma\\) (dashed line), and \\(3\\sigma\\) (dotted line) contour plots of the \\(T_{1}\\)-\\(T_{2}\\) relation in the extended \\(\\Lambda\\)CDM model.

X.-H.M. thanks Prof. S.D. Odintsov for helpful comments on the manuscript, and Profs. I. Brevik and L. Ryder for many discussions. X.-H.M. is supported by the National Natural Science Foundation of China (No. 10675062), and the BK21 Foundation.

## References

* (1) A.G. Riess _et al._, _Astron. J._**116**, 1009 (1998). * (2) N. Bahcall, J.P. Ostriker, S. Perlmutter, and P.J. Steinhardt, _Science_**284**, 1481 (1999). * (3) C. L. Bennett _et al._, _Astrophys. J. Suppl._**148**, 1 (2003). * (4) D. N. Spergel _et al._, astro-ph/0603449. * (5) A.G. Riess _et al._, _Astrophys. J._**607**, 665 (2004). * (6) P. Astier _et al._, _Astron. Astrophys._**447**, 31 (2006). * (7) A.G. Riess _et al._, astro-ph/0611572. * (8) D.J. Eisenstein _et al._, _Astrophys. J._**633**, 560 (2005). * (9) I. Brevik and O. Gorbunova, _Gen. Rel. Grav._**37**, 2039 (2005). * (10) R. Holman and S. Naidu, astro-ph/0408102. * (11) E. Babichev, V. Dokuchaev, and Y. Eroshenko, _Class. Quant. Grav._**22**, 143 (2005). * (12) S. Nojiri and S.D. Odintsov, _Phys. Rev. D_**72**, 023003 (2005). * (13) S. Nojiri, S.D. Odintsov, _Phys. Lett. B_**639**, 144 (2006). * (14) K.N. Ananda and M. Bruni, _Phys. Rev. D_**74**, 023523 (2006). * (15) S. Capozziello, S.D. Martino, and M. Falanga, _Phys. Lett. B_**299**, 494 (2002). * (16) J.Q. Xia, G.B. Zhao, H. Li, B. Feng, X. Zhang, astro-ph/0605366. * (17) Z.K. Guo, Y.S. Piao, X. Zhang, Y.Z. Zhang, astro-ph/0608165. * (18) S.K. Srivastava, astro-ph/0608241. * (19) J. Sola, H. Stefancic, _J. Phys. A_**39** (2006). * (20) L. Xu, H. Liu, C. Zhang, _Int. J. Mod. Phys. D_**15** (2006). * (21) X. Zhang, F.Q. Wu, J. Zhang, _JCAP_**0601** 003 (2006). * (22) S. Capozziello, S. Nojiri, and S.D. Odintsov, _Phys. Lett.
B_**634**, 93 (2006). * (23) S. Capozziello, S. Nojiri, and S.D. Odintsov, _Phys. Lett. B_**632**, 597 (2006). * (24) S. Nojiri and S.D. Odintsov, _Gen. Rel. Grav._**38**, 1285 (2006). * (25) S. Capozziello, V.F. Cardone, E. Elizalde, S. Nojiri, and S.D. Odintsov, _Phys. Rev. D_**73**, 043512 (2006). * (26) J. Ren and X.H. Meng, _Phys. Lett. B_**633**, 1 (2006). * (27) J. Ren and X.H. Meng, _Phys. Lett. B_**636**, 5 (2006). * (28) I. Brevik, _Phys. Rev. D_**65**, 127302 (2002). * (29) W. Zimdahl, _Phys. Rev. D_**53**, 5483 (1996). * (30) A.A. Coley, R.J. vandenHoogen, and R. Maartens, _Phys. Rev. D_**54**, 1393 (1996). * (31) L.P. Chimento, A.S. Jakubi, V. Mendez, and R. Maartens, _Class. Quantum Grav._**14**, 3363 (1997). * (32) A. DiPrisco, L. Herrera, and J. Ibanez, _Phys. Rev. D_**63**, 023501 (2000). * (33) I. Brevik, S. Nojiri, S.D. Odintsov, and L. Vanzo, _Phys. Rev. D_**70**, 043520 (2004). * (34) M. Cataldo, N. Cruz, and S. Lepe, _Phys. Lett. B_**619**, 5 (2005). * (35) A. Kamenshchik, U. Moschella, and V. Pasquier, _Phys. Lett. B_**511**, 265 (2001). * (36) A. Arbey, astro-ph/0506732. * (37) A. Arbey, _Phys. Rev. D_**74**, 043516 (2006). * (38) I.L. Shapiro and J. Sola, _JHEP_**0202**, 006 (2002). * (39) I.L. Shapiro and J. Sola, astro-ph/0401015. * (40) M.C. Bento, O. Bertolami, N.M.C. Santos, and A.A. Sen, _Phys. Rev. D_**71**, 063501 (2005). * (41) Y. Gong and Y.Z. Zhang, _Phys. Rev. D_**72**, 043518 (2005).
We generalize the \\(\\Lambda\\)CDM model by introducing a unified EOS to describe the Universe contents modeled as dark viscous fluid, motivated by the fact that a single constant equation of state (EOS) \\(p=-p_{0}\\) (\\(p_{0}>0\\)) reproduces the \\(\\Lambda\\)CDM model exactly. This EOS describes the perfect fluid term, the dissipative effect, and the cosmological constant in a unique framework and the Friedmann equations can be analytically solved. Especially, we find a relation between the EOS parameter and the renormalizable condition of a scalar field. We develop a completely numerical method to perform a \\(\\chi^{2}\\) minimization to constrain the parameters in a cosmological model directly from the Friedmann equations, and employ the SNe data with the parameter \\(\\mathcal{A}\\) measured from the SDSS data to constrain our model. The result indicates that the dissipative effect is rather small in the late-time Universe.
Provide a brief summary of the text.
arxiv-format/0605366v2.md
# Features in Dark Energy Equation of State and Modulations in the Hubble Diagram Jun-Qing Xia\\({}^{1}\\), Gong-Bo Zhao\\({}^{1}\\), Hong Li\\({}^{1}\\), Bo Feng\\({}^{2}\\) and Xinmin Zhang\\({}^{1}\\) \\({}^{1}\\)Institute of High Energy Physics, Chinese Academy of Science, P.O. Box 391-4, Beijing 100049, P. R. China \\({}^{2}\\) Research Center for the Early Universe(RESCEU), Graduate School of Science, The University of Tokyo, Tokyo 113-0033, Japan May 19, 2022. ## I Introduction The three year Wilkinson Microwave Anisotropy Probe observations (WMAP3)[1; 2; 3; 4; 5] have made so far the most precise probe on the Cosmic Microwave Background (CMB) Radiations. In the fittings to a constant equation of state (EOS) of dark energy (DE) \\(w\\), combinations of WMAP with other cosmological observations are in remarkable agreement with a cosmological constant (CC) except for the WMAP + SDSS combination, where \\(w>-1\\) is favored a bit more than \\(1\\sigma\\)[1]. The measurements of the SDSS power spectrum[6; 7] in some sense make the most precise probe of the current linear galaxy matter power spectrum and will hopefully get significantly improved within the coming few years. If the preference of \\(w>-1\\) holds on with the accumulation of cosmological observations this will also help significantly on our understandings towards dark energy. A cosmological constant, which is theoretically problematic at present[8; 9], will NOT be the source driving the current accelerated expansion and a preferred candidate would be something like quintessence[10]. On the other hand, the observations from the Type Ia Supernova (SNIa) in some sense make the only direct detection of dark energy[11; 12; 13; 14; 15; 16] and currently a combination of WMAP + SNIa or CMB + SNIa + LSS are well consistent with the cosmological constant and the preference of a quintessence-like equation state has disappeared[1]. It is noteworthy that in the combinations with the Lyman \\(\\alpha\\) forest Ref.[17] shows that a constant EOS \\(w<-1\\) is preferred slightly. Moreover when one considers the observational imprints by dynamical equation of state, an EOS which gets across \\(-1\\) is mildly favored by the current observations[18; 19]. Intriguingly, we are also aware that the predictions for the luminosity distance-redshift relationship from the \\(\\Lambda\\)CDM model by WMAP only are in notable discrepancies with the \"gold\" samples reported by Riess \\(et\\)\\(al\\)[14]. Although the discrepancy might be due to some systematical uncertainties in the Riess \"gold\" sample[14], this needs to be confronted with the accumulation of the 5-year SNLS observations[16] and the ongoing SNIa projects like the Supernova Cosmology Project (SCP) and from the Supernova Search Team (SST). Alternatively, this might be due to the implications of dynamical dark energy with oscillating equation of state. Although the temperature-temperature correlation (TT) power of WMAP3 is now cosmic variance limited up to \\(l\\sim 400\\) and the third peak is now detected, the tentative features as discovered by the first year WMAP[20; 21; 22] are still present: the low TT quadrupole and localized oscillating features on TT for \\(l\\sim 30-50\\)[3]. Although the signatures of glitches on the first peak as discovered by the first year WMAP have now become weak, they do exist and go beyond the limited cosmic variance[3]. 
While for the low WMAP TT quadrupole many authors are inclined to attribute it to cutoff primordial spectrum[23; 24], and even BEFORE the release of the first year WMAP Ref. [25] claimed oscillating primordial spectrum could lead to oscillations around the first peak of CMB TT power, similar effects _might_ be due to features on dark energy rather than inflation. For example, Ref.[26] has attributed the lowquadrupole to some subtle physics of dark energy during inflation. In the literature there have been many investigations on inflationary models with broken scale invariance[29; 30; 31; 32; 33; 34; 35]. Such features have been invoked to explain the previously observed feature at \\(k\\sim 0.05\\) Mpc\\({}^{-1}\\)[36; 37; 38; 39], or even to solve the small scale problem of the CDM model [40]1. Moreover Ref.[27] has claimed that the pre-WMAP data could not exclude a large running of the spectral index (for relevant study see also [47]), which has been somewhat dramatically confirmed by the first year WMAP and WMAP3 in combination with other observations[1; 5] except for the case with the Lyman alpha forest[17; 28]. Inflation and dark energy, both of which describe the accelerated expansion of the universe, might have some relations and Ref.[48] proposed a new picture of quintessential inflation. Ref.[49] has made an attempt trying to find such relations from the observational aspect. While oscillating primordial spectrum may be responsible for the glitches on CMB, oscillating EOS of dark energy may be helpful to solve the coincidence problem of dark energy[50; 51; 52; 53]. In Ref.[51] in the framework of Quintom[54], an attempt was carried out to unify dark energy and inflation, meanwhile solving the coincidence problem of DE. Footnote 1: For other solutions to this problem, see e.g. Refs. [41]-[45] and for a review on this issue see e.g. Ref.[46]. On theoretical aspect dark energy is among the biggest problem of modern cosmology[8; 9; 55; 56; 57; 58; 59]. Dynamical dark energy models rather than the simple cosmological constant have attracted more interests in theoretical studies[60]. In cases where the mysterious component of DE is driven by scalar fields[10; 62; 63] the EOS is typically not like that by a cosmological constant, this opens a possibility for us to tell CC from scalar dark energy models with the cosmological observations. Moreover the current observations have already opened a robust window to probe the behavior of dark energy independently, and in cases when \\(w\ eq-1\\) is preferred, dynamical DE models which satisfy the observations are put forward[54; 60; 61; 63; 64; 65; 66; 67]. Given the current ambiguity on theoretical study of dark energy, in the observational probe of DE one often uses the parametrizations of EOS. Previously in the observational probes on oscillating features of dark energy EOS Ref.[68] has made some preliminary fittings to the pre-WMAP3 data and some relevant studies have been carried out later by Refs.[53; 69]. In the present paper with the method dealing with the perturbations of Quintom developed in Refs.[18; 49; 70; 71], we aim to probe the time dependence of the dark energy EOS in light of WMAP3 and the combination with other tentative cosmological observations from SDSS and SNIa from the Riess \"gold\" sample or the SNLS observations. 
The background evolution and perturbations of Quintom can be identified with one normal quintessence and one phantom except for the phantom crossing point, where the natural matching condition is motivated by the case with a high-dimensional operator on the kinetic term or the two-field case and the individual sound speed for each field is assumed to be unity[18; 49; 70; 71]. In the present work we mainly focus on cases where the EOS is oscillating or with local bumps. By performing a global analysis with the Markov Chain Monte Carlo (MCMC) method, we find the current observations, in particular the WMAP3 + SDSS data combination, allow large oscillations of the EOS which can leave oscillating features on the (residual) Hubble diagram, and such oscillations are potentially detectable by future observations like SNAP. Local bumps of dark energy EOS can also leave imprints on CMB, LSS and SNIa. In cases when the bumps take place at low redshifts and the effective EOS is close to \\(-1\\), CMB and LSS observations cannot give stringent constraints on such possibilities. However, geometrical observations like (future) SNIa can possibly detect such features. On the other hand when the local bumps take place at higher redshifts beyond the detectability of SNIa, future precise observations like Gamma-ray bursts and observations of 21 cm tomography, CMB and LSS may possibly detect such features. In particular, we find that bump-like dark energy EOS on high redshifts _might_ be responsible for the features of WMAP on ranges \\(l\\sim 20-40\\), which is interesting and deserves addressing further. The remaining part of our paper is structured as follows: in Section II we describe the method and the data; in Section III we present our results on the determination of cosmological parameters with (WMAP3)[1; 2; 3; 4; 5], SNIa [14; 16], Sloan Digital Sky Survey 3-D power spectrum (SDSS-P(k)) [6] by global fittings using the MCMC technique; discussions and conclusions are presented in the last section. ## II Method and data In the parametrization of oscillating EOS one typically needs four parameters for the amplitude, center values, phase and the frequency. And in our analysis we have used \\[w=w_{0}+w_{1}\\sin(w_{2}\\ln a+w_{3})\\ . \\tag{1}\\] The case with \\(w_{0}=-1\\) and \\(w_{1}=0\\) corresponds to the cosmological constant. The method we adopt is based on the publicly available Markov Chain Monte Carlo package CosmoMC[47; 72], which has been modified to allow for the inclusion of dark energy perturbations with EOS getting across \\(-1\\)[70]. Our most general parameter space is \\[{\\bf p}\\equiv(\\omega_{b},\\omega_{c},\\Theta_{S},\\tau,w_{0},w_{1},w_{2},w_{3},n_{s},\\log[10^{10}A_{s}])\\, \\tag{2}\\] where \\(\\omega_{b}=\\Omega_{b}h^{2}\\) and \\(\\omega_{c}=\\Omega_{c}h^{2}\\) are the physical baryon and cold dark matter densities relative to critical density, \\(\\Theta_{S}\\) is the ratio (multiplied by 100) of the sound horizon to the angular diameter distance at decoupling, \\(\\tau\\) is the optical depth, \\(A_{s}\\) is defined as the amplitude of initial power spectrum and \\(n_{s}\\) measures the spectral index. Assuming a flat Universe motivated by inflation and basing on the Bayesian analysis, we vary the above 10 parameters and fit to the observational data with the MCMC method. We take the weak priors as: \\(\\tau<0.8,0.5<n_{s}<1.5,-4<w_{0}<1,-10<w_{1}<10,0<w_{2}<20,-\\pi/2<w_{3}<\\pi/2\\), a cosmic age tophat prior as 10 Gyr\\(<t_{0}<\\)20 Gyr. 
The choice of priors on \\(w_{0},w_{1},w_{2},w_{3}\\) have been set to allow for spread in all of the parameters simultaneously. Furthermore, we make use of the HST measurement of the Hubble parameter \\(H_{0}=100h\\) km s\\({}^{-1}\\)Mpc\\({}^{-1}\\)[73] by multiplying the likelihood by a Gaussian likelihood function centered around \\(h=0.72\\) and with a standard deviation \\(\\sigma=0.08\\). We impose a weak Gaussian prior on the baryon and density \\(\\Omega_{b}h^{2}=0.022\\pm 0.002\\) (1 \\(\\sigma\\)) from Big Bang nucleosynthesis[74]. The bias factor of LSS has been used as a continuous parameter to give the minimum \\(\\chi^{2}\\). In our calculations we have taken the total likelihood to be the products of the separate likelihoods of CMB, SNIa and LSS. Alternatively defining \\(\\chi^{2}=-2\\log\\mathcal{L}\\), we get \\[\\chi^{2}_{total}=\\chi^{2}_{CMB}+\\chi^{2}_{SNIa}+\\chi^{2}_{LSS}\\ \\ \\ . \\tag{3}\\] In the computation of CMB we have included the three-year WMAP (WMAP3) data with the routine for computing the likelihood supplied by the WMAP team [5]. To be conservative but more robust, in the fittings to the 3D power spectrum of galaxies from the SDSS[6] we have used the first 14 bins only, which are supposed to be well within the linear regime[7]. In the calculation of the likelihood from SNIa we have marginalized over the nuisance parameter[75]. The supernova data we use are the \"gold\" set of 157 SNIa published by Riess \\(et\\)\\(al\\) in [14] and the 71 high redshift type Ia supernova discovered during the first year of the 5-year Supernova Legacy Survey (SNLS)[16] respectively. In the fittings to SNLS we have used the additional 44 nearby SNIa, as also adopted by the SNLS group[16]. Also to be conservative but more robust, we did not try to combine SNLS with the Riess sample simultaneously for cosmological parameter constraints, namely in one case for SNIa fitting we use SNLS data only and in another case the Riess sample only. For each regular calculation, we run 6 independent chains comprising of 150,000-300,000 chain elements and spend thousands of CPU hours to calculate on a cluster. The average acceptance rate is about 40%. And for the convergence test typically we get the chains satisfy the Gelman and Rubin[76] criteria where R-1\\(<\\)0.1. In our study for future perspectives on features of dark energy we have used the cosmic-variance[77] limited CMB TT spectrum up to \\(l=2000\\). For SNIa we have used SNAP[78] simulations2 and for LSS, we have adopted the LAMOST[80] simulations. In the remaining part of this paper the fiducial power law \\(\\Lambda\\)CDM model adopted is as follows: Footnote 2: SNAP is one of the several candidate mission concepts for the Joint Dark Energy Mission (JDEM). Nowadays there have been many proposed dark energy surveys[79]. \\[(\\omega_{b},\\omega_{c},h,z_{r},n_{s},A_{s})=(0.022,0.12,0.7,12,1,2.3\\times 10^{-9 })\\ \\, \\tag{4}\\] where \\(z_{r}\\) is the reionization redshift and the slightly different notations from previous Eq. (2) are due to the difference in the CAMB[81; 82] and CosmoMC[47; 72] default parameters. Such a fiducial model will be used to generate future CMB, SNIa and LSS data. In addition the illustrative figures will also be generated with such background parameters. Moreover in generating the illustrative figures on linear power spectrum of LSS we have fixed the bias factor to be unity. 
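For concreteness, the SNIa part of Eq. (15) can be evaluated for the oscillating EOS of Eq. (1) as sketched below: compute the dimensionless luminosity distance for a trial parameter set and marginalize analytically over the additive magnitude offset (the nuisance parameter \\(\\mathcal{M}^{\\prime}\\)) with a flat prior. This is one standard treatment; we do not claim it reproduces the exact prescription of Ref. [75], the \\(\\mathcal{A}\\) term is omitted, and the data points shown are placeholders, not the Riess or SNLS samples.

```python
import numpy as np

def w_de(a, w0, w1, w2, w3):
    # Eq. (1): w = w0 + w1*sin(w2*ln a + w3)
    return w0 + w1*np.sin(w2*np.log(a) + w3)

def E_of_z(z, w0, w1, w2, w3, Om=0.3):
    a = 1.0/(1.0 + z)
    integrand = 3.0*(1.0 + w_de(a, w0, w1, w2, w3))/(1.0 + z)
    I = np.concatenate(([0.0],
        np.cumsum(0.5*(integrand[1:] + integrand[:-1])*np.diff(z))))
    rho_de = np.exp(I)                       # rho_DE(z)/rho_DE(0), flat universe
    return np.sqrt(Om*(1.0 + z)**3 + (1.0 - Om)*rho_de)

def chi2_sn(z_obs, mu_obs, sigma, params, Om=0.3, ngrid=2000):
    """SNIa chi^2 with the additive magnitude offset marginalized analytically
    (flat prior), one common way to treat the nuisance parameter M'."""
    zg = np.linspace(0.0, z_obs.max(), ngrid)
    E = E_of_z(zg, *params, Om=Om)
    F = np.concatenate(([0.0],
        np.cumsum(0.5*(1.0/E[1:] + 1.0/E[:-1])*np.diff(zg))))
    DL = (1.0 + z_obs)*np.interp(z_obs, zg, F)   # dimensionless luminosity distance
    delta = mu_obs - 5.0*np.log10(DL)
    A = np.sum((delta/sigma)**2)
    B = np.sum(delta/sigma**2)
    C = np.sum(1.0/sigma**2)
    return A - B**2/C

# toy usage with placeholder (not real) data points
z_obs = np.array([0.1, 0.5, 1.0, 1.5])
mu_obs = np.array([38.3, 42.3, 44.1, 45.1])
sigma = np.array([0.2, 0.2, 0.25, 0.3])
print(chi2_sn(z_obs, mu_obs, sigma, params=(-1.0, 0.3, 10.0, 0.0)))
```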
The projected satellite SNAP (Supernova / Acceleration Probe) would be a space based telescope with a one square degree field of view with 1 billion pixels. It aims to increase the discovery rate for SNIa to about 2,000 per year[78]. The simulated SNIa data distribution is taken from Refs. [83; 84; 85]. As for the error, we follow the ref. [83] which takes the magnitude dispersion 0.15 and the systematic error \\(\\sigma_{sys}(z)=0.02\\times z/1.7\\), and the whole error for each data is \\[\\sigma_{mag}(z_{i})=\\sqrt{\\sigma_{sys}^{2}(z_{i})+\\frac{0.15^{2}}{n_{i}}}\\ \\, \\tag{5}\\] where \\(n_{i}\\) is the number of supernova in the i'th redshift bin. The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) project as one of the National Major Scientific Projects undertaken by the Chinese Academy of Science, aims to measure \\(\\sim 10^{7}\\) galaxies with mean redshift \\(z\\sim 0.2\\)[80]. In the measurements of large scale matter power spectrum of galaxies there are generally two statistical errors: sample variance and shot noise. The uncertainty due to statistical effects, averaged over a radical bin \\(\\Delta k\\) in Fourier space, is [86] \\[(\\frac{\\sigma_{P}}{P})^{2}=2\\times\\frac{(2\\pi)^{3}}{V}\\times\\frac{1}{4\\pi k^{2} \\Delta k}\\times(1+\\frac{1}{\\bar{n}P})^{2}\\ . \\tag{6}\\] The initial factor of 2 is due to the real property of the density field, \\(V\\) is the survey volume and \\(\\bar{n}\\) is the mean galaxy density. In our simulations for simplicity and to be conservative, we use only the linear matter power spectrum up to \\(k\\sim 0.15\\ h\\ {\\rm Mpc}^{-1}\\). For the cases with future cosmic variance limited CMB and LAMOST, we only show the error bars for illustrations, and a further analysis with fittings is not the aim of the present paper. For the studies on bump-like dark energy EOS, we also plot the illustrative results rather than make global fittings. And the parametrized EOS takes the following form: \\[w=w_{0}+A(\\ln a-\\lambda)^{3}\\exp(-(\\ln a-\\lambda)^{4}/d)\\ . \\tag{7}\\] We should point out here that the parametrization in Eq.(7) is a specific example only and there are certainly different parametrizations to illustrate the bump-like features in dark energy EOS and the resulting cosmological imprints might be different. ## III Results We start with the oscillating case. First of all in Fig.1 we delineate the illustrative imprints of oscillating dark energy equation of state (EOS) on CMB (top left), LSS (top right) and on the Hubble diagram (lower panel).In the Hubble diagram the distance modulus \\(\\mu\\) is defined as the apparent magnitude \\(m\\) minus the absolute magnitude \\(M\\): \\[\\mu\\equiv m-M=5\\lg\\left(\\frac{d_{L}}{1Mpc}\\right)+25, \\tag{8}\\] with \\(d_{L}\\) being the luminosity distance: \\[\\frac{d_{L}}{1+z}=\\int_{0}^{z}\\frac{dz^{\\prime}}{H(z^{\\prime})}. \\tag{9}\\] In comparison the imprints by the \\(\\Lambda\\)CDM cosmology have also been displayed. The main contribution of dark energy on CMB is on the geometrical angular diameter distance to the last scattering surface. This in turn determines the locations of CMB peaks. In some cases the effects on the large scale matter power spectrum of LSS are also significant. In Fig.1 we can find for cases where \\(w_{0}=-0.5\\), the effects on CMB turn out to be most eminently modulated. 
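The two error models of Eqs. (5) and (6) are simple to evaluate directly; the sketch below codes them with illustrative numbers that are not meant to be the actual SNAP or LAMOST specifications.

```python
import numpy as np

def snap_sigma_mag(z, n_per_bin):
    """Per-bin magnitude error of Eq. (5): 0.15 mag intrinsic scatter plus the
    systematic floor sigma_sys = 0.02*z/1.7."""
    sigma_sys = 0.02*z/1.7
    return np.sqrt(sigma_sys**2 + 0.15**2/n_per_bin)

def pk_fractional_error(k, dk, volume, nbar, P):
    """Fractional band-power error of Eq. (6): sample variance plus shot noise.
    Units assumed: k, dk in h/Mpc; volume in (Mpc/h)^3; nbar in (h/Mpc)^3; P in (Mpc/h)^3."""
    return np.sqrt(2.0*(2.0*np.pi)**3/volume/(4.0*np.pi*k**2*dk))*(1.0 + 1.0/(nbar*P))

# illustrative numbers only
print(snap_sigma_mag(z=1.0, n_per_bin=100))
print(pk_fractional_error(k=0.1, dk=0.01, volume=1.0e9, nbar=3.0e-4, P=1.0e4))
```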
This is mainly due to the fact that in such cases as \\(w_{0}\\) deviates significantly from \\(-1\\) and \\(w\\) has been _relatively_ matter-like, the contributions to large scale CMB are significant due to the Integrated Sachs-Wolfe (ISW) effects. An oscillating EOS of dark energy can leave somewhat similar imprints as oscillating primordial spectrum, as explored in Ref. [25], although the difference is also noteworthy. We find interestingly that in cases \\(w_{0}=-1.5\\), the effects are not as large as \\(w_{0}=-0.5\\), which is in part due to the effects of dark energy perturbations and consistent with the previous analysis in Ref. [70]. The effects of dark energy on CMB (and LSS) are mainly geometrical effects and can sometimes be understood from the formula on \\(w_{eff}\\)[87]: \\[w_{eff}\\equiv\\frac{\\int da\\Omega(a)w(a)}{\\int da\\Omega(a)}\\ . \\tag{10}\\] On the other hand the Hubble diagram being displayed with the redshift, the effects of dark energy, in cases when DE dominates the Universe the EOS has oscillating features, can leave oscillations on the the Hubble diagram. For the next step we show the results from our global fittings on the oscillating EOS in Eq.(1). In Table 1 we delineate the mean \\(1\\sigma\\) constrains on the relevant cosmological parameters using different combination of WMAP, SNIa and SDSS. Shown together are those with maximum likelihood (dubbed ML) and those which give the most eminent oscillating effects in the \\(2\\sigma\\) allowed regions (dubbed \"Most\" in the table). For simplicity the parameters related to bayon fractions and the primordial spectrum are not displayed, and the shown parameters are enough for the study on the Hubble diagrams below. Typically in the realizations of MCMC as the cosmological parameters are not exactly gaussian distributed, the center mean values are different from the best fit cases3. We find that although the best fit cases are given by oscillating EOS, a cosmological constant ( \\(w_{0}=-1,w_{1}=0\\) ) is well within \\(1\\sigma\\) for all the three data combinations. The accumulation of the observational data will help to break such a degeneracy. It is very interesting that \\(w_{1}\\) is much better constrained by SNLS (together with the 44 low redshift SNIa) rather than by the Riess \"gold\" sample. On the other hand the background parameter \\(H_{0}\\) is better constrained by the Riess \"gold\" sample. One can also find that the frequency of oscillations, \\(w_{2}\\), is relatively the worst constrained parameter. We will turn to this in more details in the remaining part of the present paper. Footnote 3: More detailed discussions are available at [http://cosmocoffee.info/](http://cosmocoffee.info/). In Fig.2 we delineate the one dimensional posterior constraints on the oscillating EOS in Eq.(1), showing together some relevant background cosmological parameters. The black lines are constraints from combined analysis of WMAP3 + SDSS. The red lines are from WMAP3 + SDSS + Riess sample and the blue are from WMAP3 + SDSS + SNLS combined analysis. We can find that in some sense \\(w_{2}\\) and \\(w_{3}\\) are not well constrained by the current observations. As \\(w_{3}\\) represents the phase of oscillations and the range ( \\(-\\pi/2,\\pi/2\\) ) is the largest 1-period limit (note we have allowed \\(w_{1}\\) to be both positive and negative), our prior on \\(w_{2}\\), though as large as given above, turn out to be somewhat too optimistic. 
The effects are more eminent in the resulting two dimensional contours. In Fig.3 we plot the corresponding two dimensional posterior constraints. The upper panel is on the constraints from the combined analysis of WMAP3 + SDSS. The lower left panel is from WMAP3 + SDSS + Riess sample and the lower right from WMAP3 + SDSS + SNLS combined analysis. Figure 1: Illustrative imprints of oscillating dark energy equation of state (EOS) on CMB (top left), LSS (top right) and on the Hubble diagram (lower panel). It is understandable that in cases one or two parameters are not well constrained by the observations, changing the priors (in our case on \\(w_{2}\\)) will inevitably affect the final results. Such a feature has also appeared in previous investigations on the oscillating primordial spectrum, regardless of fitting methods one uses such as grids[88] or MCMC[89; 90]. For the grid case the problem is how small one should set the minimum grid, and there will be finer features on scales smaller than the minimum steps4. For our conventional cases with MCMC[18; 49; 71; 91] to get the converged results, we test the convergence of the chains by Gelman and Rubin[76] criteria and typically get R-1 to be of order 0.01, which is more conservative than the recommended value R-1\\(<\\)0.1. However in the present case with an oscillating EOS, in the combinations with SNLS the maximum value of R-1 is 0.04, for the combination with Riess \"gold\" sample, we have value: R-1 \\(\\sim\\) 0.1 and for the the case with WMAP3 + SDSS we have the maximum R-1 to be 0.25 instead, although for the last case we have run much longer time for the chains. This also shows that for the case with oscillations at least for WMAP3 + SDSS our chains are not well converged. On the other hand this shows that for the current observations a large oscillation on the dark energy EOS is allowed. 
In Ref.[89] the authors quoted \\begin{table} \\begin{tabular}{|c|c c c|c c c|c c c|} \\hline & \\multicolumn{3}{|c|}{WMAP+SDSS} & \\multicolumn{3}{|c|}{WMAP+SDSS+Riess} & \\multicolumn{3}{|c|}{WMAP+SDSS+SNLS} \\\\ & Mean & ML & Most & Mean & ML & Most & Mean & ML & Most \\\\ \\hline \\(w_{0}\\) & \\(-0.805^{+0.413}_{-0.394}\\) & \\(-0.465\\) & \\(-0.250\\) & \\(-0.845^{+0.308}_{-0.222}\\) & \\(-0.508\\) & \\(-0.450\\) & \\(-0.886^{+0.283}_{-0.215}\\) & \\(-0.485\\) & \\(-0.400\\) \\\\ \\(w_{1}\\) & \\(0.874^{+2.910}_{-3.068}\\) & \\(2.62\\) & \\(5.80\\) & \\(0.609^{+1.282}_{-1.580}\\) & \\(1.58\\) & \\(2.90\\) & \\(0.303^{+0.801}_{-0.792}\\) & \\(1.29\\) & \\(1.90\\) \\\\ \\(w_{2}\\) & \\(11.6\\pm 5.9\\) & \\(18.3\\) & \\(11.5\\) & \\(10.8^{+5.3}_{-6.0}\\) & \\(12.0\\) & \\(10.8\\) & \\(9.52^{+1.047}_{-9.49}\\) & \\(8.28\\) & \\(9.50\\) \\\\ \\(w_{3}\\) & \\(0.179^{+0.990}_{-1.730}\\) & \\(-0.0707\\) & \\(1.5000\\) & \\(-0.0446^{+0.8646}_{-1.5203}\\) & \\(-0.131\\) & \\(1.400\\) & \\(0.0831^{+0.9316}_{-1.6505}\\) & \\(0.356\\) & \\(1.400\\) \\\\ \\(\\Omega_{m}\\) & \\(0.345^{+0.060}_{-0.057}\\) & \\(0.332\\) & \\(0.350\\) & \\(0.308^{+0.033}_{-0.032}\\) & \\(0.325\\) & \\(0.300\\) & \\(0.288^{+0.033}_{-0.032}\\) & \\(0.312\\) & \\(0.300\\) \\\\ \\(\\Omega_{\\Lambda}\\) & \\(0.655^{+0.057}_{-0.060}\\) & \\(0.668\\) & \\(0.650\\) & \\(0.692^{+0.033}_{-0.033}\\) & \\(0.675\\) & \\(0.700\\) & \\(0.712^{+0.032}_{-0.033}\\) & \\(0.688\\) & \\(0.700\\) \\\\ \\(H_{0}\\) & \\(61.8^{+5.6}_{-5.9}\\) & \\(62.0\\) & \\(62.0\\) & \\(65.6\\pm 3.5\\) & \\(63.1\\) & \\(65.5\\) & \\(68.2^{+3.6}_{-3.7}\\) & \\(62.5\\) & \\(68.0\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: Mean 1\\(\\sigma\\) constrains on cosmological parameters using different combination of WMAP, SNIa and SDSS. Shown together are those with maximum likelihood (ML) and those which give the most eminent oscillating effects within the 2\\(\\sigma\\) allowed regions (Most). Figure 2: One dimensional posterior constraints on the parametrized EOS: \\(w=w_{0}+w_{1}\\sin(w_{2}\\ln a+w_{3})\\) and on the relevant background cosmological parameters. The black solid lines are constraints from combined analysis of WMAP3 + SDSS. The red dash-dot lines are from WMAP3 + SDSS + Riess sample and the blue dashed lines are from WMAP3 + SDSS + SNLS combined analysis. a value where R-1 \\(<\\) 0.1 (their R-1 is slightly different from ours) and the structures of our resulting two-dimensional figures are similar to those by Refs.[89; 90]. These have implied that in the presence of oscillations typically the current data are not good enough to well break the degeneracy among the parameters. In our case although the parameters like \\(n_{s}\\), \\(w_{0}\\), \\(\\Omega_{DE}\\), \\(\\Omega_{m}\\) and \\(H_{0}\\) have already been well constrained, R-1 is in some cases relatively large due to the parameters \\(w_{2}\\) and \\(w_{3}\\). This has in turn led to the separate likelihood spaces in the three data combinations at 68% and 95% C.L. Interestingly in the WMAP3 + SDSS + Riess sample the 1 \\(\\sigma\\) contours clearly separate into two different peaks in the lower three panels, resembling the contours in Ref. [89]. Such a behavior is less eminent in the WMAP3 + SDSS + SNLS sample, which nevertheless does exist in the \\(w_{2}-w_{3}\\) contour. We should point out that given the priors and the R-1 value specified above, our results are robust. 
In Fig.4 we delineate the resulting posterior 1\\(\\sigma\\) constraints on the low-redshift behavior of the oscillating EOS for the three different data combinations. The red lines are given by the mean central values as shown in Table 1 and the blue dashed lines are the 1\\(\\sigma\\) allowed regions. The green dashed lines are the illustrative 1\\(\\sigma\\) regions explored by future SNAP[78]. Thus future SNIa observations like SNAP can help significantly to break the degeneracy, and in some sense detect such oscillating features of the dark energy EOS. In Fig.5 we delineate the corresponding imprints on the residual Hubble diagram (upper panels) and the Hubble diagram (lower panels). In the left panels, the lines dubbed "Osc" are given by the best fit values of WMAP3 + SDSS, the underlined "S" is by WMAP3 + SDSS + SNLS and "R" by WMAP3 + SDSS + Riess sample. The right panels show the corresponding cases where, within the \\(2\\sigma\\) allowed regions, the oscillating effects are relatively pronounced. With the same conventions, in the upper panels the blue lines dubbed "SNAP" illustrate the detectability of future SNAP[78]. It is noteworthy that the combination with SNLS gives relatively the most stringent constraints. We find that for the case with WMAP + SDSS only, a larger oscillation is allowed and there are some oscillating features on the (residual) Hubble diagram. The effects of the "Most" case in Table 1 are more pronounced, and could possibly be detected even by the CURRENTLY ONGOING SNIa observations. In the cases combined with the Riess "gold" sample or with SNLS, the oscillating effects are less pronounced. Nevertheless, oscillations are still present and, in large areas of the parameter spaces, SNAP will be able to detect such features5. Footnote 5: In some sense, for the "Most" case of the WMAP + SDSS combination in Table 1, the relevant parameter space will get relatively stringently constrained by combinations with SNLS or with the Riess "gold" sample. However, given the possible discrepancies between SNLS and the Riess sample, which has been somewhat illustrated in Ref. [1], currently ongoing SNIa observations can in some sense detect the (semi-)oscillating features in the dark energy EOS as allowed by the WMAP + SDSS combination.

Figure 3: Two-dimensional posterior constraints on the parametrized EOS: \\(w=w_{0}+w_{1}\\sin(w_{2}\\ln a+w_{3})\\) at 68% and 95% C.L. The upper panel shows the constraints from the combined analysis of WMAP3 + SDSS. The lower left panel is from WMAP3 + SDSS + Riess sample and the lower right from WMAP3 + SDSS + SNLS combined analysis.

Figure 4: Resulting posterior constraints on the low-redshift behavior of the parametrized EOS: \\(w=w_{0}+w_{1}\\sin(w_{2}\\ln a+w_{3})\\). The red lines are given by the mean central values as shown in Table 1 and the outside blue dashed lines are the \\(1\\sigma\\) allowed regions by WMAP3 + SDSS (left), WMAP3 + SDSS + SNLS (center) and WMAP3 + SDSS + Riess sample (right). The inside green dashed lines are the illustrative \\(1\\sigma\\) regions explored by future SNAP.

Figure 5: Resulting imprints of the parametrized EOS: \\(w=w_{0}+w_{1}\\sin(w_{2}\\ln a+w_{3})\\) on the residual Hubble diagram (upper panels) and the Hubble diagram (lower panels). In the left panels, the lines dubbed "Osc" are given by the best fit values of WMAP3 + SDSS, the underlined "S" is by WMAP3 + SDSS + SNLS and "R" by WMAP3 + SDSS + Riess sample. The right panels show the corresponding cases where within the \\(2\\sigma\\) allowed regions the oscillating effects are relatively pronounced. In the upper panels the blue lines dubbed "SNAP" illustrate the detectability of future SNAP.

Now we turn to the study of local bump-like features of the dark energy EOS. It is physically intuitive that the dark energy EOS might not be exactly periodic; instead it could be semi-periodic or even have features at some specific redshifts. The parametrization adopted in Eq. (7) ( \\(w=w_{0}+A(\\ln a-\\lambda)^{3}\\exp(-(\\ln a-\\lambda)^{4}/d)\\) ) can accommodate a constant EOS in cases where \\(A=0\\). \\(\\lambda\\) determines the locations of the bumps, \\(d\\) determines their width and \\(A\\) determines their amplitudes. One can easily see that at the point \\(\\ln a=\\lambda\\) we have \\(w=w_{0}\\), and that for \\(\\ln a<\\lambda\\) (high redshifts) there will be a trough while for \\(\\ln a>\\lambda\\) (low redshifts) there will be a peak (this happens only in cases where \\(\\lambda<0\\), which is the region of our interest). At very high redshifts, where \\(\\ln a\\ll\\lambda\\), the second term gets damped exponentially and w approaches the value of \\(w_{0}\\). For simplicity, in the illustrative study here we fix \\(w_{0}=-1\\).

Firstly we consider the cases where the bump takes place at low redshifts, which can be detected by (future) SNIa observations. In Fig.6 we delineate the imprints of bump-like dark energy EOS on the (residual) Hubble diagram and dark energy density fractions (left) and the corresponding effects on CMB and LSS (right). We have chosen \\(A=2\\times 10^{7},\\lambda=-0.03\\) and \\(d=10^{-7}\\) for the first illustration. From Fig.6 we find that for such a specific choice of parameters the effects on CMB are indistinguishable from the corresponding \\(\\Lambda\\)CDM model beyond cosmic variance, nor can LSS tell one (bump-like) from the other (\\(\\Lambda\\)CDM). This is understandable: since the effects of the peak and trough largely cancel out in their contributions to \\(w_{eff}\\) defined in Eq. (10), the effects on CMB and LSS are _almost_ indistinguishable from the \\(\\Lambda\\)CDM case, and the results are consistent with those in Refs.[70; 87]. In such a case, observations with geometric constraints on the redshift "tomography" will be of great importance to break such a degeneracy. The resulting (residual) Hubble diagrams are different from the \\(\\Lambda\\)CDM model, and observations like SNAP can possibly detect such features. Although SNIa in some sense provide the only direct detection of dark energy, it is believed that, due to unavoidable effects such as dust, the redshifts probed by SNIa cannot be too high; for example, in SNAP simulations one typically takes \\(z\\leq 1.7\\). In cases when the bump-like features take place at higher redshifts, we cannot expect to probe such features even with future SNAP observations. On the other hand, as the current observations are consistent with a \\(\\Lambda\\)-like dark energy, typically we do not expect dark energy to be non-negligible in epochs far before the matter-radiation equality epoch, although such a possibility remains (for relevant studies see e.g. [92; 93]). Moreover, with our parametrized EOS given in Eq. (7) we cannot expect the dark energy component to be significant on very high redshifts, especially for cases where \\(w_{0}=-1\\)6.
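As a concrete illustration of the low-redshift bump just described, the sketch below evaluates the parametrization of Eq. (7) with the quoted values \\(A=2\\times 10^{7}\\), \\(\\lambda=-0.03\\), \\(d=10^{-7}\\) and \\(w_{0}=-1\\), and integrates the background evolution to obtain the dark energy density fraction, in the spirit of the left panels of Fig. 6. The flat-universe value \\(\\Omega_{m}=0.3\\), the grid resolution and the printed redshifts are illustrative choices of ours.

```python
import numpy as np
from scipy.integrate import trapezoid

w0, A, lam, d = -1.0, 2.0e7, -0.03, 1.0e-7   # bump parameters quoted in the text
Om = 0.3                                      # illustrative flat-universe choice

def w_bump(a):
    """Bump-like EOS of Eq. (7): w0 + A (ln a - lam)^3 exp(-(ln a - lam)^4 / d)."""
    x = np.log(a) - lam
    return w0 + A * x**3 * np.exp(-x**4 / d)

def rho_de_ratio(a):
    """rho_DE(a)/rho_DE(1) = exp(3 * integral of [1 + w] d ln a' from ln a to 0)."""
    lna = np.linspace(np.log(a), 0.0, 20001)   # fine grid to resolve the narrow bump
    return np.exp(3.0 * trapezoid(1.0 + w_bump(np.exp(lna)), lna))

def omega_de(a):
    de = (1.0 - Om) * rho_de_ratio(a)
    return de / (de + Om * a**-3)

for z in (0.0, 0.02, 0.05, 0.1, 0.5):
    a = 1.0 / (1.0 + z)
    print(f"z = {z:4.2f}   w = {w_bump(a):+9.2f}   Omega_DE = {omega_de(a):.3f}")
```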
Under such a circumstance we take the locations of bumps to be slightly larger than \\(z=2\\): in one case \\(\\lambda=-1.5\\) and in another case \\(\\lambda=-1.8\\). These correspond to \\(z\\simeq 3.5\\) and \\(5.0\\) respectively where \\(\\ln a=\\lambda\\). For both of the cases we have fixed \\(A=6\\times 10^{5}\\) and \\(d=10^{-5}\\). In Fig.7 we show the imprints of bump-like dark energy EOS on CMB, LSS (right panels), on dark energy density fractions and on the residual Hubble diagram (left panels). The bumps take place at high redshifts which cannot be detectable by (future) SNIa observations. On the other hand, observations of Gamma-ray bursts (GRB) can probe relatively higher redshifts than SNIa[94; 95; 96], with the accumulations of GRB events as by SWIFT and better understandings on the systematics one is hopefully able to detect such features through geometric observations on the Hubble diagrams. Moreover observations like the 21 cm tomography [97; 98; 99; 100] are potentially able to give almost a large number of independent samples compared with CMB and LSS observations[98], features of bump-like dark energy EOS on intermediate redshifts are promisingly detected by observations of the 21 cm tomography. In the right panels of Fig.7 we find dramatically that bump-like features are possibly detectable by cosmic variance limited CMB (e.g. WMAP3[1; 2; 3; 4; 5]) and LAMOST[80]. Part of the reason lies on the fact that with a different width and a higher redshift compared with the previous example in Fig.6, here the energy density fraction \\(\\Omega_{DE}(a)\\) increases and decreases rapidly and the magnitude of the peak value of \\(\\Omega_{DE}(a)\\) is somewhat lower, the asymmetric shapes of the peak and trough cannot compensate thoroughly to give \\(w_{eff}\\) close to \\(-1\\). In fact one can find for our parametrization given in Eq. (7) one will typically get \\(w_{eff}>-1\\) and in cases with larger \\(-\\lambda\\) the deviation of \\(w_{eff}\\) from -1 can be larger, which will also lead to some shifts on CMB peaks, as shown in Fig.7. It is noteworthy to point out that both the first year WMAP and WMAP3 show some local glitches out of cosmic variance at \\(l\\sim 20-40\\). In our case of Fig.7 there are also some relevant features on such scales and this deserves further investigations. While such features can be affected by different foreground analysis and so forth[101; 102; 103], it is theoretically possible that the features of WMAP TT on scales \\(l\\sim 20-40\\) are due to some bump-like features of dark energy EOS, or some semi-oscillations. Note here we have fixed the background parameters rather than performing a global analysis, and in cases we take into account the parameter degeneracies the detectability of the observations would typically get Figure 6: Imprints of bump-like dark energy EOS (denoted by ”bulk model”) on the (residual) Hubble diagram, dark energy density fractions (left) and the corresponding effects on CMB and LSS (right), shown together with the imprints by the \\(\\Lambda\\)CDM model. The bump takes place at low redshifts which can be detectable by (future) SNIa observations. The effects on CMB are indistinguishable from the corresponding \\(\\Lambda\\)CDM model beyond cosmic variance, and nor can LSS tell one (bump-like) from the other (\\(\\Lambda\\)CDM). weaker. 
On the other hand, we can expect that in a global analysis in which all of the parameters (including the bias factor) are allowed to vary, the observations can constrain, or even detect, signatures with bump amplitudes higher than in the examples listed above.

Figure 7: Imprints of bump-like dark energy EOS on CMB, LSS (right panels), on dark energy density fractions and on the residual Hubble diagram (left panels). The bumps take place at high redshifts which cannot be detected by (future) SNIa observations. On the other hand, cosmic variance limited CMB (e.g. WMAP3) and LAMOST[80] can possibly detect such features. Note here we have not performed a global analysis and did not take into account the parameter degeneracies.

## IV Discussion and Conclusion We should point out that in all three of our data combinations, a large area of oscillating Quintom, in which the EOS of dark energy gets across \\(-1\\) during its time evolution, is allowed by the current observations, which differs from the previous case in Ref.[50] and from the case in Ref. [53]. Moreover, contrary to Refs.[50; 53], oscillating Quintom displays the distinctive feature that, due to the averaging between the regions of the EOS where \\(w>-1\\) and \\(w<-1\\), the quantity \\(w_{eff}\\) can be very close to \\(-1\\). Oscillating Quintom-like dark energy, in which the unification of two epochs of accelerated expansion can be realized and the coincidence problem can be solved[51], proves to be a good fit to current cosmology. We should stress again that, as in the oscillating primordial spectrum case[89], the imperfect convergence for an oscillating dark energy EOS is unavoidable in light of the current observations. In the WMAP + SDSS allowed parameter space there are areas with significant oscillations, which implies that the CURRENTLY ONGOING SNIa observations may detect such oscillating or semi-oscillating features on the (residual) Hubble diagram. Moreover, while oscillating features in the CMB might be explained both by an oscillating primordial spectrum and by oscillations in the dark energy EOS, oscillating features on the Hubble diagram cannot be due to features in the primordial spectrum, but only to an oscillating EOS. Local bumps of the dark energy EOS may leave distinctive imprints on CMB, LSS and SNIa. The bumps are potentially detectable by geometrical observations like SNIa and GRB. Future observations of 21 cm tomography open a very promising window to detect or exclude such features. LSS measurements like LAMOST and cosmic variance limited CMB may also detect such features. In particular, bump-like dark energy EOS at high redshifts _might_ be responsible for the features of WMAP on the range \\(l\\sim 20-40\\), which is interesting and deserves further investigation. Given that we currently know relatively little about the theoretical aspects of dark energy, observing dynamical dark energy is at present the most important aspect of dark energy study. For observational probes one typically needs some parametrization of the form of the dark energy EOS. While using specific forms of parametrization one often runs the risk of obtaining biased results (see e.g. [104]), investigations towards unbiased probes of DE[105; 106; 107] in light of all the available cosmological observations still need further development. On the other hand, some of the parametrizations are in some sense well motivated (see e.g.
[104]), and in fittings starting with parametrized EOS one typically has larger \\(\ u\\) (number of data minus the number of parameters) and hence gets better constraints on dark energy. With the accumulations of observational data poor parametrizations will get ruled out and in this sense parametrized study of dark energy provides a complementary study of the non-parametric probes. As from any quintessence-like or phantom-like EOS which do not get across \\(-1\\) one can reconstruct the dark energy potential[108], any parametrizations of non-Quintom-like EOS correspond to some specific forms of quintessence/phantom potentials7 and hence such parametrizations are somewhat well motivated. Bump-like EOS which do not get across \\(-1\\) also correspond to some specific DE potentials which can be straightforwardly worked out. Similarly for (semi-)oscillating EOS which do not get across \\(-1\\) one can work out the corresponding DE potentials. In cases with bumps described in Eq. (7) and with oscillations in Eq. (1), one cannot simply reconstruct the potentials of DE due to the distinctive nature of Quintom[54; 70]. However this remains possible for example the models with high derivatives [65; 67] and in the framework with modified gravity[56; 109; 110; 111; 112; 113]. Footnote 7: In cases where \\(w>1\\) they correspond to quintessence with negative potentials. **Acknowledgments:** We acknowledge the use of the Legacy Archive for Microwave Background Data Analysis (LAMBDA). Support for LAMBDA is provided by the NASA Office of Space Science. We have performed our numerical analysis on the Shanghai Supercomputer Center (SSC). We used a modified version of CAMB[81; 82] which is based on CMBFAST[114; 115]. We are grateful to Hiranya Peiris, Yunsong Piao and Lifan Wang for discussions related to this project. We thank Xuelei Chen, Yaoquan Chu and Long-long Feng for discussions on LAMOST. We thank Sarah Bridle, Peihong Gu, Steen Hannestad, Antony Lewis, Mingzhe Li, Yongzhong Xu, Jun'ichi Yokoyama, Max Tegmark and Penjie Zhang for helpful discussions. B. F. would like to thank the hospitalities of IHEP during his visit to Beijing. This work is supported in part by National Natural Science Foundation of China under Grant Nos. 90303004, 10533010 and 19925523 and by Ministry of Science and Technology of China under Grant No. NKBRSF G19990754. B. F. is supported by the JSPS fellowship program. ## References * (1) D. N. Spergel _et al._, arXiv:astro-ph/0603449. * (2) L. Page _et al._, arXiv:astro-ph/0603450; * (3) G. Hinshaw _et al._, arXiv:astro-ph/0603451; * (4) N. Jarosik _et al._, arXiv:astro-ph/0603452. * (5) Available from [http://lambda.gsfc.nasa.gov/product/map/current/](http://lambda.gsfc.nasa.gov/product/map/current/). * (6) M. Tegmark _et al._ [SDSS Collaboration], Astrophys. J. **606**, 702 (2004) [arXiv:astro-ph/0310725]. * (7) M. Tegmark _et al._ [SDSS Collaboration], Phys. Rev. D **69**, 103501 (2004) [arXiv:astro-ph/0310723]. * (8) S. Weinberg, Rev. Mod. Phys. **61**, 1 (1989). * (9) I. Zlatev, L. M. Wang and P. J. Steinhardt, Phys. Rev. Lett. **82**, 896 (1999) [arXiv:astro-ph/9807002]. * (10) R. D. Peccei, J. Sola and C. Wetterich, Phys. Lett. B **195**, 183 (1987); C. Wetterich, Nucl. Phys. B **302**, 668 (1988); B. Ratra and P. J. E. Peebles, Phys. Rev. D **37**, 3406 (1988). * (11) A. G. Riess _et al._ [Supernova Search Team Collaboration], Astron. J. **116**, 1009 (1998) [arXiv:astro-ph/9805201]. * (12) S. Perlmutter _et al._ [Supernova Cosmology Project Collaboration], Astrophys. 
J. **517**, 565 (1999) [arXiv:astro-ph/9812133]. * (13) J. L. Tonry _et al._ [Supernova Search Team Collaboration], Astrophys. J. **594**, 1 (2003) [arXiv:astro-ph/0305008]. * (14) A. G. Riess _et al._ (Supernova Search Team Collaboration), Astrophys. J. **607**, 665 (2004) [arXiv:astro-ph/0402512]. * (15) A. Clocchiatti _et al._ (the High Z SN Search Collaboration), astro-ph/0510155. * (16) P. Astier _et al._, Astron. Astrophys. **447**, 31 (2006) [arXiv:astro-ph/0510447]. * (17) U. Seljak, A. Slosar and P. McDonald, arXiv:astro-ph/0604335. * (18) G. B. Zhao, J. Q. Xia, B. Feng and X. Zhang, arXiv:astro-ph/0603621. * (19) Y. Wang and P. Mukherjee, arXiv:astro-ph/0604051. * (20) C. L. Bennett _et al._, Astrophys. J. Suppl. **148**, 1 (2003) [arXiv:astro-ph/0302207]. * (21) D. N. Spergel _et al._ [WMAP Collaboration], Astrophys. J. Suppl. **148**, 175 (2003) [arXiv:astro-ph/0302209]. * (22) H. V. Peiris _et al._, Astrophys. J. Suppl. **148**, 213 (2003) [arXiv:astro-ph/0302225]. * (23) e.g. C. R. Contaldi, M. Peloso, L. Kofman and A. Linde, JCAP **0307**, 002 (2003); B. Feng and X. Zhang, Phys. Lett. B **570**, 145 (2003); Q. G. Huang and M. Li, JCAP **0311**, 001 (2003) [arXiv:astro-ph/0308458]; M. Yamaguchi and J. Yokoyama, Phys. Rev. D **70** (2004) 023513; Y. S. Piao, B. Feng and X. m. Zhang, Phys. Rev. D **69**, 103520 (2004) [arXiv:hep-th/0310206]; Y. S. Piao, S. Tsujikawa and X. m. Zhang, Class. Quant. Grav. **21**, 4455 (2004) [arXiv:hep-th/0312139]; Y. S. Piao, Phys. Rev. D **71**, 087301 (2005) [arXiv:astro-ph/0502343]; B. A. Bassett, S. Tsujikawa and D. Wands, arXiv:astro-ph/0507632. * (24) B. Feng, M. z. Li, R. J. Zhang and X. m. Zhang, Phys. Rev. D **68**, 103511 (2003) [arXiv:astro-ph/0302479]. * (25) X. Wang, B. Feng, M. Li, X. L. Chen and X. Zhang, Int. J. Mod. Phys. D **14**, 1347 (2005) [arXiv:astro-ph/0209242]. * (26) T. Moroi and T. Takahashi, Phys. Rev. Lett. **92**, 091301 (2004) [arXiv:astro-ph/0308208]. * (27) B. Feng, X. Gong and X. Wang, Mod. Phys. Lett. A **19**, 2377 (2004) [arXiv:astro-ph/0301111]. * (28) M. Viel, M. G. Haehnelt and A. Lewis, arXiv:astro-ph/0604310. * (29) L. A. Kofman, A. D. Linde, A. A. Starobinsky, Phys. Lett. B **157**, 361 (1985). * (30) A. A. Starobinsky, JETP Lett. **55**, 489 (1992) [Pisma Zh. Eksp. Teor. Fiz. **55**, 477 (1992)]. * (31) J. A. Adams, G. G. Ross and S. Sarkar, Nucl. Phys. B **503**, 405 (1997) [arXiv:hep-ph/9704286]. * (32) J. Lesgourgues, D. Polarski and A. A. Starobinsky, Mon. Not. Roy. Astron. Soc. **297**, 769 (1998) [arXiv:astro-ph/9711139]. * (33) D. J. H. Chung, E. W. Kolb, A. Riotto and I. I. Tkachev, Phys. Rev. D **62**, 043508 (2000) [arXiv:hep-ph/9910437]. * (34) L. M. Wang and M. Kamionkowski, Phys. Rev. D **61**, 063504 (2000) [arXiv:astro-ph/9907431]. * (35) J. Lesgourgues, Nucl. Phys. B **582**, 593 (2000) [arXiv:hep-ph/9911447]. * (36) L. M. Griffiths, J. Silk and S. Zaroubi, Mon. Not. Roy. Astron. Soc. **324**, 712 (2001) [arXiv:astro-ph/0010571]. * (37) S. Hannestad, S. H. Hansen and F. L. Villante, Astropart. Phys. **16**, 137 (2001) [arXiv:astro-ph/0012009]. * (38) J. Barriga, E. Gaztanaga, M. G. Santos and S. Sarkar, Mon. Not. Roy. Astron. Soc. **324**, 977 (2001) [arXiv:astro-ph/0011398]. * (39) M. Gramann and G. Hutsi, Mon. Not. Roy. Astron. Soc. **327**, 538 (2001) [arXiv:astro-ph/0102466]. * (40) M. Kamionkowski and A. R. Liddle, Phys. Rev. Lett. **84**, 4525 (2000) [arXiv:astro-ph/9911103]; A. R. Zentner and J. S. Bullock, Phys. Rev. D **66**, 043003 (2002) [arXiv:astro-ph/0205216]. * (41) D. N. 
Spergel and P. J. Steinhardt, Phys. Rev. Lett. **84**, 3760 (2000) [arXiv:astro-ph/9909386]. * (42) B. D. Wandelt, et al. Proceedings of Dark Matter 2000, arXiv:astro-ph/0006344. * (43) M. Kaplinghat, L. Knox and M. S. Turner, Phys. Rev. Lett. **85**, 3335 (2000) [arXiv:astro-ph/0005210]. * (44) W. B. Lin, D. H. Huang, X. Zhang and R. H. Brandenberger, Phys. Rev. Lett. **86**, 954 (2001) [arXiv:astro-ph/0009003]. * (45) P. Bode, J. P. Ostriker and N. Turok, Astrophys. J. **556**, 93 (2001) [arXiv:astro-ph/0010389]. * (46) A. Tasitsiomi, Int. J. Mod. Phys. D **12**, 1157 (2003) [arXiv:astro-ph/0205464]. * (47) A. Lewis and S. Bridle, Phys. Rev. D **66**, 103511 (2002) [arXiv:astro-ph/0205436]. * (48) P. J. E. Peebles and A. Vilenkin, Phys. Rev. D **59**, 063505 (1999) [arXiv:astro-ph/9810509]. * (49) J. Q. Xia, G. B. Zhao, B. Feng and X. Zhang, arXiv:astro-ph/0603393. * (50) S. Dodelson, M. Kaplinghat and E. Stewart, Phys. Rev. Lett. **85**, 5276 (2000). * (51) B. Feng, M. Li, Y. S. Piao and X. Zhang, Phys. Lett. B **634**, 101 (2006) [arXiv:astro-ph/0407432]. * (52) G. Barenboim and J. D. Lykken, Phys. Lett. B **633**, 453 (2006) [arXiv:astro-ph/0504090]. * (53) For an interesting relevant study see G. Barenboim, O. Mena and C. Quigg, Phys. Rev. D **71**, 063533 (2005) [arXiv:astro-ph/0412010]. * (54) B. Feng, X. L. Wang and X. M. Zhang, Phys. Lett. B **607**, 35 (2005) [arXiv:astro-ph/0404224]. * (55) S. Weinberg, Phys. Rev. Lett. **59**, 2607 (1987). * (56) G. R. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B **485**, 208 (2000) [arXiv:hep-th/0005016]. * (57) K. Koyama, Phys. Rev. D **72**, 123511 (2005) [arXiv:hep-th/0503191]. * (58) For an interesting relevant study see J. Yokoyama, Phys. Rev. Lett. **88**, 151302 (2002) [arXiv:hep-th/0110137]. * (59) For a recent review see A. Vilenkin, arXiv:astro-ph/0605242. * (60) For a review see E. J. Copeland, M. Sami and S. Tsujikawa, arXiv:hep-th/0603057. * (61) e.g. H. Wei and R.-G. Cai, arXiv:hep-th/0501160; R.-G. Cai, H.-S. Zhang and A. Wang, arXiv:hep-th/0505186; A. A. Andrianov, F. Cannata and A. Y. Kamenshchik, arXiv:gr-qc/05087; X. Zhang, arXiv:astro-ph/0504586; Q. Guo and R.-G. Cai, arXiv:gr-qc/0504033; B. McInnes, Nucl. Phys. B **718**, 55 (2005); I. Y. Aref'eva, A. S. Koshelev, and S. Yu. Vernov, arXiv:astro-ph/0507067; C. G. Huang and H. Y. Guo, arXiv:astro-ph/0508171; W. Zhao, arXiv:astro-ph/0604460; J. Grande, J. Sola and H. Stefancic, arXiv:gr-qc/0604057arXiv:astro-ph/0602156; H. Stefancic, arXiv:astro-ph/0511316; L. Perivolaropoulos, arXiv:astro-ph/0601014. * (62) T. Chiba, T. Okabe and M. Yamaguchi, Phys. Rev. D **62** (2000) 023511 [arXiv:astro-ph/9912463]; C. Armendariz-Picon, V. F. Mukhanov and P. J. Steinhardt, Phys. Rev. Lett. **85**, 4438 (2000) [arXiv:astro-ph/0004134]. * (63) R. R. Caldwell, Phys. Lett. B **545**, 23 (2002) [arXiv:astro-ph/9908168]. * (64) Z. K. Guo, Y. S. Piao, X. M. Zhang and Y. Z. Zhang, Phys. Lett. B **608**, 177 (2005) [arXiv:astro-ph/0410654]. * (65) M. z. Li, B. Feng and X. m. Zhang, JCAP **0512**, 002 (2005) [arXiv:hep-ph/0503268]. * (66) X. F. Zhang, H. Li, Y. S. Piao and X. M. Zhang, Mod. Phys. Lett. A **21**, 231 (2006) [arXiv:astro-ph/0501652]. * (67) X. F. Zhang and T. Qiu, arXiv:astro-ph/0603824. * (68) J. Q. Xia, B. Feng and X. M. Zhang, Mod. Phys. Lett. A **20**, 2409 (2005) [arXiv:astro-ph/0411501]. * (69) E. V. Linder, Astropart. Phys. **25**, 167 (2006) [arXiv:astro-ph/0511415]. * (70) G. B. Zhao, J. Q. Xia, M. Li, B. Feng and X. Zhang, Phys. Rev. 
D **72**, 123515 (2005) [arXiv:astro-ph/0507482]. * (71) J. Q. Xia, G. B. Zhao, B. Feng, H. Li and X. Zhang, Phys. Rev. D **73**, 063521 (2006) [arXiv:astro-ph/0511625]. * (72) Available from [http://cosmologist.info](http://cosmologist.info). * (73) W. L. Freedman _et al._, Astrophys. J. **553**, 47 (2001) [arXiv:astro-ph/0012376]. * (74) S. Burles, K. M. Nollett and M. S. Turner, Astrophys. J. **552**, L1 (2001) [arXiv:astro-ph/0010171]. * (75) For details see e.g. E. Di Pietro and J. F. Claeskens, Mon. Not. Roy. Astron. Soc. **341**, 1299 (2003), [arXiv:astro-ph/0207332]. * (76) A. Gelman and D. Rubin, Statistical Science **7**, 457 (1992). * (77) J. R. Bond and G. Efstathiou, Mon. Not. Roy. Astron. Soc. **226**, 655 (1987). * (78) Available at [http://snap.lbl.gov](http://snap.lbl.gov). * (79) See e.g. A. Crotts _et al._, arXiv:astro-ph/0507043; T. Abbott _et al._ [Dark Energy Survey Collaboration], arXiv:astro-ph/0510346. * (80) Available at [http://www.lamost.org/](http://www.lamost.org/). * (81) A. Lewis, A. Challinor and A. Lasenby, Astrophys. J. **538**, 473 (2000) [arXiv:astro-ph/9911177]. * (82) Available at [http://camb.info](http://camb.info). * (83) A. G. Kim, E. V. Linder, R. Miquel and N. Mostek, Mon. Not. Roy. Astron. Soc. **347**, 909 (2004) [arXiv:astro-ph/0304509]. * (84) C. Yeche, A. Ealet, A. Refregier, C. Tao, A. Tilquin, J. M. Virey and D. Yvon, arXiv:astro-ph/0507170. * (85) H. Li, B. Feng, J. Q. Xia and X. Zhang, Phys. Rev. D **73**, 103503 (2006) [arXiv:astro-ph/0509272]. * (86) H. A. Feldman, N. Kaiser and J. A. Peacock, Astrophys. J. **426**, 23 (1994) [arXiv:astro-ph/9304022]. * (87) e.g. L. M. Wang, R. R. Caldwell, J. P. Ostriker and P. J. Steinhardt, Astrophys. J. **530**, 17 (2000) [arXiv:astro-ph/9901388]. * (88) O. Elgaroy and S. Hannestad, Phys. Rev. D **68**, 123513 (2003) [arXiv:astro-ph/0307011]. * (89) T. Okamoto and E. A. Lim, Phys. Rev. D **69**, 083519 (2004) [arXiv:astro-ph/0312284]. * (90) R. Easther, W. H. Kinney and H. Peiris, JCAP **0505**, 009 (2005) [arXiv:astro-ph/0412613]. * (91) B. Feng, M. Li, J. Q. Xia, X. Chen and X. Zhang, arXiv:astro-ph/0601095. * (92) M. Malquarti and A. R. Liddle, Phys. Rev. D **66**, 023524 (2002) [arXiv:astro-ph/0203232]. * (93) W. Wang and B. Feng, Chin. J. Astron. Astrophys. **3**, 105 (2003) [arXiv:astro-ph/0508139]. * (94) K. Takahashi, M. Oguri, K. Kotake and H. Ohno, arXiv:astro-ph/0305260. * (95) Z. G. Dai, E. W. Liang and D. Xu, Astrophys. J. **612**, L101 (2004) [arXiv:astro-ph/0407497]. * (96) D. Hooper and S. Dodelson, arXiv:astro-ph/0512232. * (97) G. B. Field, Astrophys. J. **129**, 536 (1959). * (98) A. Loeb and M. Zaldarriaga, Phys. Rev. Lett. **92**, 211301 (2004) [arXiv:astro-ph/0312134]. * (99) X. L. Chen and J. Miralda-Escude, Astrophys. J. **602**, 1 (2004) [arXiv:astro-ph/0303395]. * (100) e.g. U. L. Pen, X. P. Wu and J. Peterson, arXiv:astro-ph/0404083. * (101) See also A. Slosar, U. Seljak and A. Makarov, \"Exact likelihood evaluations and foreground marginalization in low resolution WMAP data,\" Phys. Rev. D **69**, 123003 (2004) [arXiv:astro-ph/0403073]. * (102) A. de Oliveira-Costa and M. Tegmark, Phys. Rev. D **74**, 023005 (2006) [arXiv:astro-ph/0603369]. * (103) H. K. Eriksen _et al._, \"A re-analysis of the three-year WMAP temperature power spectrum and arXiv:astro-ph/0606088. * (104) B. A. Bassett, P. S. Corasaniti and M. Kunz, Astrophys. J. **617**, L1 (2004) [arXiv:astro-ph/0407364]. * (105) D. Huterer and G. Starkman, Phys. Rev. Lett. 
**90**, 031301 (2003) [arXiv:astro-ph/0207517]. * (106) Y. Wang and M. Tegmark, Phys. Rev. D **71**, 103513 (2005) [arXiv:astro-ph/0501351]. * (107) J. Simon, L. Verde and R. Jimenez, Phys. Rev. D **71**, 123001 (2005) [arXiv:astro-ph/0412269]. * (108) A. Lewis, CAMB notes. * (109) M. Morikawa, Astrophys. J. **362**, L 37 (1990); Astrophys. J. **369**, 20 (1991). * (110) V. Sahni and Y. Shtanov, JCAP **0311**, 014 (2003) [arXiv:astro-ph/0202346]. * (111) L. Perivolaropoulos, Phys. Rev. D **67**, 123516 (2003) [arXiv:hep-ph/0301237]. * (112) H. Stefancic, Eur. Phys. J. C **36**, 523 (2004) [arXiv:astro-ph/0312484]. * (113) L. Perivolaropoulos, JCAP **0510**, 001 (2005) [arXiv:astro-ph/0504582]. * (114) U. Seljak and M. Zaldarriaga, Astrophys. J. **469**, 437 (1996) [arXiv:astro-ph/9603033].
We probe the time dependence of the dark energy equation of state (EOS) in light of three-year WMAP (WMAP3) and the combination with other tentative cosmological observations from galaxy clustering (SDSS) and Type Ia Supernova (SNIa). We mainly focus on cases where the EOS is oscillating or with local bumps. By performing a global analysis with the Markov Chain Monte Carlo (MCMC) method, we find the current observations, in particular the WMAP3 + SDSS data combination, allow large oscillations of the EOS which can leave oscillating features on the (residual) Hubble diagram, and such oscillations are potentially detectable by future observations like SNAP, or even by the CURRENTLY ONGOING SNIa observations. Local bumps of dark energy EOS can also leave imprints on CMB, LSS and SNIa. In cases where the bumps take place at low redshifts and the effective EOS is close to \\(-1\\), CMB and LSS observations cannot give stringent constraints on such possibilities. However, geometrical observations like (future) SNIa can possibly detect such features. On the other hand when the local bumps take place at higher redshifts beyond the detectability of SNIa, future precise observations like Gamma-ray bursts, CMB and LSS may possibly detect such features. In particular, we find that bump-like dark energy EOS on high redshifts _might_ be responsible for the localized features of WMAP on ranges \\(l\\sim 20-40\\), which is interesting and deserves addressing further. PACS number(s): 98.80.Es, 98.80.Cq
arxiv-format/0605550v1.md
# Equation of State in Numerical Relativistic Hydrodynamics Dongsu Ryu1, Indranil Chattopadhyay1, and Eunwoo Choi2 Footnote 1: affiliation: Department of Astronomy and Space Science, Chungnam National University, Daejeon 305-764, Korea: [email protected], [email protected] Footnote 2: affiliation: Department of Physics and Astronomy, Georgia State University, P.O. Box 4106, Atlanta, GA 30302-4106, USA: [email protected] ## 1 Introduction Relativistic flows are involved in many high-energy astrophysical phenomena. Examples includes relativistic jets from Galactic sources (see Mirabel & Rodriguez 1999, for reviews), extragalactic jets from AGNs (see Zensus 1997, for reviews), and gamma-ray bursts (see Meszaros 2002, for reviews). In relativistic jets from some Galactic microquasars, intrinsic beam velocities larger than \\(0.9c\\) are typically required to explain the observed superluminal motions. In some powerful extragalactic radio sources, ejections from galactic nuclei produce true beam velocities of more than \\(0.98c\\). In the general fireball model of gamma-ray bursts, the internal energy of gas is converted into the bulk kinetic energy during expansion and thisexpansion leads to relativistic outflows with high bulk Lorentz factors \\(\\gtrsim 100\\). The flow motions in these objects are usually highly nonlinear and intrinsically complex. Understanding such relativistic flows is important for correctly interpreting the observed phenomena, but often studying them is possible only through numerical simulations. Numerical codes for special relativistic hydrodynamics (hereafter RHDs) have been successfully built, based on explicit finite difference upwind schemes that were originally developed for codes of non-relativistic hydrodynamics. These schemes utilize approximate or exact Riemann solvers and the characteristic decomposition of the hyperbolic system of conservation equations. RHD codes based on upwind schemes are able to capture sharp discontinuities robustly in complex flows, and to describe the physical solution reliably. A partial list of such codes includes the followings: Falle & Komissarov (1996) based on the van Leer scheme, Marti & Muller (1996), Aloy _et al._ (1999), and Mignone _et al._ (2005) based on the PPM scheme, Sokolov _et al._ (2001) based on the Godunov scheme, Choi & Ryu (2005) based on the TVD scheme, Dolezal & Wong (1995), Donat _et al._ (1998), DelZanna & Bucciantini (2002), and Rahman & Moore (2005) based on the ENO scheme, and Mignone & Bodo (2005) based on the HLL scheme. Reviews of some numerical approaches and test problems can be found in Marti & Muller (2003) and Wilson & Mathews (2003). Gas in RHDs is characterized by relativistic fluid speed (\\(v\\sim c\\)) and/or relativistic temperature (internal energy much greater than rest energy), and the latter brings us to the issue of the equation of state (hereafter EoS) of the gas. The EoS most commonly used in numerical RHDs, which is designed for the gas with constant ratio of specific heats, however, is essentially valid only for the gas of either subrelativistic or ultrarelativistic temperature. It is because that is not derived from relativistic kinetic theory. On the other hand, the EoS of the single-component perfect gas in relativistic regime can be derived from thermodynamics. But its form involves the modified Bessel functions (see Synge 1957), and is too complicated to be implemented in numerical schemes. In this paper, we study EoS for numerical RHDs. 
We first revisit two EoS's previously used in numerical codes, specifically the one with constant ratio of specific heats, and the other first used by Mathews (1971) and later proposed for numerical RHDs by Mignone _et al._ (2005). We then propose a new EoS which is simple to be implemented in numerical codes with minimum efforts and minimum computational costs, but at the same time approximates very closely the EoS of the single-component perfect gas in relativistic regime. We also discuss the calculation of primitive variables from conservative ones for the three EoS's. Then we present the entire eigenstructure of RHDs for a general EoS, in a way to be used to build numerical codes. In order to see the consequence of different EoS's, shock tube tests performed with a code based on the TVD scheme are presented. The tests demonstrate the differences in flow structure due to different EoS's. Employing a correct EoS should be important to get quantitatively correct results in problems involving a transition from non-relativistic temperature to relativistic temperature or vice versa. This paper is organized as follows. In sections 2 and 3 we discuss three EoS's and the calculation of primitive variables from conservative ones for those three. In sections 4 we present the eigenstructure of RHDs with a general EoS. In sections 5 and 6 we present a code based on the TVD scheme and shock tube tests with the code. Concluding remarks are drawn in section 7. ## 2 Relativistic Hydrodynamics ### Basic Equations The special RHD equations for an ideal fluid can be written in the laboratory frame of reference as a hyperbolic system of conservation equations \\[\\frac{\\partial D}{\\partial t}+\\frac{\\partial}{\\partial x_{j}}\\left(Dv_{j} \\right)=0,\\] \\[\\frac{\\partial M_{i}}{\\partial t}+\\frac{\\partial}{\\partial x_{j}}\\left(M_{i}v _{j}+p\\delta_{ij}\\right)=0,\\] \\[\\frac{\\partial E}{\\partial t}+\\frac{\\partial}{\\partial x_{j}}\\left[\\left(E+p \\right)v_{j}\\right]=0,\\] where \\(D\\), \\(M_{i}\\), and \\(E\\) are the mass density, momentum density, and total energy density, respectively (see, _e.g.,_ Landau & Lifshitz 1959; Wilson & Mathews 2003). The conserved quantities in the laboratory frame are expressed as \\[D=\\Gamma\\rho,\\] \\[M_{i}=\\Gamma^{2}\\rho hv_{i},\\] \\[E=\\Gamma^{2}\\rho h-p,\\] where \\(\\rho\\), \\(v_{i}\\), \\(p\\), and \\(h\\) are the proper mass density, fluid three-velocity, isotropic gas pressure and specific enthalpy, respectively, and the Lorentz factor is given by \\[\\Gamma=\\frac{1}{\\sqrt{1-v^{2}}}\\qquad\\mbox{with}\\qquad v^{2}=v_{x}^{2}+v_{y}^ {2}+v_{z}^{2}.\\] In the above, the Latin indices (_e.g.,_\\(i\\)) represents spatial coordinates and conventional Einstein summation is used. The speed of light is set to unity (\\(c\\equiv 1\\)) throughout this paper. ### Equation of State The above system of equations is closed with an EoS. Without loss of generality it is given as \\[h{\\equiv}h(p,\\rho). \\tag{4}\\] Then the general form of polytropic index, \\(n\\), and the general form of sound speed, \\(c_{s}\\), respectively can be written as \\[n=\\rho\\frac{\\partial h}{\\partial p}-1,\\qquad c_{s}^{2}=-\\frac{\\rho}{nh}\\frac{ \\partial h}{\\partial\\rho}. \\tag{5}\\] In addition we use a variable \\(\\gamma_{h}\\) to present the EoS property conveniently, \\[\\gamma_{h}=\\frac{h-1}{\\Theta}, \\tag{6}\\] where \\(\\Theta=p/\\rho\\) is a temperature-like variable. 
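Since all of the EoS's considered below specify \\(h\\) as a function of \\(\\Theta\\) alone, the general relations in equation (5) reduce to \\(n=dh/d\\Theta-1\\) and \\(c_{s}^{2}=\\Theta\\,(dh/d\\Theta)/(nh)\\). The sketch below implements this reduction for an arbitrary \\(h(\\Theta)\\); the finite-difference derivative and the sample enthalpy, the non-relativistic limit \\(h=1+5\\Theta/2\\) of a monatomic gas, are illustrative choices of ours rather than part of any particular code.

```python
import numpy as np

def eos_props(h_of_theta, theta, dtheta=1.0e-6):
    """Polytropic index n and sound speed c_s from eq. (5) for h = h(Theta),
    Theta = p/rho, using n = dh/dTheta - 1 and c_s^2 = Theta (dh/dTheta)/(n h).
    The derivative is taken by a central finite difference."""
    h = h_of_theta(theta)
    dh = (h_of_theta(theta + dtheta) - h_of_theta(theta - dtheta)) / (2.0 * dtheta)
    n = dh - 1.0
    cs2 = theta * dh / (n * h)
    return n, np.sqrt(cs2)

# Check against the non-relativistic limit of a monatomic gas, h = 1 + 5 Theta/2,
# for which n = 3/2 and c_s^2 -> (5/3) Theta when Theta << 1.
h_nr = lambda th: 1.0 + 2.5 * th
for theta in (1.0e-4, 1.0e-3, 1.0e-2):
    n, cs = eos_props(h_nr, theta)
    print(theta, n, cs**2, 5.0 * theta / 3.0)
```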
The most commonly used EoS, which is called the ideal EoS (hereafter ID), is given as \\[p=(\\gamma-1)(e-\\rho)\\qquad\\mbox{or}\\qquad h=1+\\frac{\\gamma\\Theta}{\\gamma-1} \\tag{7}\\] with a constant \\(\\gamma\\). Here \\(\\gamma=c_{p}/c_{v}\\) is the ratio of specific heats, and \\(e\\) is the sum of the internal and rest-mass energy densities in the local frame and is related to the specific enthalpy as \\[h=\\frac{e+p}{\\rho}. \\tag{8}\\] For ID, \\(\\gamma_{h}=\\gamma/(\\gamma-1)\\) does not depend on \\(\\Theta\\). ID may be correctly applied to the gas of either subrelativistic temperature with \\(\\gamma=5/3\\) or ultrarelativistic temperature with \\(\\gamma=4/3\\). But ID is rented from non-relativistic thermodynamics, and hence it is not consistent with relativistic kinetic theory. For example, we have \\[n=\\frac{1}{\\gamma-1},\\qquad c_{s}^{2}=\\frac{\\gamma\\Theta(\\gamma-1)}{\\gamma \\Theta+\\gamma-1}. \\tag{9}\\] In the high temperature limit, _i.e.,_\\(\\Theta{\\rightarrow}\\infty\\), and for \\(\\gamma>2\\), \\(c_{s}>1\\)_i.e.,_ admits superluminal sound speed. More importantly, using relativistic kinetic theory Taub (1948) showed that the choice of EoS is not arbitrary and has to satisfy the inequality, \\[(h-\\Theta)(h-4\\Theta)\\geq 1. \\tag{10}\\] This rules out ID for \\(\\gamma>4/3\\), if applied for \\(0<\\Theta<\\infty\\). The correct EoS for the single-component perfect gas in relativistic regime (hereafter RP) can be derived (see Synge 1957), and is given as \\[h=\\frac{K_{3}(1/\\Theta)}{K_{2}(1/\\Theta)}, \\tag{11}\\] where \\(K_{2}\\) and \\(K_{3}\\) are the modified Bessel functions of the second kind of order two and three, respectively. In the non-relativistic temperature limit (\\(\\Theta\\to 0\\)), \\(\\gamma_{h}\\to 5/2\\), and in the ultrarelativistic temperature limit (\\(\\Theta\\rightarrow\\infty\\)), \\(\\gamma_{h}\\to 4\\). However, using the above EoS comes with a price of extra computational costs (Falle & Komissarov 1996), since the thermodynamics of the fluid is expressed in terms of the modified Bessel functions. There have been efforts to find approximate EoS's which are simpler than RP but more accurate than ID. For example, Sokolov _et al._ (2001) proposed \\[\\Theta=\\frac{1}{4}\\left(h-\\frac{1}{h}\\right)\\qquad\\mbox{or}\\qquad h=2\\Theta+ \\sqrt{4\\Theta^{2}+1}. \\tag{12}\\] But this EoS does not satisfy either Taub's inequality nor is consistent with the value of \\(\\gamma_{h}\\) in the non-relativistic temperature limit. In a recent paper, Mignone _et al._ (2005) proposed for numerical RHDs an EoS that fits RP well. The EoS, which was first used by Mathews (1971), is given as \\[\\frac{p}{\\rho}=\\frac{1}{3}\\left(\\frac{e}{\\rho}-\\frac{\\rho}{e}\\right)\\qquad \\mbox{or}\\qquad h=\\frac{5}{2}\\Theta+\\frac{3}{2}\\sqrt{\\Theta^{2}+\\frac{4}{9}}, \\tag{13}\\] and is abbreviated as TM following Mignone _et al._ (2005). With TM the expressions of \\(n\\) and \\(c_{s}\\) become \\[n=\\frac{3}{2}+\\frac{3}{2}\\frac{\\Theta}{\\sqrt{\\Theta^{2}+4/9}},\\qquad c_{s}^{2 }=\\frac{5\\Theta\\sqrt{\\Theta^{2}+4/9}+3\\Theta^{2}}{12\\Theta\\sqrt{\\Theta^{2}+4/9 }+12\\Theta^{2}+2}. \\tag{14}\\] TM corresponds to the lower bound of Taub's inequality, _i.e.,_\\((h-\\Theta)(h-4\\Theta)=1\\). It produces the right asymptotic values for \\(\\gamma_{h}\\). In this paper we propose a new EoS, which is a simpler algebraic function of \\(\\Theta\\) and is also a better fit of RP compared to TM. 
We abbreviate our proposed EoS as RC and give it by \\[\\frac{p}{e-\\rho}=\\frac{3p+2\\rho}{9p+3\\rho}\\qquad\\mbox{or}\\qquad h=2\\frac{6 \\Theta^{2}+4\\Theta+1}{3\\Theta+2}. \\tag{15}\\] With RC the expressions of \\(n\\) and \\(c_{s}\\) become \\[n=3\\frac{9\\Theta^{2}+12\\Theta+2}{(3\\Theta+2)^{2}},\\qquad c_{s}^{2}=\\frac{ \\Theta(3\\Theta+2)(18\\Theta^{2}+24\\Theta+5)}{3(6\\Theta^{2}+4\\Theta+1)(9\\Theta^{ 2}+12\\Theta+2)}. \\tag{16}\\]RC satisfies Taub's inequality, \\((h-\\Theta)(h-4\\Theta)\\geq 1\\), for all \\(\\Theta\\). It also produces the right asymptotic values for \\(\\gamma_{h}\\). For both TM and RC, we have correctly \\(c_{s}^{2}\\to 5\\Theta/3\\) in the non-relativistic temperature limit and \\(c_{s}^{2}\\to 1/3\\) in the ultrarelativistic temperature limit, respectively. In Figure 1, \\(\\gamma_{h}\\), \\(n\\), and \\(c_{s}\\) are plotted as a function of \\(\\Theta\\) to compare TM and RC to RP as well as ID. One can see that RC is a better fit of RP than TM with \\[\\frac{|h_{\\rm TM}-h_{\\rm RP}|}{h_{\\rm RP}}\\lesssim 2\\%,\\qquad\\frac{|h_{\\rm RC }-h_{\\rm RP}|}{h_{\\rm RP}}\\lesssim 0.8\\%.\\] It is to be remembered that both \\(\\gamma_{h}\\) and \\(n\\) are independent of \\(\\Theta\\), if ID is used. ## 3 Calculation of Primitive Variables The RHD equations evolve the conserved quantities, \\(D\\), \\(M_{i}\\) and \\(E\\), but we need to know the values of the primitive variables, \\(\\rho\\), \\(v_{i}\\), \\(p\\), to solve the equations numerically. The primitive variables can be calculated by inverting the equations (2a-2c). The equations (2a-2c) explicitly include \\(h\\), and here we discuss the inversion for the EoS's discussed in section 2.2, that is, ID, TM, and RC. ### Id Schneider _et al._ (1993) showed that the equations (2a-2c) with the EoS in (7) reduce to a single quartic equation for \\(v\\) \\[v^{4}+b_{1}v^{3}+b_{2}v^{2}+b_{3}v+b_{4}=0,\\] where \\[b_{1}=-\\frac{2\\gamma(\\gamma-1)ME}{(\\gamma-1)^{2}(M^{2}+D^{2})},\\qquad b_{2}= \\frac{\\gamma^{2}E^{2}+2(\\gamma-1)M^{2}-(\\gamma-1)^{2}D^{2}}{(\\gamma-1)^{2}(M^ {2}+D^{2})},\\] \\[b_{3}=-\\frac{2\\gamma ME}{(\\gamma-1)^{2}(M^{2}+D^{2})},\\qquad b_{4}=\\frac{M^{2} }{(\\gamma-1)^{2}(M^{2}+D^{2})},\\] and \\(M=\\sqrt{M_{x}^{2}+M_{y}^{2}+M_{z}^{2}}\\). The quartic equation (18) can be solved numerically or analytically. In Choi & Ryu (2005) the analytical solution was used for the very first time, though the exact nature of the solution was not presented. The general form of analytical roots for quartic equations can be found in Abramowitz & Stegun (1972) or on webs such as \"[http://mathworld.wolfram.com/QuarticEquation.html](http://mathworld.wolfram.com/QuarticEquation.html)\". One may even use softwares such as Mathematica or Maxima to find the roots. We found that out of the four roots of the quartic equation (18), two are complex and two are real. 
The two real roots are \\[z_{1}=\\frac{-B+\\sqrt{B^{2}-4C}}{2},\\qquad z_{2}=\\frac{-B-\\sqrt{B^{2}-4C}}{2},\\] where \\[B=\\frac{1}{2}(b_{1}+\\sqrt{b_{1}^{2}-4b_{2}+4x_{1}}),\\qquad C=\\frac{1}{2}(x_{1} -\\sqrt{x_{1}^{2}-4b_{4}}),\\] \\[x_{1}=(R+T^{\\frac{1}{2}})^{\\frac{1}{3}}+(R-T^{\\frac{1}{2}})^{\\frac{1}{3}}-\\frac {a_{1}}{3},\\] \\[R=\\frac{9a_{1}a_{2}-27a_{3}-2a_{1}^{3}}{54},\\qquad S=\\frac{3a_{2}-a_{1}^{2}}{9 },\\qquad T=R^{2}+S^{3},\\] \\[a_{1}=-b_{2},\\qquad a_{2}=b_{1}b_{3}-4b_{4},\\qquad a_{3}=4b_{2}b_{4}-b_{3}^{2} -b_{1}^{2}b_{4}.\\] Among the two real roots, the first one is the solution that satisfies the upper and lower limits imposed by Schneider _et al._ (1993), thus \\(v=z_{1}\\). Once \\(v\\) is found, the quantities \\(\\rho\\), \\(v_{i}\\), \\(p\\), are calculated by \\[\\rho=\\frac{D}{\\Gamma},\\] \\[v_{x}=\\frac{M_{x}}{M}v,\\qquad v_{y}=\\frac{M_{y}}{M}v,\\qquad v_{z}=\\frac{M_{z}} {M}v,\\] \\[p=(\\gamma-1)[(E-M_{x}v_{x}-M_{y}v_{y}-M_{z}v_{z})-\\rho].\\] ### Tm Combining the equations (2a-2c) with the EoS in (13), we get a cubic equation for \\(W=\\Gamma^{2}-1\\) \\[W^{3}+c_{1}W^{2}+c_{2}W+c_{3}=0,\\] where \\[c_{1}=\\frac{(E^{2}+M^{2})[4(E^{2}+M^{2})-(M^{2}+D^{2})]-14M^{2}E^{2}}{2(E^{2}- M^{2})^{2}},\\] \\[c_{2}=\\frac{[4(E^{2}+M^{2})-(M^{2}+D^{2})]^{2}-57M^{2}E^{2}}{16(E^{2}-M^{2})^{2 }},\\] \\[c_{3}=-\\frac{9M^{2}E^{2}}{16(E^{2}-M^{2})^{2}}.\\]Cubic equations admit analytical solutions simpler than quartic equations (see also Abramowitz & Stegun 1972). We found that out of the three roots of the cubic equation (23), two are unphysical giving \\(\\Gamma<1\\), and only one gives the physical solution, which is \\[W=2\\sqrt{-J}\\cos\\left(\\frac{\\iota}{3}\\right)-\\frac{c_{1}}{3},\\] where \\[J=\\frac{3c_{2}-c_{1}^{2}}{9},\\qquad\\cos\\iota=\\frac{H}{\\sqrt{-J^{3}}},\\qquad H= \\frac{9c_{1}c_{2}-27c_{3}-2c_{1}^{3}}{54}.\\] Then the fluid speed is calculated by \\[v=\\frac{W}{\\sqrt{W^{2}+1}},\\] and the quantities \\(\\rho\\), \\(v_{i}\\), \\(p\\), are calculated by \\[\\rho=\\frac{D}{\\Gamma}.\\] \\[v_{x}=\\frac{M_{x}}{M}v,\\qquad v_{y}=\\frac{M_{y}}{M}v,\\qquad v_{z}=\\frac{M_{z}} {M}v,\\] \\[p=\\frac{(E-M_{x}v_{x}-M_{y}v_{y}-M_{z}v_{z})^{2}-\\rho^{2}}{3(E-M_{x}v_{x}-M_{y} v_{y}-M_{z}v_{z})}.\\] ### Rc Combining the equations (2a-2c) with the EoS in (15), we get \\[M\\sqrt{\\Gamma^{2}-1}\\left[3E\\Gamma(8\\Gamma^{2}-1)+2D(1-4\\Gamma^{2})\\right]\\] \\[=3\\Gamma^{2}\\left[4(M^{2}+E^{2})\\Gamma^{2}-(M^{2}+4E^{2})\\right]-2D(4E\\Gamma-D )(\\Gamma^{2}-1).\\] Further simplification reduces it into an equation of 8\\({}^{\\rm th}\\) power in \\(\\Gamma\\). Although the equation (29) has to be solved numerically, it behaves very well. We first analyzed the nature of the roots with a root-finding routine in the IMSL library. As noted by Schneider _et al._ (1993), the physically meaningful solution should be between the upper limit, \\(\\Gamma_{u}\\), \\[\\Gamma_{u}=\\frac{1}{\\sqrt{1-v_{u}^{2}}}\\qquad{\\rm with}\\qquad v_{u}=\\frac{M}{E},\\] and the lower limit, \\(\\Gamma_{l}\\), that is derived inserting \\(D=0\\) into equation (29): \\[16(M^{2}-E^{2})^{2}\\Gamma_{l}^{6}-8(M^{2}-E^{2})(M^{2}-4E^{2})\\Gamma_{l}^{4}+( M^{4}-9M^{2}E^{2}+16E^{4})\\Gamma_{l}^{2}+M^{2}E^{2}=0\\](a cubic equation of \\(\\Gamma_{l}^{2}\\)). Out of the eight roots of the equation (29), four are complex and four are real. Out of the four real roots, two are negative and two are positive. 
And out of the two real and positive roots, one is always larger than \\(\\Gamma_{u}\\), and the other is between \\(\\Gamma_{l}\\) and \\(\\Gamma_{u}\\) and so is the physical solution. Inside RHD codes the physical solution of equation (29) can be easily calculated by the Newton-Raphson method. With an initial guess \\(\\Gamma=\\Gamma_{l}\\) or any value smaller than it including 1, iteration can be proceeded upwards. Since the equation is extremely well-behaved, the iteration converges within a few steps. Once \\(\\Gamma\\) is known, the fluid speed is calculated by \\[v={\\sqrt{\\Gamma^{2}-1}\\over\\Gamma},\\] and the quantities \\(\\rho\\), \\(v_{i}\\), \\(p\\), are calculated by \\[\\rho={D\\over\\Gamma}.\\] \\[v_{x}={M_{x}\\over M}v,\\qquad v_{y}={M_{y}\\over M}v,\\qquad v_{z}={M_{z}\\over M}v\\] \\[p={(E-M_{i}v_{i})-2\\rho+[(E-M_{i}v_{i})^{2}+4\\rho(E-M_{i}v_{i})-4\\rho^{2}]^{ 1\\over 2}\\over 6},\\] where \\[M_{i}v_{i}=M_{x}v_{x}+M_{y}v_{y}+M_{z}v_{z}.\\] ## 4 Eigenvalues and Eigenvectors In building a code based on the Roe-type schemes such as the TVD and ENO schemes that solves a hyperbolic system of conservation equations, the eigenstructure (eigenvalues and eigenvectors of the Jacobian matrix) is required. The Eigenstructure for RHDs was previously described, for instance, in Donat _et al._ (1998). However, with the parameter vector different from that of Donat _et al._ (1998), the eigenvectors become different. Here we present our complete set of eigenvalues and eigenvectors without assuming any particular form of EoS. Equations (1a)-(1c) can be written as \\[{\\partial\\vec{q}\\over\\partial t}+{\\partial\\vec{F}_{j}\\over\\partial x_{j}}=0\\]with the state and flux vectors \\[\\vec{q}=\\left[\\matrix{D\\cr M_{i}\\cr E\\cr}\\right],\\qquad\\vec{F}_{j}=\\left[\\matrix{ Dv_{j}\\cr M_{i}v_{j}+p\\delta_{ij}\\cr(E+p)\\,v_{j}\\cr}\\right],\\] or as \\[{\\partial\\vec{q}\\over\\partial t}+A_{j}{\\partial\\vec{q}\\over\\partial x_{j}}=0, \\qquad A_{j}={\\partial\\vec{F}_{j}\\over\\partial\\vec{q}}.\\] Here \\(A_{j}\\) is the \\(5\\times 5\\) Jacobian matrix composed with the state and flux vectors. The construction of the matrix \\(A_{j}\\) can be simplified by introducing a parameter vector, \\(\\vec{u}\\), as \\[A_{j}={\\partial\\vec{F}_{j}\\over\\partial\\vec{u}}{\\partial\\vec{u}\\over\\partial \\vec{q}}.\\] We choose the vector made of primitive variables as the parameter vector \\[\\vec{u}=\\left[\\matrix{\\rho\\cr v_{i}\\cr p\\cr}\\right].\\] ### One Velocity Component The eigenstructure is simplified if only a single component of velocity is chosen, _i.e.,_\\(v=v_{x}\\). In principle it can be reduced from that with three components of velocity in the next subsection. Nevertheless we present it, for the case that the simpler eigenstructure with one velocity component can be used. The explicit form of the Jacobian matrix, \\(A\\), is presented in Appendix A. 
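Referring back to the recovery of primitive variables for RC in section 3.3, a minimal sketch of the inversion is given below. Instead of the Newton-Raphson iteration described in the text, the sketch brackets the physical root of equation (29) between 1 and \\(\\Gamma_{u}\\) and uses a standard bracketing root finder; the bracketing, the function names and the round-trip test values are illustrative choices of ours.

```python
import numpy as np
from scipy.optimize import brentq

def h_rc(theta):
    """RC specific enthalpy, eq. (15): h = 2 (6 Th^2 + 4 Th + 1) / (3 Th + 2)."""
    return 2.0 * (6.0 * theta**2 + 4.0 * theta + 1.0) / (3.0 * theta + 2.0)

def conserved(rho, vx, p):
    """D, M_x, E from eq. (2) for a purely x-directed flow (c = 1)."""
    G = 1.0 / np.sqrt(1.0 - vx**2)
    h = h_rc(p / rho)
    return G * rho, G**2 * rho * h * vx, G**2 * rho * h - p

def gamma_rc(D, M, E):
    """Physical root of eq. (29), here bracketed by 1 and Gamma_u = E/sqrt(E^2 - M^2)."""
    def f(G):
        lhs = M * np.sqrt(G**2 - 1.0) * (3.0 * E * G * (8.0 * G**2 - 1.0)
                                         + 2.0 * D * (1.0 - 4.0 * G**2))
        rhs = (3.0 * G**2 * (4.0 * (M**2 + E**2) * G**2 - (M**2 + 4.0 * E**2))
               - 2.0 * D * (4.0 * E * G - D) * (G**2 - 1.0))
        return lhs - rhs
    return brentq(f, 1.0 + 1.0e-12, E / np.sqrt(E**2 - M**2))

# Round trip: primitives -> conserved -> primitives, using eqs. (32)-(35).
rho0, vx0, p0 = 1.0, 0.6, 0.5
D, M, E = conserved(rho0, vx0, p0)
G = gamma_rc(D, M, E)
v = np.sqrt(G**2 - 1.0) / G
rho = D / G
q = E - M * v                      # E - M_i v_i with a single velocity component
p = (q - 2.0 * rho + np.sqrt(q**2 + 4.0 * rho * q - 4.0 * rho**2)) / 6.0
print(v, rho, p)                   # should reproduce (0.6, 1.0, 0.5)
```

In an actual code one would follow the text and iterate with Newton-Raphson starting from a guess at or below \\(\\Gamma_{l}\\); the bracketing version above is only meant to make the structure of equation (29) explicit.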
The eigenvalues of \\(A\\) are, \\[a_{-}={v-c_{s}\\over 1-c_{s}v},\\qquad a_{o}=v,\\qquad a_{+}={v+c_{s}\\over 1+c_{s} v}.\\] The right eigenvectors are \\[\\vec{R}_{-}=\\left[\\matrix{1\\cr\\Gamma h(v-c_{s})\\cr\\Gamma h(1-c_{s}v)\\cr} \\right],\\qquad\\vec{R}_{0}=\\left[\\matrix{1\\cr\\Gamma hv(1-nc_{s}^{2})\\cr\\Gamma h (1-nc_{s}^{2})\\cr}\\right],\\qquad\\vec{R}_{+}=\\left[\\matrix{1\\cr\\Gamma h(v+c_{ s})\\cr\\Gamma h(1+c_{s}v)\\cr}\\right].\\] and the left eigenvectors are \\[\\vec{L}_{-}=-{1\\over 2hnc_{s}^{2}}\\left[h(1-nc_{s}^{2}),\\ \\ \\Gamma(v+nc_{s}), -\\Gamma(1+nc_{s}v)\\right],\\]\\[\\vec{L}_{0}=\\frac{1}{hnc_{s}^{2}}\\left[h,\\ \\ \\Gamma v,\\ \\ -\\Gamma\\right],\\] \\[\\vec{L}_{+}=-\\frac{1}{2hnc_{s}^{2}}\\left[h(1-nc_{s}^{2}),\\ \\ \\Gamma(v-nc_{s}),\\ \\ - \\Gamma(1-nc_{s}v)\\right].\\] Here \\(n\\) and \\(c_{s}\\) are given in equation (5). ### Three Velocity Components The \\(x\\)-component of the Jacobian matrix, \\(A_{x}\\), when all the three components of velocity are considered, is presented in Appendix B. The eigenvalues of \\(A_{x}\\) are \\[a_{1}=\\frac{\\left(1-c_{s}^{2}\\right)v_{x}-c_{s}/\\Gamma\\cdot\\sqrt{Q}}{1-c_{s}^ {2}v^{2}},\\] \\[a_{2}=v_{x},\\] \\[a_{3}=v_{x},\\] \\[a_{4}=v_{x},\\] \\[a_{5}=\\frac{\\left(1-c_{s}^{2}\\right)v_{x}+c_{s}/\\Gamma\\cdot\\sqrt{Q}}{1-c_{s}^ {2}v^{2}},\\] where \\(Q=1-v_{x}^{2}-c_{s}^{2}(v_{y}^{2}+v_{z}^{2})\\). The eigenvalues represent the five characteristic speeds associated with two sound wave modes (\\(a_{1}\\) and \\(a_{5}\\)) and three entropy modes (\\(a_{2}\\), \\(a_{3}\\), and \\(a_{4}\\)). A remarkable feature is that the eigenvalues do not explicitly depend on \\(h\\) and \\(n\\), but only on \\(v_{i}\\) and \\(c_{s}\\). Hence the eigenvalues are the same regardless of the choice of EoS once the sound speed is defined properly. 
The corresponding right eigenvectors (\\(A_{x}\\vec{R}=a\\vec{R}\\)), however, depends explicitly on \\(h\\) and \\(n\\), and the complete set is given by \\[\\vec{R}_{1}=\\left[\\frac{1-a_{1}v_{x}}{\\Gamma},\\ \\ a_{1}h(1-v_{x}^{2}),\\ \\ h(1-a_{1}v_{x})v_{y},\\ \\ h(1-a_{1}v_{x})v_{z},\\ \\ h(1-v_{x}^{2})\\right]^{\\rm T},\\] \\[\\vec{R}_{2}=\\tilde{X}\\left[X_{1},\\ \\ X_{2},\\ \\ X_{3},\\ \\ X_{4},\\ \\ X_{5}\\right]^{\\rm T},\\] \\[\\vec{R}_{3}=\\frac{1}{1-v_{x}^{2}}\\left[\\frac{v_{y}}{\\Gamma h},\\ \\ 2v_{x}v_{y},\\ \\ 1-v_{x}^{2}+v_{y}^{2},\\ \\ v_{y}v_{z},\\ \\ 2v_{y}\\right]^{\\rm T},\\] \\[\\vec{R}_{4}=\\frac{1}{1-v_{x}^{2}}\\left[\\frac{v_{z}}{\\Gamma h},\\ \\ 2v_{x}v_{z},\\ \\ v_{y}v_{z},\\ \\ 1-v_{x}^{2}+v_{z}^{2},\\ \\ 2v_{z}\\right]^{\\rm T},\\] \\[\\vec{R}_{5}=\\left[\\frac{1-a_{5}v_{x}}{\\Gamma},\\ \\ a_{5}h(1-v_{x}^{2}),\\ \\ h(1-a_{5}v_{x})v_{y},\\ \\ h(1-a_{5}v_{x})v_{z},\\ \\ h(1-v_{x}^{2})\\right]^{\\rm T},\\]where \\[X_{1}=\\frac{nc_{s}^{2}(v_{y}^{2}+v_{z}^{2})+(1-v_{x}^{2})}{\\Gamma h},\\] \\[X_{2}=\\left[2nc_{s}^{2}(v_{y}^{2}+v_{z}^{2})+(1-nc_{s}^{2})(1-v_{x}^{2})\\right]v_{ x},\\] \\[X_{3}=\\left[nc_{s}^{2}(v_{y}^{2}+v_{z}^{2})+(1-v_{x}^{2})\\right]v_{y},\\] \\[X_{4}=\\left[nc_{s}^{2}(v_{y}^{2}+v_{z}^{2})+(1-v_{x}^{2})\\right]v_{z},\\] \\[X_{5}=2nc_{s}^{2}(v_{y}^{2}+v_{z}^{2})+(1-nc_{s}^{2})(1-v_{x}^{2}).\\] \\[\\tilde{X}=\\frac{\\Gamma^{2}}{nc_{s}^{2}(1-v_{x}^{2})},\\] The complete set of the left eigenvectors (\\(\\vec{L}A_{x}=a\\vec{L}\\)), which are orthonormal to the right eigenvectors, is \\[\\vec{L}_{1}=\\frac{1}{\\tilde{Y}_{\\,\\,1}}\\left[Y_{11},\\ \\ Y_{12},\\ \\ Y_{13},\\ \\ Y_{13},\\ \\ Y_{15}\\right],\\] \\[\\vec{L}_{2}=\\left[\\frac{h}{\\Gamma},\\ \\ v_{x},\\ \\ v_{y},\\ \\ v_{z},\\ \\ -1\\right],\\] \\[\\vec{L}_{3}=\\left[-\\Gamma hv_{y},\\ \\ 0,\\ \\ 1,\\ \\ 0,\\ \\ 0\\right],\\] \\[\\vec{L}_{4}=\\left[-\\Gamma hv_{z},\\ \\ 0,\\ \\ 0,\\ \\ 1,\\ \\ 0\\right],\\] \\[\\vec{L}_{5}=\\frac{1}{\\tilde{Y}_{\\,\\,5}}\\left[Y_{51},\\ \\ Y_{52},\\ \\ Y_{53},\\ \\ Y_{53},\\ \\ Y_{55}\\right],\\] where \\[Y_{i1}=-\\frac{h}{\\Gamma}(1-a_{i}v_{x})(1-nc_{s}^{2}),\\] \\[Y_{i2}=na_{i}(1-c_{s}^{2}v^{2})+a_{i}(1+nc_{s}^{2})v_{x}^{2}-(1+n)v_{x},\\] \\[Y_{i3}=-(1+nc_{s}^{2})(1-a_{i}v_{x})v_{y},\\] \\[Y_{i4}=-(1+nc_{s}^{2})(1-a_{i}v_{x})v_{z},\\] \\[Y_{i5}=(1+nc_{s}^{2}v^{2})+(1-c_{s}^{2})nv_{x}^{2}-a_{i}(1+n)v_{x},\\] \\[\\tilde{Y}_{i}=hn\\left[(a_{i}-v_{x})^{2}Q+\\frac{c_{s}^{2}}{\\Gamma^{2}}\\right],\\] and index \\(i=1,\\ 5\\). We note that with three degenerate modes that have same eigenvalues, \\(a_{2}=a_{3}=a_{4}\\), we have a freedom to write down the right and left eigenvectors in a variety of different forms. We chose to present the ones that produce the best results with the TVD code described next. One-Dimensional Functioning Code To be used for demonstration of the differences in flow structure due to different EoS's, a one-dimensional functioning code based on the Total Variation Diminishing (TVD) scheme was built. The code utilizes the eigenvalues and eigenvectors given in the previous section, and can employ arbitrary EoS's including those in section 2.2. ### The TVD Scheme The TVD scheme, originally developed by Harten (1983), is an Eulerian, finite-difference scheme with second-order accuracy in space and time. The second-order accuracy in time is achieved by modifying numerical flux using the quantities in five grid cells (see below and Harten 1983, for details). 
The scheme is basically identical to that previously used in Ryu _et al._ (1993) and Choi & Ryu (2005). But for completeness, the procedure is concisely shown here. The state vector \\(\\vec{q}_{i}^{n}\\) at the cell center \\(i\\) at the time step \\(n\\) is updated by calculating the modified flux vector \\(\\vec{\\hat{f}}_{x,i\\pm 1/2}\\) along the \\(x\\)-direction at the cell interface \\(i\\pm 1/2\\), which is built from the characteristic increments \\(\\alpha_{k,i+1/2}\\) and the entropy-fix function \\(Q_{k}\\) of Harten (1983): \\[\\alpha_{k,i+1/2}=\\vec{L}_{k,i+1/2}^{n}\\cdot\\left(\\vec{q}_{i+1}^{n}-\\vec{q}_{i}^{n} \\right), \\tag{54}\\] \\[Q_{k}(x)=\\left\\{\\begin{array}{ll}x^{2}/(4\\varepsilon_{k})+\\varepsilon_{k}&\\mbox{for}\\ \\ |x|<2\\varepsilon_{k},\\\\ |x|&\\mbox{for}\\ \\ |x|\\geq 2\\varepsilon_{k}.\\end{array}\\right. \\tag{55}\\] Here, \\(k=1\\) to 5 labels the five characteristic modes. The internal parameters \\(\\varepsilon_{k}\\) implicitly control the numerical viscosity, and are defined for \\(0\\leq\\varepsilon_{k}<0.5\\). The flux limiters in equations (52a)-(52c) are the min-mod, monotonized central difference, and superbee limiters, respectively; they are a partial list of the limiters consistent with the TVD scheme, and one of them has to be employed.

### Quantities at Cell Interfaces To calculate the fluxes we need to define the local quantities at the cell interfaces, \\(i+1/2\\). The TVD scheme originally used Roe's linearization technique (Roe 1981) for this purpose. Although it is possible to implement this linearization technique in the relativistic domain in a computationally feasible way (see Eulderink & Mellema 1995), there is unlikely to be a significant advantage from the computational point of view. Instead, we simply use the algebraic averages of quantities at two adjacent cell centers to define the fluid three-velocity and specific enthalpy at the cell interfaces: \\[v_{x,i+1/2}=\\frac{v_{x,i}+v_{x,i+1}}{2},\\qquad v_{y,i+1/2}=\\frac{v_{y,i}+v_{y,i+1}}{2},\\qquad v_{z,i+1/2}=\\frac{v_{z,i}+v_{z,i+1}}{2}, \\tag{56}\\] \\[h_{i+1/2}=\\frac{h_{i}+h_{i+1}}{2}. \\tag{57}\\] Defining \\(n\\) and \\(c_{s}\\) for the calculation of eigenvalues and eigenvectors at the cell interfaces depends on the EoS. For ID, \\(n\\) is constant and \\[c_{s,i+1/2}=\\left(\\frac{h_{i+1/2}-1}{nh_{i+1/2}}\\right)^{1/2}. \\tag{58}\\] For TM, we first compute from equation (13) \\[\\Theta_{i+1/2}=\\frac{5h_{i+1/2}-\\sqrt{9h_{i+1/2}^{2}+16}}{8}, \\tag{59}\\] then define \\(n_{i+1/2}\\) and \\(c_{s,i+1/2}\\) according to equation (14). For RC, we first compute from equation (15) \\[\\Theta_{i+1/2}=\\frac{3h_{i+1/2}-8+\\sqrt{9h_{i+1/2}^{2}+48h_{i+1/2}-32}}{24} \\tag{60}\\] then define \\(n_{i+1/2}\\) and \\(c_{s,i+1/2}\\) according to equation (16).

## 6 Numerical Tests The differences induced by different EoS's are illustrated through a series of shock tube tests performed with the code described in the previous section. We use the tests used in previous works (_e.g.,_ Marti & Muller 2003; Mignone _et al._ 2005), instead of inventing our own. Two sets are considered, one being purely one-dimensional with only the velocity component parallel to structure propagation, and the other with transverse velocity component. For the first set with parallel velocity component only, two tests are presented: P1: \\(\\rho_{L}=10\\), \\(\\rho_{R}=1\\), \\(p_{L}=13.3\\), \\(p_{R}=10^{-6}\\), and \\(v_{p,L}=v_{p,R}=0\\) initially, and \\(t_{\\rm end}=0.45\\), P2: \\(\\rho_{L}=\\rho_{R}=1\\), \\(p_{L}=10^{3}\\), \\(p_{R}=10^{-2}\\), and \\(v_{p,L}=v_{p,R}=0\\) initially, and \\(t_{\\rm end}=0.4\\).
The box covers the region of \\(0\\leq x\\leq 1\\). Here the subscripts \\(L\\) and \\(R\\) denote the quantities in the left and right states of the initial discontinuity at \\(x=0.5\\), and \\(t_{\\rm end}\\) is the time when the solutions are presented. These two tests have been extensively used for tests of RHD codes with the ID EoS (see Marti & Muller 2003), and the analytic solutions were described in Marti & Muller (1994). Figures 2 and 3 show the numerical solutions with RC and TM, and the analytic solutions with ID and \\(\\gamma=5/3\\) and \\(4/3\\). The numerical solutions with RC and TM were obtained using the version of the TVD code having one velocity component (see section 4.1), and the analytic solutions with ID comes from the routine described in Marti & Muller (1994). The numerical solutions with ID are almost indistinguishable from the analytic solutions, once they are calculated. The ID solutions with \\(\\gamma=4/3\\) and \\(5/3\\) show noticeable differences. The density shell between the contact discontinuity (hereafter CD) and the shock becomes thinner and taller with smaller \\(\\gamma\\), because the post shock pressure is lower and so is the shock propagation speed. The rarefaction wave is less elongated with \\(\\gamma=4/3\\), because the sound speed is lower. Those solutions with ID are also clearly different from the solutions obtained with RC and TM. The ID solution with \\(\\gamma=4/3\\) better approximates the solutions with RC and TM in the left region of the CD, because the flow has relativistic temperature of \\(\\Theta\\gtrsim 1\\) there. The difference is, however, obvious in the shell between the CD and the shock, because \\(\\Theta\\sim 1\\) there. On the other hand, the solutions obtained with RC and TM look very much alike. It reflects the similarity in the distributions of specific enthalpy in equations (13) and (15). Yet there is a noticeable difference, especially in the shell between the CD and the shock, and the difference in density reaches up to \\(\\sim 5\\%\\). For the second set with transverse velocity component, four tests, where different transverse velocities were added to the test P2, are presented: T1: initially \\(v_{t,R}=0.99\\) to the right state, \\(t_{\\rm end}=0.45\\),T2: initially \\(v_{t,L}=0.9\\) to the left state, \\(t_{\\rm end}=0.55\\), T3: initially \\(v_{t,L}=v_{t,R}=0.99\\) to the left and right states, \\(t_{\\rm end}=0.18\\), T4: initially \\(v_{t,L}=0.9\\) and \\(v_{t,R}=0.99\\) to the left and right states, \\(t_{\\rm end}=0.75\\). The notations are the same ones used in P1 and P2. These are subsets of the tests originally suggested by Pons _et al._ (2000) with the ID EoS and later used by Mignone _et al._ (2005). Figures 4, 5, 6 and 7 show the numerical solutions with RC and TM and the analytic solutions with ID and \\(\\gamma=5/3\\) and \\(4/3\\). The numerical solutions with RC and TM were obtained using the version of the TVD code having three velocity components (see section 4.2), and the analytic solutions with ID comes from the routine described in Pons _et al._ (2000). Again the ID solutions with \\(\\gamma=4/3\\) and \\(5/3\\) show noticeable differences. Especially with transverse velocity initially on the left side of the initial discontinuity (Figure 5, 6 and 7), the parallel velocity reaches lower values, while the transverse velocity achieves higher values, with higher \\(\\gamma=5/3\\) in the region to the left of the CD. 
As a result, the density shell between the CD and the shock has propagated less. As in the P tests, the solutions with ID are clearly different from the solutions obtained with RC and TM, most noticeably in the shell between the CD and the shock. The solutions with RC and TM look very much alike with differences in the density in the shell between the CD and the shock of about \\(\\sim 5\\%\\). We note that this paper is intended to focus on the EoS in numerical RHDs, not intended to present the performance of the code. Hence, one-dimensional tests of high resolution (with \\(2^{16}\\) grid cells for the P tests and \\(2^{17}\\) grid cells the T tests) are presented to manifest the difference induced by different EoS's. The performance of the code such as capturing of shocks and CDs will be discussed elsewhere. ## 7 Summary and Discussion The conservation equations for both Newtonian hydrodynamics and RHDs are strictly hyperbolic, rendering the apt use of upwind schemes for numerical codes. The actual implementation to RHDs is, however, complicated, partly due to EoS. In this paper we study three EoS's for numerical RHDs, two being previously used and the other being newly proposed. The new EoS is simple and yet approximates the enthalpy of single-component perfect gas in relativistic regime with accuracy better than \\(0.8\\%\\). Then we discuss the calculation of primitive variables from conservative ones for the EoS's considered. We also present the eigenvalues and eigenvectors of RHDs for a general EoS, in a way that they are ready to be used to build numerical codes based on the Roe-type schemes such as the TVD and ENOschemes. Finally we present numerical tests to show the differences in flow structure due to different EoS's The most commonly used, ideal EoS, can be used for the gas of entirely non-relativistic temperature (\\(\\Theta\\ll 1\\)) with \\(\\gamma=5/3\\) or for the gas of entirely ultrarelativistic temperature (\\(\\Theta\\gg 1\\)) with \\(\\gamma=4/3\\). However, if the transition from non-relativistic to relativistic or vice versa with \\(\\Theta\\sim 0.1-1\\) is involved, the ideal EoS produces incorrect results and its use should be avoided. The EoS proposed by Mignone _et al._ (2005), TM, produces reasonably correct results with error of a few percent at most. The most preferable advantage of using TM is that the calculation of primitive variables admits analytic solutions, thereby making its implementation easy. The newly suggested EoS, RC, which approximates the EoS of the relativistic perfect gas, RP, most accurately, produces thermodynamically the most accurate results. At the same time it is simple enough to be implemented to numerical codes with minimum efforts and minimum computational costs. With RC the primitive variables should be calculated numerically by an iteration method such as the Newton-Raphson method. However, the equation for the calculation of primitive variables behaves extremely well, so the iteration converges in a few step without any trouble. In Galactic and extragalactic jets and gamma-ray bursts, as the flows travel relativistic fluid speeds (\\(v\\sim 1\\) but \\(\\Theta\\ll 1\\)), they would hit the surrounding media. Then shocks are produced and the gas can be heated up to \\(\\Theta\\gtrsim 1\\). These kind of transitions, continuous or discontinuous, between relativistic bulk speeds and relativistic temperatures are intrinsic in astrophysical relativistic flows, and so a correct EoS is required to simulate the flows correctly. 
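As a practical aside on the primitive-variable recovery mentioned above, the structure of the Newton-Raphson step is worth spelling out. The sketch below is generic and deliberately hedged: the residual used here is a placeholder, not the actual recovery equation for RC (which is derived in section 3 and not reproduced here); it only illustrates the calling pattern and the kind of rapid convergence referred to in the text.

```python
def newton(residual, d_residual, x0, tol=1e-12, max_iter=20):
    """Generic Newton-Raphson driver of the kind used for primitive-variable recovery."""
    x = x0
    for it in range(1, max_iter + 1):
        dx = -residual(x) / d_residual(x)
        x += dx
        if abs(dx) <= tol * max(abs(x), 1.0):
            return x, it
    raise RuntimeError("Newton-Raphson iteration did not converge")

# Placeholder residual -- NOT the recovery equation of section 3; it only stands in
# for it to show how few iterations a well-behaved equation needs.
f  = lambda W: W**3 - W - 6.0
df = lambda W: 3.0 * W**2 - 1.0
root, iterations = newton(f, df, x0=2.5)
print(root, iterations)   # converges to the root W = 2 in a handful of iterations
```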
The correctness as well as the simplicity make RC suitable for astrophysical applications like these. The work of DR and IC was supported by the KOSEF grant R01-2004-000-10005-0. The work of EC was supported by RPE funds to PEGA at GSU. ## Appendix A Jacobian Matrix with One Velocity Component \\[A=\\frac{1}{N}\\left(\\begin{array}{ccc}A_{11}&A_{12}&A_{13}\\\\ A_{21}&A_{22}&A_{23}\\\\ 0&A_{32}&0\\end{array}\\right)\\] \\[A_{11}=v^{2}hn(1-c_{s}^{2})+\\frac{vh}{\\Gamma^{2}}\\]\\[A_{12}=-\\frac{1}{\\Gamma^{3}}+\\frac{1+n}{\\Gamma}\\] \\[A_{13}=-\\frac{v(1+n)}{\\Gamma}\\] \\[A_{21}=-\\frac{h^{2}}{\\Gamma^{3}}(1-nc_{s}^{2})\\] \\[A_{22}=-\\frac{vh}{\\Gamma^{2}}(1-nc_{s}^{2})+2vhn(1-c_{s}^{2})\\] \\[A_{23}=-v^{2}hn(1-c_{s}^{2})+\\frac{h}{\\Gamma^{2}}\\] \\[A_{32}=hn(1-c_{s}^{2}v^{2})\\] \\[N=hn(1-c_{s}^{2}v^{2})\\] ## Appendix B Jacobian Matrix with Three Velocity Components \\[A_{x}=\\frac{1}{N}\\left(\\begin{array}{ccccc}A_{11}&A_{12}&A_{13}&A_{14}&A_{15 }\\\\ A_{21}&A_{22}&A_{23}&A_{24}&A_{25}\\\\ A_{31}&A_{32}&A_{33}&A_{34}&A_{35}\\\\ A_{41}&A_{42}&A_{43}&A_{44}&A_{45}\\\\ 0&A_{52}&0&0&0\\end{array}\\right)\\] \\[A_{11}=v_{x}hn(1-c_{s}^{2})+\\frac{hv_{x}}{\\Gamma^{2}}\\] \\[A_{12}=\\frac{1}{\\Gamma}[n+v_{x}^{2}-nc_{s}^{2}(v_{y}^{2}+v_{z}^{2})]\\] \\[A_{13}=\\frac{1}{\\Gamma}v_{x}v_{y}(1+nc_{s}^{2})\\] \\[A_{14}=\\frac{1}{\\Gamma}v_{x}v_{z}(1+nc_{s}^{2})\\] \\[A_{15}=-\\frac{1}{\\Gamma}v_{x}(1+n)\\] \\[A_{21}=-\\frac{1}{\\Gamma}(1-v_{x}^{2})h^{2}(1-nc_{s}^{2})\\] \\[A_{22}=v_{x}h[2n(1-c_{s}^{2}v^{2})-(1-v_{x}^{2})(1+nc_{s}^{2})]\\] \\[A_{23}=-v_{y}h(1-v_{x}^{2})(1+nc_{s}^{2})\\] \\[A_{24}=-v_{z}h(1-v_{x}^{2})(1+nc_{s}^{2})\\]\\[A_{25}=-v_{x}^{2}h(1+n)+h(1+nc_{s}^{2}v^{2})\\] \\[A_{31}=\\frac{1}{\\Gamma}v_{x}v_{y}h^{2}(1-nc_{s}^{2})\\] \\[A_{32}=v_{y}h[n(1-c_{s}^{2}v^{2})+v_{x}^{2}(1+nc_{s}^{2})]\\] \\[A_{33}=v_{x}h[n(1-c_{s}^{2}v^{2})+v_{y}^{2}(1+nc_{s}^{2})]\\] \\[A_{34}=v_{x}v_{y}v_{z}h(1+nc_{s}^{2})\\] \\[A_{35}=-v_{x}v_{y}h(1+n)\\] \\[A_{41}=\\frac{1}{\\Gamma}v_{x}v_{z}h^{2}(1-nc_{s}^{2})\\] \\[A_{42}=v_{z}h[n(1-c_{s}^{2}v^{2})+v_{x}^{2}(1+nc_{s}^{2})]\\] \\[A_{43}=v_{x}v_{y}v_{z}h(1+nc_{s}^{2})\\] \\[A_{44}=v_{x}h[n(1-c_{s}^{2}v^{2})+v_{z}^{2}(1+nc_{s}^{2})]\\] \\[A_{45}=-v_{x}v_{z}h(1+n)\\] \\[A_{52}=hn(1-c_{s}^{2}v^{2})\\] \\[N=hn(1-c_{s}^{2}v^{2})\\] ## References * (1) * (2) Abramowitz, M. A. & Stegun, I. A. 1972, Handbook of Mathematical Functions (Dover: Dover Publishing Company) * (3) * (4) Aloy, M. A., Ibanez, J. M., Marti, J. M. & Muller, E. 1999, ApJS, 122, 151 * (5) * (6) Choi, E. & Ryu, D., 2005, New Astronomy, 11, 116 * (7) * (8) DelZanna, L. & Bucciantini, N. 2002, A&A, 390, 1177 * (9) * (10) Donat, R., Font, J. A., Ibanez, J. M. & Marquina, A. 1998, J. Comput. Phys., 146, 58 * (11) * (12) Dolezal, A. & Wong, S. S. M. 1995, J. Comput. Phys., 120, 266 * (13) * (14) Eulderink, F. & Mellema, G. 1995, A&A, 110, 587 * (15) * (16) Falle, S. A. E. G & Komissarov, S. S., 1996, MNRAS, 278, 586 * (17) * (18) Harten, A. 1983, J. Comput. Phys., 49, 357 * (19) * (20)* () Landau, L. D. & Lifshitz, E. M. 1959, Fluid Mechanics (New York: Pergamon Press) * () Marti, J. M. & Muller, E. 1994, J. Fluid Mech., 258, 317 * () Marti, J. M. & Muller, E. 1996, J. Comput. Phys., 123, 1 * () Marti, J. M. & Muller, E. 2003, Living Rev. Relativity, 6, 7 * () Mathews, W. G. 1971, ApJ, 165, 147 * () Meszaros, P. 2002, ARA&A, 40, 137 * () Mignone, A., Plewa, T. & Bodo, G. 2005, ApJS, 160, 199 * () Mignone, A. & Bodo, G. 2005, MNRAS, 364, 126 * () Mirabel, I. F., & Rodriguez, L. F. 
1999, ARA&A, 37, 409
* () Pons, J. A., Marti, J. M. & Muller, E. 2000, J. Fluid Mech., 422, 125
* () Roe, P. L. 1981, J. Comput. Phys., 43, 357
* () Ryu, D., Ostriker, J. P., Kang, H. & Cen, R. 1993, ApJ, 414, 1
* () Rahman, T. & Moore, R. 2005, preprint (astro-ph/0512246)
* () Schneider, V., Katscher, U., Rischke, D. H., Waldhauser, B., Maruhn, J. A. & Munz, C.-D. 1993, J. Comput. Phys., 105, 92
* () Sokolov, I., Zhang, H. M. & Sakai, J. I. 2001, J. Comput. Phys., 172, 209
* () Synge, J. L. 1957, The Relativistic Gas (Amsterdam: North-Holland Publishing Company)
* () Taub, A. H. 1948, Phys. Rev., 74, 328
* () Wilson, J. R. & Mathews, G. J. 2003, Relativistic Numerical Hydrodynamics (Cambridge: Cambridge Univ. Press)
* () Zensus, J. A. 1997, ARA&A, 35, 607

Figure 1: Comparison between different EoS's. \(\Gamma_{h}\), \(n\), and \(c_{s}\) vs \(\Theta\) for RC (red, long-dashed), TM (blue, short-dashed), ID (green and cyan, dotted), and RP (black, solid).
Figure 2: Relativistic shock tube with parallel component of velocity only (P1) with RC (red), TM (blue), and ID (green and cyan).
Figure 3: Relativistic shock tube with parallel component of velocity only (P2) with RC (red), TM (blue), and ID (green and cyan).
Figure 4: Relativistic shock tube with transverse component of velocity (T1) with RC (red), TM (blue), and ID (green and cyan).
Figure 5: Relativistic shock tube with transverse component of velocity (T2) with RC (red), TM (blue), and ID (green and cyan).
Figure 6: Relativistic shock tube with transverse component of velocity (T3) with RC (red), TM (blue), and ID (green and cyan).
Figure 7: Relativistic shock tube with transverse component of velocity (T4) with RC (red), TM (blue), and ID (green and cyan).
Relativistic gas temperatures raise the issue of the equation of state (EoS) in relativistic hydrodynamics. We study the EoS for numerical relativistic hydrodynamics and propose a new EoS that is simple and yet closely approximates the EoS of the single-component perfect gas in the relativistic regime. We also discuss the calculation of primitive variables from conservative ones for the EoS's considered in the paper, and present the eigenstructure of relativistic hydrodynamics for a general EoS in a form that is ready to be used to build numerical codes. Tests with a code based on the Total Variation Diminishing (TVD) scheme are presented to highlight the differences induced by different EoS's. hydrodynamics -- methods: numerical -- relativity
Summarize the following text.
arxiv-format/0606025v2.md
# The new form of the equation of state for dark energy fluid and accelerating universe Shin'ichi Nojiri [email protected] Department of Physics, Nagoya University, Nagoya 464-8602. Japan Sergei D. Odintsov [email protected] Institucio Catalana de Recerca i Estudis Avancats (ICREA) and Institut de Ciencies de l'Espai (IEEC-CSIC), Campus UAB, Facultat de Ciencies, Torre C5-Par-2a pl, E-08193 Bellaterra (Barcelona), Spain ## I Introduction The number of attempts is aimed to the resolution of dark energy problem (for recent review, see [1; 2; 3]) which is considered as the most fundamental one in modern cosmology. Among the different descriptions of late-time universe the easiest one is phenomenological approach where it is assumed that universe is filled with mysterious cosmic fluid of some sort. One can mention imperfect fluids [4], general equation of state (EoS) fluid where pressure is some (power law) function of energy-density [5], fluids with inhomogeneous equation of state [6], where EoS with time-dependent bulk viscosity is the particular case [7], coupled fluids [8; 9], etc. The EoS fluid description may be even equivalent to modified gravity approach as is shown in [10]. As it has been recently discussed in [11] it is not easy to construct the dark energy model which describes the universe acceleration and on the same time keep untouched the radiation/matter dominated epochs with subsequent transition from deceleration to acceleration. In order to minimize the dark energy effect at intermediate epoch one may speculate about sudden appearence of dark energy around the deceleration-acceleration transition point. In other words, one may suppose that EoS of DE fluid is of the form \\(p=\\theta(t-t_{d})w\\rho\\) where \\(t_{d}\\) is transition time and \\(w\\) is DE EoS parameter. Before transition point, DE plays a role of usual dust which changes EoS by some unknown scenario. In the similar way, one can generalize other cosmic fluids with more complicated EoS. This introduces the idea of structure/form changing EoS in different epochs. The simplest example of such cosmic fluid is oscillating dark energy [12; 13]. Finally, the reason why cosmic fluid still escapes of direct observations could be that it has completely unexpected properties, for instance, in EoS picture. In the present letter the new form of dark energy EoS is considered. As the first step, we introduce the relaxation equation for pressure (the analog of energy conservation law for energy-density). It is then shown that such constrained EoS is equivalent to usual but inhomogeneous EoS which is known to be the effective description for brane-worlds or modified gravity [6]. The generalized inhomogeneous EoS which contains time derivatives of pressure is introduced. The number of examples for such EoS cosmic fluids is presented and the corresponding FRW cosmologies are described. It is shown that cosmic speed-up in the examples under consideration corresponds to the asymptotically de Sitter or the oscillating universe where accelerating/decelerating epochs repeat with possibility to cross the phantom barrier or to make transition from deceleration to acceleration. In all cases, dark energy EoS parameter is close to \\(-1\\), being within the observational bounds to it. It is demonstrated that the inclusion of matter may be consistent with constrained EoS. 
Constrained equation of state Let us discuss the possible modification of the equation of state in such a way that it would change its structure/form during the universe evolution. Consider the balance equation for the energy (energy conservation law) \\[\\dot{\\rho}+3H(\\rho+P)=0\\,. \\tag{1}\\] It can be represented as a relaxation equation \\[\\dot{\\Psi}=-\\frac{1}{\\tau}(\\Psi-\\Psi_{0})\\,, \\tag{2}\\] where \\(\\Psi\\) coincides with \\(\\rho\\), relaxation time \\(\\tau\\) is \\(\\tau=\\frac{1}{3H}\\) and stationary (or equilibrium) value of \\(\\rho\\) is \\(\\rho_{0}\\equiv\\Psi_{0}=-P\\). To formulate consistently this equation we need, as usual, the equation of state (EoS). The standard barotropic EoS is \\(P=P(\\rho)\\), providing the equation for \\(\\rho\\) only: \\[\\frac{1}{3H}\\dot{\\rho}=-(\\rho+P(\\rho))\\,. \\tag{3}\\] Let us conjecture now that cosmological fluid is described by different EoS at different epochs. In other words, to describe the transition from one epoch in cosmological evolution to another we try to introduce the transition from one EoS to another, or in simplest form to modify EoS to permit the presence of pressure derivatives. The simplest way is to introduce the relaxation equation for pressure \\[\\tau\\dot{P}+P=f(\\rho,a(t))\\,. \\tag{4}\\] When \\(\\tau=0\\) and \\(f(\\rho,a(t))=P(\\rho)\\) we recover the standard EoS. Such an equation may be considered as some (dynamical) constraint to usual EoS. Of course, the physical sense of such equation (unlike to energy conservation law) is not clear at the moment although some explanations are given in the next section. In daily life, however, there could occur similar phenomena where the time change of the presure depends on the density. For example, consider the water. There is a pressure in the steam, which is the gas of water. When the density increases, the molecules of water make drops of water, like fog. The pressure of the drops could be neglected. At high density, the total pressure could decrease. The equation (4) seems to express such a process. Then if dark energy consists of particles or some objects with internal structure, there may occur the phase transition like that between steam and water drop. At the point of phase transition, since the system becomes unstable, the pressure may be governed by a equation like (4). We prefer to measure the relaxation time \\(\\tau\\) in terms of Hubble function \\(H\\), i.e., consider \\(\\tau H=\\xi=const\\). In this case it is convenient to use a new variable \\(x\\equiv\\frac{a(t)}{a(t_{0})}\\). In terms of \\(x\\) the expression \\(\\tau\\dot{P}\\) simplifies as \\[\\tau\\dot{P}=\\xi x\\frac{dP}{dx} \\tag{5}\\] and one obtains the pair of relaxation type equations for \\(\\rho\\) and \\(P\\) \\[\\frac{1}{3}x\\frac{d\\rho}{dx}+\\rho=-P\\,, \\tag{6}\\] \\[\\xi x\\frac{dP}{dx}+P=f(\\rho,x)\\,. \\tag{7}\\] Extracting \\(P\\) from the first equation and inserting it to the second one we obtain the second order, master equation for the energy density \\(\\rho\\) \\[x^{2}\\frac{d^{2}\\rho}{dx^{2}}+x\\frac{d\\rho}{dx}\\left(4+\\frac{1}{\\xi}\\right)+ \\frac{3}{\\xi}[\\rho+f(\\rho,x)]=0 \\tag{8}\\] This is new, dynamical equation to energy-density which is compatible with energy conservation law. 
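Since the master equation (8) is a second-order ODE in \(x\) once \(f(\rho,x)\) is specified, it can also be integrated numerically. The minimal sketch below anticipates the explicit linear choice of \(f\) introduced in the next paragraph; all parameter values are arbitrary illustrations rather than fits, and, for positive \(m\) and \(\xi\), the numerical solution relaxes toward \(\rho_{c}\) as discussed below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters for the constrained-EoS example
xi, gamma0, alpha, m, rho_c = 0.5, 1.0, 1.0, 2.0, 1.0

def f(rho, x):
    """f(rho, x) = -rho + gamma(x) (rho - rho_c) with gamma(x) = gamma0 + alpha x^m."""
    return -rho + (gamma0 + alpha * x**m) * (rho - rho_c)

def rhs(x, y):
    """Master equation (8) rewritten as a first-order system, y = (rho, d rho / dx)."""
    rho, drho = y
    d2rho = -drho * (4.0 + 1.0 / xi) / x - 3.0 / (xi * x**2) * (rho + f(rho, x))
    return [drho, d2rho]

sol = solve_ivp(rhs, t_span=(0.1, 50.0), y0=[5.0, 0.0], dense_output=True,
                rtol=1e-8, atol=1e-10)
x = np.linspace(0.1, 50.0, 6)
print(sol.sol(x)[0])   # rho(x) relaxes toward rho_c at large x
```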
As the explicit example, let the function \\(f\\) be of the form (of course, more complicated choices may be considered) \\[f(\\rho,x)=-\\rho+\\gamma(x)(\\rho-\\rho_{c})\\,, \\tag{9}\\] where \\(\\rho_{c}=\\) const is some critical value of the energy density, and \\(\\gamma(x)=\\gamma_{0}+\\alpha x^{m}\\). When \\(\\rho_{c}=0\\) and \\(\\alpha=0\\) we recover the standard linear EoS \\[P=(\\gamma_{0}-1)\\rho\\,. \\tag{10}\\] The equation for \\(\\rho\\) can be reduced to the Bessel equation and the solution is of the form \\[\\rho=\\rho_{c}+x^{\\sigma}\\left[C_{1}J_{\ u}(\\hat{a}x^{\\lambda})+C_{2}J_{-\ u}( \\hat{a}x^{\\lambda})\\right]\\,. \\tag{11}\\] Here \\[\\sigma=-2-\\frac{m}{2}-\\frac{1}{\\xi}\\,\\quad\\lambda=\\frac{m}{2}\\,\\quad\\hat{a}= \\alpha^{m/2}. \\tag{12}\\] Using (6), we find \\[P = -\\rho_{c} \\tag{13}\\] \\[-\\left(\\frac{\\sigma}{3}+1\\right)x^{\\sigma}\\left[C_{1}J_{\ u}( \\hat{a}x^{\\lambda})+C_{2}J_{-\ u}(\\hat{a}x^{\\lambda})\\right]\\] \\[-\\frac{\\lambda\\hat{a}}{6}x^{\\sigma+\\lambda}\\left[C_{1}\\left(J_{ \ u-1}(\\hat{a}x^{\\lambda})-J_{\ u+1}(\\hat{a}x^{\\lambda})\\right)\\right.\\] \\[\\left.+C_{2}\\left(J_{-\ u-1}(\\hat{a}x^{\\lambda})-J_{-\ u+1}(\\hat{ a}x^{\\lambda})\\right)\\right]\\.\\]When \\(x\\) is large, \\(\\rho\\) behaves as an oscillating function \\[\\rho \\sim \\rho_{c}+\\sqrt{\\frac{2}{\\pi\\hat{a}}}x^{\\sigma-\\lambda/2}\\left\\{C_{1} \\cos\\left(\\hat{a}x^{\\lambda}-\\frac{2\ u+1}{4}\\pi\\right)\\right. \\tag{14}\\] \\[\\left.+C_{2}\\cos\\left(\\hat{a}x^{\\lambda}-\\frac{-2\ u+1}{4}\\pi \\right)\\right\\}\\.\\] Since \\(\\sigma-\\lambda/2=-2-(3/4)m-1/\\xi\\), if we naturally assume that \\(m\\) and \\(\\xi\\) should be positive, the second term damps with oscillation. Then \\(\\rho\\) goes to a constant \\(\\rho\\rightarrow\\rho_{c}\\). On the other hand, for large \\(x\\), \\(P\\) behaves as \\[P \\sim -\\rho_{c} \\tag{15}\\] \\[-\\frac{\\lambda\\hat{a}}{3}\\sqrt{\\frac{2}{\\pi\\hat{a}}}x^{\\sigma+ \\lambda/2}\\left\\{C_{1}\\cos\\left(\\hat{a}x^{\\lambda}-\\frac{2\ u-1}{4}\\pi\\right)\\right.\\] \\[\\left.+C_{2}\\cos\\left(\\hat{a}x^{\\lambda}+\\frac{-2\ u+1}{4}\\pi \\right)\\right\\}\\.\\] Since \\(\\sigma+\\lambda/2=-2-m/4-1/\\xi\\), when \\(m\\) and \\(\\xi\\) are positive, the second term damps with oscillation again and \\(P\\) goes to a constant \\(P\\rightarrow-\\rho_{c}\\). Then the effective EoS parameter \\(w\\equiv P/\\rho\\) goes to \\(-1\\), which corresponds to a cosmological constant. The Bessel function is quasi-oscillating and we obtain an infinite number of epochs, in which \\(\\rho\\), \\(P\\), \\(H\\) and \\(a\\) are also quasi-oscillating. In other words we have an infinite number of points in which the deceleration replaces the acceleration and vice-versa. The presence of \\(\\rho_{c}\\) can guarantee that \\(\\rho\\) is positive, thus, \\(H^{2}\\) is also positive. Nevertheless, \\(P\\) can change its sign, and this phenomenon can mimic the dark energy effect. When \\(\\rho_{c}=0\\), \\(\\alpha=0\\) the equation becomes of the Euler type, and the solution is also very simple. ## III The relation with standard equation of state. We now consider the relation with the standard EoS. Let us start with the scale factor dependent EoS: \\[P=g\\left(\\rho,a\\right). 
\\tag{16}\\] Then we have \\[\\mathrm{e}^{-t/\\tau}\\frac{d}{dt}\\left(\\mathrm{e}^{t/\\tau}P\\right) \\tag{17}\\] \\[=\\frac{1}{\\tau}P+\\dot{P}\\] \\[=\\frac{1}{\\tau}g\\left(\\rho,a\\right)+\\frac{\\partial g\\left(\\rho,a \\right)}{\\partial\\rho}\\dot{\\rho}+\\frac{\\partial g\\left(\\rho,a\\right)}{ \\partial a}aH\\.\\] By using the conservation law (1), one can rewrite (17) in a form similar to (4): \\[\\tau\\dot{P}+P \\tag{18}\\] \\[=g\\left(\\rho,a\\right)+\\tau H\\left(-3\\frac{\\partial g\\left(\\rho,a \\right)}{\\partial\\rho}\\left(\\rho+g\\left(\\rho,a\\right)\\right)\\right.\\] \\[\\left.+\\frac{\\partial g\\left(\\rho,a\\right)}{\\partial a}a\\right)\\.\\] When other contributions to the energy density can be neglected, the first FRW equation looks as \\[\\frac{3}{\\kappa^{2}}H^{2}=\\rho. \\tag{19}\\] Then Eq.(18) can be rewritten as \\[\\tau\\dot{P}+P \\tag{20}\\] \\[=g\\left(\\rho,a\\right)+\\tau\\kappa\\sqrt{\\frac{\\rho}{3}}\\left(-3 \\frac{\\partial g\\left(\\rho,a\\right)}{\\partial\\rho}\\left(\\rho+g\\left(\\rho,a \\right)\\right)\\right.\\] \\[\\left.+\\frac{\\partial g\\left(\\rho,a\\right)}{a}\\right)\\.\\] By comparing (20) with (4), we may identify \\[f(\\rho,a) \\tag{21}\\] \\[=g\\left(\\rho,a\\right)+\\tau\\kappa\\sqrt{\\frac{\\rho}{3}}\\left(-3 \\frac{\\partial g\\left(\\rho,a\\right)}{\\partial\\rho}\\left(\\rho+g\\left(\\rho,a \\right)\\right)\\right.\\] \\[\\left.+\\frac{\\partial g\\left(\\rho,a\\right)}{a}\\right)\\.\\] This shows the relation between standard (generally speaking, inhomogeneous EoS[6]) and relaxation equation for pressure. ## IV Generalized inhomogeneous equation of state As it was indicated above, there is a possibility that the EoS contains \\(\\dot{P}\\) or even higher time derivatives of pressure. More generally, the EoS could depend on \\(H\\) or \\(\\dot{H}\\) (inhomogeneous EoS [6]) like \\[U(\\rho,P,\\dot{P},H,\\dot{H})=0. \\tag{22}\\] Note that many effective dark energy models like brane-worlds, modified gravity and string compactifications have such a form ( for very recent example compatible with observational data, see [14] and references therein). As particular example, one may consider \\[U(\\rho,P,\\dot{P},H,\\dot{H})=\\dot{P}+\\left(\\frac{\\dot{H}}{H}-3H\\right)(\\rho+P)+ \\frac{W(\\rho)}{3H}. \\tag{23}\\]Here \\(W(\\rho)\\) is a proper function of the energy density \\(\\rho\\). Using the energy conservation law (1), one gets \\[\\ddot{\\rho}=W(\\rho). \\tag{24}\\] If \\(\\rho\\) is regarded as a coordinate, Eq.(24) has a form of Newtonian equation of motion of the classical particle with the \"force\" \\(W\\). For example, if \\(W\\) is a constant \\(W=w_{0}\\), we find \\(\\rho\\) behaves as a coordinate of the massive particle in the uniform gravity: \\[\\rho=\\frac{w_{0}}{2}\\left(t-t_{0}\\right)^{2}+c_{0}. \\tag{25}\\] Here \\(t_{0}\\) and \\(c_{0}\\) are constants of the integration. As an another example, we may consider the case of the harmonic oscillator: \\[W(\\rho)=-\\omega^{2}\\left(\\rho-\\rho_{0}\\right). \\tag{26}\\] Then an oscillating energy density follows [12; 13]: \\[\\rho=\\rho_{0}+A\\sin\\left(\\omega t+\\alpha\\right). \\tag{27}\\] If other contributions to the energy density may be neglected, by using the first FRW equation (19), we find find the behavior of the Hubble rate, for (25), \\[H=\\frac{\\kappa}{\\sqrt{3}}\\sqrt{\\frac{w_{0}}{2}\\left(t-t_{0}\\right)^{2}+c_{0}}. 
\\tag{28}\\] As \\(\\dot{H}<0\\) when \\(t<t_{0}\\), and \\(\\dot{H}>0\\) when \\(t>t_{0}\\), there is a transition from non-phantom era to phantom one at \\(t=t_{0}\\). For (27), we have oscillating \\(H\\): \\[H=\\frac{\\kappa}{\\sqrt{3}}\\sqrt{\\rho_{0}+A\\sin\\left(\\omega t+\\alpha\\right)}. \\tag{29}\\] When we neglect the other contributions to the energy density and pressure, we also have \\[-\\frac{2}{\\kappa^{2}}\\dot{H}=\\rho+p. \\tag{30}\\] Combining (30) with (19), one may define the effective EoS parameter \\(w_{\\rm eff}\\) by \\[w_{\\rm eff}\\equiv-1-\\frac{2\\dot{H}}{3H^{2}}. \\tag{31}\\] Hence, for (28) \\[w_{\\rm eff}=-1-\\frac{t-t_{0}}{\\sqrt{3}\\kappa\\left(\\frac{w_{0}}{2}\\left(t-t_{0 }\\right)^{2}+c_{0}\\right)^{3/2}}\\, \\tag{32}\\] which surely crosses \\(w_{\\rm eff}=-1\\) when \\(t=t_{0}\\). On the other hand, for (29), one gets \\[w_{\\rm eff}=-1-\\frac{2A\\omega\\cos\\left(\\omega t+\\alpha\\right)}{\\sqrt{3}\\kappa \\left(\\rho_{0}+A\\sin\\left(\\omega t+\\alpha\\right)\\right)^{3/2}}\\, \\tag{33}\\] which oscillates around \\(w_{\\rm eff}=-1\\) as in [13]. As an another example, we consider the EoS \\[\\dot{P}-3H\\left(\\rho+P\\right)=U(H). \\tag{34}\\] Here \\(U(H)\\) is a proper function of the Hubble rate \\(H\\). Then by using (1), one arrives at \\[\\dot{\\rho}+\\dot{P}=U(H). \\tag{35}\\] In a simplest case, \\(U(H)=0\\), it follows \\[\\rho+P=c\\quad(c:\\mbox{constant}). \\tag{36}\\] When the other contributions to the energy density and pressure are neglected, because of (30), we find \\(\\dot{H}\\) is constant and \\[H=-\\frac{\\kappa^{2}c}{2}t. \\tag{37}\\] As an another case, we may consider \\[U(H)=\\frac{2\\omega^{2}}{\\kappa^{2}}H. \\tag{38}\\] Here \\(\\omega\\) is a constant. Then combining (35), (30), and (38), we find \\[\\ddot{H}=-\\omega^{2}H\\, \\tag{39}\\] which is the equation typical for the harmonic oscillator in classical mechanics. Hence, the oscillating Hubble rate is obtained \\[H=H_{0}\\sin\\left(\\omega t+\\alpha\\right). \\tag{40}\\] Here \\(H_{0}\\) and \\(\\alpha\\) are constants of the integration. Thus, we demonstrated that inhomogeneous generalized EoS (linear in the pressure derivative) leads to the interesting accelerating (often oscillating) late-time universe. ## V The equation of state quadratic on the pressure derivative In this section, as an immediate generalization, the case that the EoS is not linear on \\(\\dot{P}\\) but quadratic is considered. Let the equation to pressure and its derivatives looks like an energy in the classical mechanics: \\[E=\\frac{1}{2}\\dot{P}^{2}+V(P). \\tag{41}\\]Here \\(E\\) is a constant but as it is an analogue of the energy, it is denoted as \\(E\\). We should note that \\(E\\) does not correspond to real energy in universe. This may be also considered as implicit form of EoS. First example is \\[V(P)=\\tilde{a}P\\, \\tag{42}\\] with a constant \\(\\tilde{a}\\). Then by the analogy with the classical mechanics, we find \\[P = -\\frac{1}{2}\\tilde{a}t^{2}+v_{0}t+p_{0}\\,\\] \\[E = \\frac{1}{2}v_{0}^{2}+\\tilde{a}p_{0}. \\tag{43}\\] Here \\(v_{0}\\) and \\(p_{0}\\) are constants. In case that other contributions to the total energy density are large, as in the early universe, the Hubble rate \\(H\\) could not be so rapidly changed. Then we may assume that the Hubble rate \\(H\\) could be almost constant \\(H=H_{0}\\). 
Using (1), one obtains \\[\\rho = \\rho_{0}{\\rm e}^{-3H_{0}t} \\tag{44}\\] \\[-\\frac{\\tilde{a}}{2}\\left(\\frac{2}{27H_{0}^{3}}-\\frac{2t}{9H_{0}^ {2}}+\\frac{t^{2}}{3H_{0}}\\right)\\] \\[-v_{0}\\left(-\\frac{1}{9H_{0}^{2}}+\\frac{t}{3H_{0}}\\right)+\\frac{ p_{0}}{3H_{0}}\\.\\] Here \\(\\rho_{0}\\) is a constant. The explicit form of (inhomogeneous) EoS may be found combining two above equations. On the other hand, we may also consider the case that the other contributions to the energy density and pressure are neglected as in late-time or future unverse. Then deleting \\(\\rho\\) from (1) and (19), we have \\[\\dot{H}+\\frac{3}{2}H^{2}+\\frac{\\kappa^{2}}{2}P=0. \\tag{45}\\] For (43), Eq.(45) admits the solution \\[H=h_{0}t+h_{1}\\, \\tag{46}\\] when \\[\\tilde{a}=3h_{0}\\,\\quad v_{0}=-3h_{0}h_{1}\\,\\] \\[p_{0}=-h_{0}-h_{1}^{2}. \\tag{47}\\] For (46), the effective EoS parameter \\(w_{\\rm eff}\\) defined by (31) has the following form: \\[w_{\\rm eff}=-1-\\frac{2h_{0}}{3\\left(h_{0}t+h_{1}\\right)^{2}}\\, \\tag{48}\\] which goes to \\(-1\\) when \\(t\\) goes to infinity. Hence, the emerging universe seems to be the asymptotically de Sitter one. Second example is \\[V(P)=\\frac{1}{2}\\omega^{2}P^{2}. \\tag{49}\\] Then we have \\[P = A\\sin\\left(\\omega t+\\alpha\\right)\\] \\[E = A^{2}\\omega^{2}. \\tag{50}\\] where \\(A\\) and \\(\\alpha\\) are constants. Then in the case that other contribution to the total energy density is large, as in the early universe, the Hubble rate \\(H\\) could be almost constant \\(H=H_{0}\\), we find \\[\\rho = \\rho_{0}{\\rm e}^{-3H_{0}t} \\tag{51}\\] \\[-\\frac{A}{9H_{0}^{2}+\\omega^{2}}\\left(3H_{0}\\sin\\left(\\omega t+ \\alpha\\right)\\right.\\] \\[\\left.-\\omega\\cos\\left(\\omega t+\\alpha\\right)\\right)\\,\\] with a constant \\(\\rho_{0}\\). This corresponds to de Sitter universe. On the other hand, when other contributions to the total energy density can be neglected, as in the late-time universe, by using (45), one gets \\[\\frac{d^{2}a^{3/2}}{dt^{2}}+\\frac{3}{4}\\kappa^{2}A\\sin\\left(\\omega t+\\alpha \\right)a^{3/2}=0. \\tag{52}\\] By defining a new variable \\(s\\) \\[s\\equiv\\omega t+\\alpha+\\frac{\\pi}{2}\\, \\tag{53}\\] one obtains a kind of Mathieu equation: \\[0=\\frac{d^{2}a^{3/2}}{ds^{2}}+\\frac{3\\kappa^{2}}{4\\omega^{2}}\\cos s\\,a^{3/2}\\, \\tag{54}\\] whose solution is given by \\[a^{3/2}=\\sum_{n=0}^{\\infty}c_{n}\\cos(nt)+\\sum_{n=1}^{\\infty}s_{n}\\sin(nt). \\tag{55}\\] Here the coefficients \\(c_{n}\\) and \\(s_{n}\\) are given by recursively solving the following equations: \\[c_{0}=c\\,\\quad c_{1}=0\\] \\[-n^{2}c_{n}+\\frac{q}{2}\\left(c_{n-1}+c_{n+1}\\right)=0\\ \\left(n\\geq 1 \\right)\\,\\] \\[s_{1}=s\\,\\quad s_{2}=-\\frac{2}{q}s\\,\\] \\[-n^{2}s_{n}+\\frac{q}{2}\\left(s_{n-1}-s_{n+1}\\right)=0\\ \\left(n\\geq 2 \\right)\\,\\] \\[q\\equiv\\frac{3\\kappa^{2}}{4\\omega^{2}}. \\tag{56}\\]Hence, \\(a\\) has a periodicity \\(1/\\omega\\). In the expression (55), \\(a\\) is not always positive. Then physically the regions where \\(a^{3/2}\\) is not negative could be allowed and the points \\(a=0\\) could correspond to Big Bang/Big Crunch/Big Rip[15]. We should note that the expressions of \\(\\rho_{0}\\) in (44) and (51) are not always positive. Then only the period(s) where \\(\\rho_{0}\\) is positive could be allowed in the real universe. ## VI Coupling with the matter ### No direct interaction between dark energy and matter Let us now include the matter. 
For simplicity, we consider the matter with constant EoS parameter \\(w_{m}\\) so that the matter energy density \\(\\rho_{m}\\) is given by \\[\\rho=\\rho_{0}\\left(\\frac{a(t)}{a(t_{0})}\\right)^{-3(1+w_{m})}. \\tag{57}\\] In case of (11), the total energy density is given by \\[\\rho_{\\rm tot}=\\rho_{c}+x^{\\sigma}\\left[C_{1}J_{\ u}(\\hat{a}x^{\\lambda})+C_{2 }J_{-\ u}(\\hat{a}x^{\\lambda})\\right]+\\rho_{0}x^{-3(1+w_{m})}. \\tag{58}\\] and the Hubble rate \\(H\\) is given by \\[H = \\kappa\\left\\{\\frac{1}{3}\\left(\\rho_{c}+x^{\\sigma}\\left[C_{1}J_{ \ u}(\\hat{a}x^{\\lambda})+C_{2}J_{-\ u}(\\hat{a}x^{\\lambda})\\right]\\right.\\right. \\tag{59}\\] \\[\\left.\\left.+\\rho_{0}x^{-3(1+w_{m})}\\right)\\right\\}^{1/2}\\.\\] In future, \\(x\\) becomes large, then the Hubble rate \\(H\\) goes to a constant (with oscillations): \\[H\\rightarrow\\kappa\\sqrt{\\frac{\\rho_{0}}{3}}\\, \\tag{60}\\] which tells \\(w_{\\rm eff}\\rightarrow-1\\). On the other hand, in the early universe, \\(x\\) should be small. Hence, one finds \\[H = \\kappa\\left\\{\\frac{1}{3}\\left(\\rho_{c}+x^{\\sigma}\\left[C_{1} \\left(\\frac{\\hat{a}x^{\\lambda}}{2}\\right)^{\ u}+C_{2}\\left(\\frac{\\hat{a}x^{ \\lambda}}{2}\\right)^{-\ u}\\right]\\right.\\right. \\tag{61}\\] \\[\\left.\\left.+\\rho_{0}x^{-3(1+w_{m})}\\right)\\right\\}^{1/2}\\.\\] If \\(-3(1+w_{m})<\\sigma-\\lambda\\), the contribution from matter becomes dominant and Hubble rate is \\[H=\\kappa\\left\\{\\frac{\\rho_{0}}{3}\\right\\}^{1/2}x^{-3(1+w_{m})/2}\\, \\tag{62}\\] which gives, as well-known, \\[H\\sim\\frac{\\frac{2}{3(1+w_{m})}}{t}. \\tag{63}\\] On the other hand if \\(\\sigma-\\lambda\ u<-3(1+w_{m})<0\\), Hubble rate is \\[H=\\kappa\\left\\{\\frac{C_{2}}{3}\\left(\\frac{\\hat{a}}{2}\\right)^{-\ u}\\right\\}^{ 1/2}x^{(\\sigma-\\lambda\ u)/2}\\, \\tag{64}\\] which gives \\[H=\\frac{-\\frac{2}{\\sigma-\\lambda\ u}}{t}. \\tag{65}\\] By comparing (65) with (63) or (11), it follows that the effective EoS parameter is given by \\[w_{\\rm eff}=-1-\\frac{\\sigma-\\lambda\ u}{3}. \\tag{66}\\] For the model in (23), solving (24), we find the \\(t\\)-dependence of \\(\\rho\\). Then the FRW equation gives \\[\\frac{3}{\\kappa^{2}}H^{2}=\\rho(t)+\\rho_{0}\\left(\\frac{a(t)}{a(t_{0})}\\right)^ {-3(1+w_{m})}. \\tag{67}\\] For the case (25), when \\(t\\) is large enough, the second term in the r.h.s. of (67) could be neglected and we will obtain (28). If \\(c_{0}\\) in (25) is small enough, when \\(t\\sim t_{0}\\), the second term in (67) could be dominant and we may obtain (63). For the case (29), in the early universe, where \\(a\\) is small, the second term in (67) could be dominant and one obtains (63), again. Especially for the dust \\(w_{m}=0\\), we find \\(H\\sim\\frac{2/3}{t}\\), that is, \\(a\\sim t^{\\frac{2}{3}}\\). In the late time universe, the first term could be dominant and one gets (29). Three years WMAP data are recently analyzed in Ref.[16], which shows that the combined analysis of WMAP with supernova Legacy survey (SNLS) constrains the dark energy equation of state \\(w_{DE}\\) pushing it towards the cosmological constant. The marginalized best fit values of the equation of state parameter at \\(68\\%\\) confidence level are given by \\(-1.14\\leq w_{DE}\\leq-0.93\\). In case of a prior that universe is flat, the combined data gives \\(-1.06\\leq w_{DE}\\leq-0.90\\). In our models, as shown in (16), (17), (48), and (60), the effective EoS parameter is \\(w_{\\rm eff}\\sim-1\\) and there is no contradiction with the above WMAP data. 
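To make the statement \(w_{\rm eff}\sim-1\) concrete for this class of solutions, the Hubble rate (59) and the corresponding effective EoS parameter (31) can be evaluated directly. The sketch below is illustrative only: every constant (\(\kappa\), \(\rho_{c}\), \(\rho_{0}\), \(C_{1}\), \(C_{2}\), \(\hat{a}\), \(\nu\), \(m\), \(\xi\), \(w_{m}\)) is an arbitrary sample value rather than a fitted one, and \(\dot{H}\) is converted to an \(x\)-derivative through the chain rule \(\dot{H}=Hx\,dH/dx\).

```python
import numpy as np
from scipy.special import jv

# Illustrative constants (sample values, not fits)
kappa, rho_c, rho_0 = 1.0, 1.0, 0.3
C1, C2 = 0.5, 0.2
m, xi = 2.0, 0.5
w_m = 0.0                          # dust
sigma = -2.0 - m / 2.0 - 1.0 / xi  # eq. (12)
lam = m / 2.0
a_hat = 1.0
nu = 0.8                           # Bessel order; sample value (its definition is not repeated here)

def rho_tot(x):
    """Total energy density, eq. (58): the solution (11) plus the matter term (57)."""
    dark = rho_c + x**sigma * (C1 * jv(nu, a_hat * x**lam) + C2 * jv(-nu, a_hat * x**lam))
    return dark + rho_0 * x**(-3.0 * (1.0 + w_m))

def H(x):
    """Hubble rate, eq. (59)."""
    return kappa * np.sqrt(rho_tot(x) / 3.0)

def w_eff(x):
    """Effective EoS parameter, eq. (31), using dH/dt = H x dH/dx."""
    eps = 1e-6 * x
    dHdx = (H(x + eps) - H(x - eps)) / (2.0 * eps)
    return -1.0 - 2.0 * x * dHdx / (3.0 * H(x))

for x in (1.0, 5.0, 20.0, 100.0):
    print(x, w_eff(x))   # tends toward -1 as x grows, as stated in the text
```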
We should also note that when mater is coupled, we find \\(w_{\\rm eff}\\sim w_{m}\\) in the early universe, as in (63). Thus when \\(w_{m}<-1/3\\), there should occur the transition from deceleration to acceleration. ### Dark energy interacting with matter Generally speaking, the matter interacts with the dark energy. In such a case, the total energy density \\(\\rho_{\\rm tot}\\) consists of the contributions from the dark energy and the matter: \\(\\rho_{\\rm tot}=\\rho+\\rho_{m}\\). If we define, however, the matter energy density \\(\\rho_{m}\\) properly, we can also _define_ the matter pressure \\(p_{m}\\) and the dark energy pressure \\(p\\) by \\[p_{m}\\equiv-\\rho_{m}+\\frac{\\dot{\\rho}}{3H}\\,\\quad P\\equiv P_{\\rm tot}-P_{m}. \\tag{68}\\] Here \\(P_{\\rm tot}\\) is the total pressure. Hence, the matter and dark energy satisfy the energy conservation laws separately, \\[\\dot{\\rho}_{m}+3H\\left(\\rho_{m}+P_{m}\\right)=0\\,\\quad\\dot{\\rho}+3H\\left(\\rho+P \\right)=0. \\tag{69}\\] In case, however, that the EoS parameter \\(w_{m}\\) for the matter is almost constant, one may write the conservation law as \\[\\dot{\\rho}_{m}+3H\\left(1+w_{m}\\right)\\rho_{m}=Q\\, \\tag{70}\\] and therefore for the dark energy \\[\\dot{\\rho}+3H\\left(\\rho+P\\right)=-Q\\, \\tag{71}\\] so that the total energy density and the pressure satisfy the conservation law: \\[\\dot{\\rho}_{\\rm tot}+3H\\left(\\rho_{\\rm tot}+P_{\\rm tot}\\right)\\rho_{m}=0. \\tag{72}\\] In (70), \\(Q\\) expresses the shift from the constant EoS parameter case. As an example, we consider the case that \\(Q\\) is given by a function \\(q=q(a)\\) as \\[Q=Haq^{\\prime}(a)\\rho_{m}. \\tag{73}\\] Combining (73) with (70), one gets \\[\\rho_{m}=\\rho_{m0}a^{-3(1+w_{m})}{\\rm e}^{q(a)}. \\tag{74}\\] Here \\(\\rho_{m0}\\) is a constant of the integration. Hence, the conservation law (1) is modified, through (71) as \\[\\dot{\\rho}+3H\\left(\\rho+P\\right)=-\\rho_{m0}Ha^{-(2+3w_{m})}q^{\\prime}(a){\\rm e }^{r(a)}\\, \\tag{75}\\] and (6) is also modified as \\[\\frac{1}{3}x\\frac{d\\rho}{dx}+\\rho=-P-S(x). \\tag{76}\\] Here \\[S(x)\\equiv-\\frac{\\rho_{m0}}{3}\\left(a(t_{0})x\\right)^{-(2+3w_{m})}q^{\\prime} \\left(a(t_{0})x\\right){\\rm e}^{q(a(t_{0})x)}. \\tag{77}\\] Note that Eq.(8) is also modified: it now contains the inhomogeneous terms: \\[x^{2}\\frac{d^{2}\\rho}{dx^{2}}+x\\frac{d\\rho}{dx}\\left(4+\\frac{1} {\\xi}\\right)+\\frac{3}{\\xi}[\\rho+f(\\rho,x)] \\tag{78}\\] \\[= 3x\\frac{dS(x)}{dx}+\\frac{3}{\\xi}S(x)\\] \\[= 3x^{1-1/\\xi}\\frac{d}{dx}\\left(x^{1/\\xi}S(x)\\right)\\.\\] Let a (special) solution of (8) be \\(\\rho=\\rho_{s}(x)\\). Then in case of (9) with \\(\\gamma(x)=\\gamma_{0}+\\alpha x^{m}\\), the general solution corresponding to (11) is given by \\[\\rho=\\rho_{s}(x)+x^{\\sigma}\\left[C_{1}J_{\ u}(\\hat{a}x^{\\lambda})+C_{2}J_{-\ u }(\\hat{a}x^{\\lambda})\\right]\\,. \\tag{79}\\] where \\(\\rho_{c}\\) should be included in \\(\\rho_{s}(x)\\). It is also noted that the initial conditions are relevant to determine \\(C_{1}\\) and \\(C_{2}\\) but irrelevant for \\(\\rho_{s}(x)\\). 
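For a given coupling, the modified matter density (74) and the source term (77) are straightforward to evaluate. The sketch below is illustrative only: the coupling function \(q(a)\) used here is a hypothetical choice made up for the example (the paper leaves \(q\) general), and all remaining constants are arbitrary sample values.

```python
import numpy as np

# Hypothetical coupling function q(a); any differentiable q will do for eq. (73)
q  = lambda a: 0.2 * np.exp(-a)
dq = lambda a, eps=1e-6: (q(a + eps) - q(a - eps)) / (2.0 * eps)

rho_m0, w_m, a0 = 1.0, 0.0, 1.0   # sample values (dust, a(t_0) = 1)

def rho_m(a):
    """Matter energy density under the shifted conservation law, eq. (74)."""
    return rho_m0 * a**(-3.0 * (1.0 + w_m)) * np.exp(q(a))

def S(x):
    """Source term entering the modified balance equation (76), eq. (77)."""
    a = a0 * x
    return -(rho_m0 / 3.0) * a**(-(2.0 + 3.0 * w_m)) * dq(a) * np.exp(q(a))

for x in (0.5, 1.0, 2.0, 5.0):
    print(x, rho_m(a0 * x), S(x))
```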
As an example, we can find \\[\\rho_{s}(x)=\\rho_{c}+\\rho_{0}x^{\\eta}\\, \\tag{80}\\] with constants \\(\\rho_{0}\\) and \\(\\eta\\) when \\[{\\rm e}^{q(a)}={\\rm e}^{q_{0}}\\] \\[-\\frac{\\rho_{0}}{\\rho_{m0}}\\left[\\frac{\\left(\\eta^{2}-3\\eta/4+ \\eta/\\xi+3\\gamma_{0}/\\xi\\right)a_{0}^{-\\eta}a^{\\eta+3(1+w_{m})}}{\\left(\\eta+1/ \\xi\\right)\\left(\\eta+3(1+w_{m})\\right)}\\right.\\] \\[\\left.+\\frac{(3\\alpha/\\xi)a_{0}^{-m-\\eta}a^{m+\\eta+3(1+w_{m})}}{ \\left(m+\\eta+1/\\xi\\right)\\left(m+\\eta+3(1+w_{m})\\right)}\\right]\\] \\[-\\frac{3s_{0}a_{0}^{1/\\xi}a^{-1/\\xi+3(1+w_{m})}}{\\rho_{m0}\\left(- 1/\\xi+3(1+w_{m})\\right)}. \\tag{81}\\] which gives \\[S = \\frac{\\rho_{0}}{3}\\left[\\frac{\\left(\\eta^{2}-3\\eta/4+\\eta/\\xi+3 \\gamma_{0}/\\xi\\right)x^{\\eta}}{\\eta+1/\\xi}\\right. \\tag{82}\\] \\[\\left.+\\frac{3\\alpha x^{m+\\eta}}{\\left(m+\\eta+1/\\xi\\right)\\xi} \\right]+s_{0}x^{-1/\\xi}\\.\\] In (81) and (82), \\(q_{0}\\), \\(s_{0}\\) are constants and \\(a_{0}\\equiv a(t_{0})\\). In case that \\[\\eta+3(1+w_{m}),\\ m+\\eta+3(1+w_{m}),\\ -1/\\xi+3(1+w_{m})<0\\, \\tag{83}\\] we find \\({\\rm e}^{q(a)}\\rightarrow{\\rm e}^{q_{0}}\\) when \\(a\\) becomes large, that is, in the late time universe. Thus, \\(\\rho_{m}\\rightarrow\\rho_{m0}{\\rm e}^{q_{0}}a^{-(2+3w_{m})}\\). Furthermore if \\(\\eta<0\\), we find \\(\\rho_{s}\\rightarrow\\rho_{0}\\), that is, \\(H\\) goes to a constant, which may lead to the asymptotically deSitter space. Clearly, for more complicated coupling \\(Q\\), more sophisticated accelerating cosmology may be constructed. ## VII Discussion In summary, we discussed the constrained EoS for cosmic fluid where the relaxation equation for pressure is introduced. It is shown that such EoS is equivalent to usual inhomogeneous EoS [6] which contains scale factor dependent terms. Subsequently, the generalized inhomogeneous EoS with time derivatives of pressure is presented. For the number of explicit examples, the accelerating dark energy cosmology as follows from such EoS cosmic fluid is constructed. It turns out to be the asymptotically de Sitter universe or oscillating universe with long accelerating phase and transtion from deceleration to acceleration. The consistent coupling of such constrained EoS dark fluid with matter is discussed. It is shown that emerging FRW cosmology may be consistent with three years WMAP data. Of course, there are many ways to generalize the EoS for cosmic fluid and to investigate the corresponding impact of such generalization to dark cosmos. The physics behind such generalization remains to be quite obscure (as dark energy itself and its sudden appearence). At best, this may be considered as some phenomenological approximation. Nevertheless, having in mind, that most of modern attempts to understand dark energy including strings/M-theory, brane-worlds, modified gravity, etc lead to effective description in terms of cosmic fluid with unusual form of EoS, it turns out to be extremely powerful approach. From another side, the reconstruction of the cosmic fluid EoS may be done for any given cosmology compatible with observational data which may finally select the true dark energy theory. ## Acknowledgements We are very grateful to A. Balakin for stimulating discussions and participation at the early stage of this work. 
The research by SDO was supported in part by LRSS project n4489.2006.02 (Russia), by RFBR grant 06-01-00609 (Russia), by project FIS2005-01181 (MEC, Spain) and by the project 2005SGR00790 (AGAUR,Catalunya, Spain) and the research by S.N. was supported in part by YITP computer facilities. ## References * (1) T. Padmanabhan, Phys. Rept. **380**, 235 (2003); arXiv:astro-ph/0603114; L. Perivolaropoulos, arXiv:astro-ph/0601014; S. Bludman, arXiv:astro-ph/0605198. * (2) E. J. Copeland, M. Sami and S. Tsujikawa, arXiv:hep-th/0603057. * (3) S. Nojiri and S. D. Odintsov, arXiv:hep-th/0601213. * (4) V. F. Cardone, C. Tortora, A. Troisi and S. Capozziello, Phys. Rev. **D73**, 043508 (2006) [arXiv:astro-ph/0511528]. * (5) S. Nojiri and S. D. Odintsov, Phys. Rev. D **70**, 103522 (2004), [arXiv:hep-th/0408170]; H. Stefancic, Phys. Rev. D **71**, 084024 (2005), [arXiv:astro-ph/0411630]. * (6) S. Nojiri and S. D. Odintsov, Phys. Rev. D **72**, 103522 (2005) [arXiv:hep-th/0505215]; S. Capozziello, V. Cardone, E. Elizalde, S. Nojiri and S. D. Odintsov, Phys. Rev. D **73**, 043512(2006), [arXiv:astro-ph/0508350]. * (7) J. Barrow, Phys. Lett. B **180**, 335 (1987); I. Brevik and O. Gorbunova, GRG **37**, 2039 (2005) [arXiv:gr-qc/0504001]; I. Brevik, arXiv:gr-qc/0601100; J. Ren and X. Meng, Phys. Lett. B **633**, 1 (2006) [arXiv:astro-ph/0511163]; M. Hu and X. Meng, Phys. Lett. B **635**, 186 (2006) [arXiv:astro-ph/0511615]; M. Cataldo, N. Cruz and S. Lepe, Phys. Lett. B **619**, 5 (2005). * (8) L. Amendola, Phys. Rev. D **62**, 043511 (2000); W. Zimdahl and D. Pavon, Phys. Lett. B **521**, 133 (2001); W. Zimdahl, D. J. Schwarz, A. B. Balakin and D. Pavon, Phys. Rev. D **64**, 063501 (2001); L. P. Chimento, A. S. Jakubi, D. Pavon and W. Zimdahl, Phys. Rev. D **67**, 083513 (2003); V. Faraoni, Phys. Rev. D **69**, 123520 (2004); B. Gumjudpai, T. Naskar, M. Sami and S. Tsujikawa, JCAP **0506**, 007 (2005) [arXiv:hep-th/0502191]; R. G. Cai and A. Wang, arXiv:hep-th/0411025; Z. Guo, R. G. Cai and Y. Z. Zhang, arXiv:astro-ph/0412624; JCAP **0505**,002 (2005); V. Rubakov, arXiv:hep-th/0604153; E. Elizalde, S. Nojiri and S. D. Odintsov, Phys. Rev. D **70**, 043539 (2004) [arXiv:hep-th/0405034]; S. Nojiri and S. D. Odintsov, Phys. Lett. B **562**, 147 (2003) [arXiv:hep-th/0303117]; S. Capozziello, S. Nojiri and S. D. Odintsov, Phys. Lett. B **632**, 597 (2006), [arXiv:hep-th/0507182]; Z. Huang, H. Lu and W. Fang, arXiv:hep-th/0604160; H. Wei and R.-G. Cai, Phys. Rev. D **73**, 083002 (2006); D. Rolarski, arXiv:astro-ph/0605532. * (9) J. Grande, J. Sola and H. Stefancic, arXiv:gr-qc/0604057; J. Barrow and T. Clifton, arXiv:gr-qc/0604063; B. Wang, C. Lin and E. Abdalla, arXiv:hep-th/0509107; B. Wang, Y. Gong and E. Abdalla, Phys. Lett. B **624**, 141 (2005);S. Nojiri, S. D. Odintsov and S. Tsujikawa, Phys. Rev. D **71**, 063004 (2005) [arXiv:hep-th/0501025]; S. Tsujikawa, Phys. Rev. D **73**, 103504 (2006); E. Elizalde, S. Nojiri, S.D. Odintsov and P. Wang, Phys. Rev. **D71**, 103504 (2005), hep-th/0502082; Z. Guo and Y. Zhang, Phys. Rev. D **71**, 023501 (2005); M. Dabrowski, C. Kiefer and B. Sandhoefer, arXiv:hep-th/0605229; M. Alimohammadi and H. Mohseni, Phys. Rev. D **73**, 083527 (2006). * (10) S. Capozziello, S. Nojiri and S. D. Odintsov, Phys. Lett. B **634**, 93 (2006) [arXiv:hep-th/0512118]; S. Capozziello, S. Nojiri, S. D. Odintsov and A. Troisi, [arXiv:astro-ph/0604431]. * (11) L. Amendola, M. Quartin, S. Tsujikawa and I. Waga, arXiv:astro-ph/0605488. * (12) S. Dodelson, M. Kaplinghat and E. Stewart, Phys. 
Rev. Lett. **85**, 5276 (2000); V. Sahni and L. Wang, arXiv:astro-ph/9910097; B. Feng, M. Li, Y. Piao and X. Zhang, arXiv:astro-ph/0407432; G. Yang and L. Wang, arXiv:astro-ph/0510006; I. Brevik, S. Nojiri, S. D. Odintsov and L. Vanzo, Phys. Rev. D **70**, 043520 (2004); W. Zhao, arXiv:astro-ph/0604459. * (13) S. Nojiri and S. D. Odintsov, Phys. Lett. **B637**, 139 (2006), [arXiv:hep-th/0603062]. * (14) R. Lazkoz, R. Maartens and E. Majerotto, arXiv:astro-ph/0605701; P. Apostolopoulos and N. Tetradis, arXiv:hep-th/0604014; G. Kofinas, G. Panotopoulos and T. Tomaras, JHEP **0601**, 107 (2006); L. Chimento, R. Lazkoz, R. Maartens and I. Quiros, arXiv:astro-ph/0605450. * (15) B. McInnes, JHEP **0208**,029 (2002). * (16) D. N. Spergel _et al._, arXiv:astro-ph.0603449.
We suggest generalizing the dark energy equation of state (EoS) by introducing a relaxation equation for the pressure, which is equivalent to an inhomogeneous EoS for the cosmic fluid of the kind that often appears as the effective model from strings/brane-worlds. As another, broader generalization we discuss an inhomogeneous EoS that contains derivatives of the pressure. For several explicit examples motivated by the analogy with classical mechanics, the accelerating FRW cosmology is constructed. It turns out to be an asymptotically de Sitter or oscillating universe with a possible transition from the deceleration to the acceleration phase. The coupling of dark energy with matter in the accelerating FRW universe is considered and shown to be consistent with a constrained (or inhomogeneous) EoS. pacs: 11.25.-w, 95.36.+x, 98.80.-k
Write a summary of the passage below.
arxiv-format/0606076v1.md
# Critical phenomena in atmospheric precipitation Ole Peters OLS, Los Alamos National Laboratory, MS-B258, Los Alamos, NM 87545, USA. Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA. Department of Atmospheric Sciences and Institute of Geophysics and Planetary Physics, University of California, Los Angeles, 405 Hilgard Ave., Los Angeles, CA 90095-1565, USA. J. David Neelin Department of Atmospheric Sciences and Institute of Geophysics and Planetary Physics, University of California, Los Angeles, 405 Hilgard Ave., Los Angeles, CA 90095-1565, USA. ###### Journal reference: Nature Physics **2**, 393 - 396 (2006). doi:10.1038/nphys314 Self-organized criticality has been proposed as an explanation for scale-free behaviour in many different physical systems[8]. In most of these, however, it is impossible to measure standard observables for critical phenomena, such as order parameters, tuning parameters, or susceptibilities. Consequently, despite theoretical advances[2; 3; 4], SOC has only loosely been connected to the broader field of critical phenomena. The present study helps position SOC as a sub-branch of critical phenomena by examining a system where the identification and measurement of standard observables is feasible. At short time scales the majority of tropical rainfall occurs in intense rain events that exceed the climatological mean rate by an order of magnitude or more. Precipitation has been found to be sensitive to variations in water vapour along the vertical on large space and time scales both in observations[9; 10] and in models.[11; 12; 13] This is due to the effect of water vapour on the buoyancy of cloud plumes as they entrain surrounding air by turbulent mixing. We conjecture that the transition to intense convection, accompanying the onset of intense precipitation, shows signs of a continuous phase transition. The water vapour, \\(w\\), plays the role of a tuning parameter and the precipitation rate, \\(P(w)\\), is the order parameter Note that such a large-scale continuous phase transition involving the flow regime of the convecting fluid is entirely different from the well-known discontinuous phase transition of condensation at the droplet scale. We analyzed satellite microwave retrievals of rainfall, \\(P\\), water vapour, \\(w\\), cloud liquid water and sea surface temperature (SST) from the Tropical Rainfall Measuring Mission from 2000 to 2005. Observations from the western Pacific provided initial support for our conjecture: a power-law pick up of the order parameter above a critical value of the tuning parameter, \\(w_{c}\\), was observed. We proceded to test whether other observables also behaved as predicted by the theory of phase transitions. As motivation for our conjecture consider a generic lattice-based model which exhibits a continuous phase transition. Particle-conserving rules defining the model ascribe a number of particles to every lattice site, and demand hopping of particles to nearest-neighbour sites when a local density threshold is exceeded. The global effect of these rules is a phase transition at a critical value of the global particle density between a quiescent phase (where the system eventually settles into a stable configuration) and an active phase (where stable configurations are inaccessible). The tuning parameter is the particle density and the order parameter is identified as the density of active sites[14]. 
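The generic lattice model invoked here is simple to simulate, which makes the roles of the tuning and order parameters explicit. The sketch below is one possible realization, not the model of any particular reference: a closed, particle-conserving lattice with stochastic (Manna-style) toppling on a small periodic grid, with all sizes, thresholds and densities chosen purely for illustration; it measures the stationary density of active sites as a function of the conserved particle density.

```python
import numpy as np

rng = np.random.default_rng(0)
NEIGH = ((1, 0), (-1, 0), (0, 1), (0, -1))

def activity_density(density, L=24, threshold=2, t_max=1000):
    """Closed, particle-conserving lattice: a site holding >= `threshold` particles is
    active and redistributes `threshold` particles to randomly chosen nearest
    neighbours (periodic boundaries, so the total particle number is conserved)."""
    grid = np.zeros((L, L), dtype=int)
    n_particles = int(density * L * L)
    sites = rng.integers(0, L, size=(n_particles, 2))
    np.add.at(grid, (sites[:, 0], sites[:, 1]), 1)   # random initial placement

    act = []
    for _ in range(t_max):
        active = np.argwhere(grid >= threshold)
        if len(active) == 0:
            return 0.0                               # absorbing configuration: quiescent phase
        act.append(len(active) / L**2)
        for i, j in active:
            grid[i, j] -= threshold
            for _ in range(threshold):
                di, dj = NEIGH[rng.integers(4)]
                grid[(i + di) % L, (j + dj) % L] += 1
    return float(np.mean(act[t_max // 2:]))          # stationary activity density

for rho in (0.5, 0.7, 0.9, 1.1):
    print(rho, activity_density(rho))                # picks up above a critical density
```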
SOC can be described in terms of such absorbing-state phase transitions.[2; 14] Here a coupling between order parameter and tuning parameter is introduced by opening the boundaries and adding a slow drive: whenever activity ceases, a new particle is added to the system, i.e., an increase in the tuning parameter. Large activity on the other hand leads to dissipation (particle loss) at the boundaries, i.e., a reduction of the tuning parameter. Such open, slowly driven systems organise themselves to the critical point of the corresponding (closed boundaries, no drive) absorbing state phase transition. The critical behaviour as derived from finite-size scalinganalyses is the same in both cases[3; 4], although the reason for this universality is not fully understood.[15] The scale-free avalanche size distributions in SOC models result from the proximity of the system to a critical point. From the meteorological perspective, a related motivation for our conjecture arises. Atmospheric convection has long been viewed similarly in terms of a slow drive (surface heating and evaporation) and fast dissipation (of buoyancy and rainwater) in precipitating convection. Surface heating and evaporation drive turbulent mixing that maintains a moist atmospheric boundary layer. Combined with radiative cooling, conditional instability is created--while sub-saturated air remains stable, saturated condensing plumes can rise through the full depth of the tropical troposphere. The fast dissipation by moist convection prevents the troposphere from deviating strongly from marginal stability.[16] Although observational tests of this approximate QE state of the tropical troposphere have limited precision, it forms the basis of most convective parameterizations in large scale models[17] and much tropical dynamical theory.[18; 19] Taking large-scale flows into account modifies the process in space and time but does not change it fundamentally. This perspective suggests that a critical point in the water vapour would act as an attractor. Indeed this is basically the convective QE postulate.[6] The critical value \\(w_{c}\\) depends, _e.g._, on atmospheric temperature, but for present purposes this translates well enough into a critical amount of water vapour for a given climatic region. Regions here are defined by longitude ranges given in the caption of Fig. 1 corresponding to major ocean basins, for oceanic grid-points within 20S-20N. Data are collected at 0.25 degree latitude-longitude resolution. The observable \\(w\\) captures vertically integrated, or column, water vapour. It is given as a volume per area in units of mm. In Fig. 1 we show as a function of the tuning parameter \\(w\\) the average value of the order parameter \\(\\left\\langle P\\right\\rangle(w)\\) and the susceptibility of the system, represented by the order parameter variance, \\(\\sigma_{P}^{2}(w)\\), discussed following Eq. (2). The ensemble size for the average ranges from a few thousand at extremes to \\(10^{6}\\) at typical \\(w\\)-values. Above \\(w_{c}\\), the order parameter is well approximated by the standard form[20] \\[\\left\\langle P\\right\\rangle(w)=a(w-w_{c})^{\\beta}, \\tag{1}\\] where \\(a\\) is a system-dependent constant and \\(\\beta\\) is a universal exponent. The deviations from power-law behaviour below \\(w_{c}\\) in the main graph of Fig. 1 are typical of critical systems of finite size.[21] The critical value \\(w_{c}\\) is non-universal and changes with regional climatic conditions, as does the amplitude \\(a\\). 
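In practice the pick-up (1) is obtained by fitting binned averages of \(P\) against \(w\). A minimal sketch of such a fit is given below; the binned data are synthetic stand-ins generated for illustration (they are not the TMI retrievals), and the critical value of 65 mm used to generate them is a sample number, not a measured one. The three fitted quantities are the amplitude \(a\), the critical value \(w_{c}\) and the exponent \(\beta\) of equation (1).

```python
import numpy as np
from scipy.optimize import curve_fit

def pickup(w, a, w_c, beta):
    """Order parameter above criticality, eq. (1); identically zero below w_c."""
    return a * np.clip(w - w_c, 0.0, None)**beta

# Synthetic binned data standing in for <P>(w) (illustrative only)
rng = np.random.default_rng(1)
w = np.linspace(40.0, 75.0, 70)                        # column water vapour bins [mm]
P_true = pickup(w, a=5.0, w_c=65.0, beta=0.215)        # sample parameters, not measurements
P_obs = P_true + rng.normal(scale=0.05, size=w.size)   # mock scatter

popt, pcov = curve_fit(pickup, w, P_obs, p0=(1.0, 60.0, 0.3),
                       bounds=([0.0, 0.0, 0.0], [np.inf, 100.0, 2.0]))
a_fit, wc_fit, beta_fit = popt
print(a_fit, wc_fit, beta_fit)   # recovers the input parameters to within the mock noise
```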
To test the degree to which curves from different regions \\(i\\) collapse, we re-scaled the \\(w\\)-values in Fig. 1 by factors \\(f_{w}^{i}\\), reflecting the non-universality of \\(w_{c}\\) and \\(\\left\\langle P\\right\\rangle(w)\\) and \\(\\sigma_{P}^{2}(w)\\) by \\(f_{P}^{i}\\) and \\(f_{\\sigma^{2}}^{i}\\), respectively (setting Western Pacific factors to one). For visual clarity, the data collapse in Fig. 1 is shown only for the Eastern and Western Pacific--climatically very different regions. Similar agreement occurs for other regions (steps in the rescaling and figures for all regions are provided in the Supplementary Information). The exponent \\(\\beta\\) seems to be universal and independent of the climatic region. In the inset to Fig. 1 we show the average precipitation as a function of the reduced water vapour \\(\\Delta w\\equiv(w-w_{c})/w_{c}\\) in a double-logarithmic plot. Importantly, power laws fitted to these distributions all have the same exponent (slope) to within \\(\\pm 0.02\\). The data points in Fig. 1 represent the entire observational period, including all observed SSTs. Conditioning averages by SST ranges yields similar results (see Fig. 3 and Supplementary Information), reducing the subcritical part of the curves slightly. We define the susceptibility \\(\\chi(w;L)\\) via the variance of the order parameter \\(\\sigma_{P}^{2}\\): \\[\\chi(w;L)=L^{d}\\sigma_{P}^{2}(w;L), \\tag{2}\\] where \\(d\\) denotes the dimensionality of the system and \\(L\\) the spatial resolution. Fig. 1 shows a suggestive increase in \\(\\sigma_{P}^{2}\\) near \\(w_{c}\\), and indicates that standard methods for critical phenomena can sensibly be applied. Next we test for finite-size scaling. Because our system size cannot be changed, we identify the spatial data resolution \\(L\\) as the relevant length scale. Changing \\(L\\) has Figure 1: **Order parameter and susceptibility.** The main figure shows the collapsed (see text) precipitation rates \\(\\left\\langle P\\right\\rangle(w)\\) and their variances \\(\\sigma_{P}^{2}(w)\\) for the tropical Eastern (red) and Western (green) Pacific as well as a power-law fit above the critical point (solid line). The inset displays on double-logarithmic scales the precipitation rate as a function of reduced water vapour (see text) for Western Pacific (green, 120E to 170W), Eastern Pacific (red, 170W to 70W), Atlantic (blue, 70W to 20E), and Indian Ocean (pink, 30E to 120E). Data are shifted by a small arbitrary factor for visual ease. The straight lines are to guide the eye. They all have slope 0.215, fitting the data from all regions well. the effect of taking averages over different numbers of degrees of freedom and allows one to investigate the degree of spatial correlation. The finite size scaling ansatz for the susceptibility is \\[\\chi(w;L)=L^{\\gamma/\ u}\\tilde{\\chi}(\\Delta wL^{1/\ u}), \\tag{3}\\] defining \\(\\gamma\\) and \\(\ u\\) as the standard critical exponents and the usual finite-size scaling function \\(\\tilde{\\chi}(x)\\), constant for small arguments \\(|x|\\ll 1\\) and decaying as \\(|x|^{-\\gamma}\\) for large arguments \\(|x|\\gg 1\\).[22] The variance \\(\\sigma_{P}^{2}(w;L)\\) is affected by uncertainties in \\(w\\) and \\(w_{c}\\), making precise quantification of \\(\\chi(w;L)\\) difficult. We therefore do not estimate \\(\\gamma\\) from the \\(w\\)-dependence of \\(\\chi(w;L)\\), corresponding to large arguments \\(|x|\\) in Eq. 
The variance of the average \\(\\left\\langle P\\right\\rangle(w;L)\\) over \\(L^{d}\\) independent degrees of freedom decreases as \\(\\sigma_{P}^{2}(w;L)\\propto L^{-d}\\). In a critical system, however, the diverging bulk correlation length \\(\\xi\\propto(\\Delta w)^{-\\nu}\\gg L\\) (small argument in Eq. (3)) prohibits the assumption of independence. In this case Eq. (3) with Eq. (2) yields \\[\\sigma_{P}^{2\\,{\\rm max}}(L)\\propto L^{\\gamma/\\nu-d}. \\tag{4}\\] Coarsening the spatial resolution of the data, we find in Fig. 2 that \\(\\sigma_{P}^{2\\,{\\rm max}}(L)\\) scales roughly as \\(L^{-\\lambda}\\), with \\(\\lambda=0.46(4)\\). This suggests the exponent ratio \\(\\gamma/\\nu=1.54(4)\\). At criticality, the spatial decay of correlations between order parameter fluctuations becomes scale-free.[20] This is equivalent to a non-trivial power-law dependence of the order-parameter variance on \\(L\\) (see Supplementary Information for details and conditions). Hence, Fig. 2 indicates a scale-free correlation function of fluctuations in the rain rate in the range of 25 km to 200 km. This suggests that the meteorological features known as mesoscale convective systems[23] are long-range correlation structures akin to critical clusters.[24] Synoptic inspection indicates that the high rain rate phase and critical region of Fig. 1 come substantially from points within such complexes (examples are provided in the Supplementary Information). The question of self-organisation towards the critical point of the transition is addressed by displaying the residence times of the system in Fig. 3. This is the number of observations in the 5-year period where the system was found at a given level of water vapour. A slowly driven system would be expected to spend a significant amount of time in the low-\\(w\\) phase because when it fluctuates into this phase, _e.g._ due to some large-scale event, it takes a long time to recover. Therefore the distribution decreases slowly towards low values of \\(w\\). The fast dissipation mechanism, on the other hand, ensures that the system leaves the high-\\(w\\) regime relatively quickly when it fluctuates into it. Consequently the distribution decreases rapidly towards large values of \\(w\\). For the properties of rainfall, the part of the distribution in Fig. 3 comprised only of observations with rainfall is of interest, seen as the blue line in Fig. 3. We note that the system is most likely to be found near the beginning of the intense precipitation regime. Almost the entire weight of the distribution of rainy times is concentrated here. Meteorologically, these results suggest a means to redefine and extend convective QE, both empirically and theoretically. Figure 2: **Finite-size scaling.** The variance of the order parameter \\(\\sigma_{P}^{2}(w)\\) as a function of \\(w\\), rescaled with \\(L^{0.42}\\) for system sizes \\(0.25^{\\circ}\\), \\(0.5^{\\circ}\\), \\(1^{\\circ}\\), and \\(2^{\\circ}\\) in the Western Pacific. From \\(w\\approx 57\\) mm, this produces a good collapse. The inset shows that away from the critical point, up to \\(w\\approx 40\\) mm a trivial rescaling with \\(L^{d=2}\\) works adequately. This suggests that the non-trivial collapse is indeed a result of criticality.
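The exponent extraction of Eq. (4) amounts to a straight-line fit on log-log axes. A minimal sketch is given below; the peak-variance values are hypothetical placeholders (chosen only to illustrate the fit), not the values obtained from the coarse-grained data.

```python
# Sketch of Eq. (4): fit sigma_P^2,max(L) ~ L**(-lambda) on log-log axes and
# use gamma/nu = d - lambda.  The (L, sigma2_max) pairs are assumed numbers.
import numpy as np

L = np.array([0.25, 0.5, 1.0, 2.0])               # resolution in degrees
sigma2_max = np.array([1.000, 0.727, 0.529, 0.384])  # placeholder peak variances

slope, _ = np.polyfit(np.log(L), np.log(sigma2_max), 1)
lam, d = -slope, 2
print("lambda   ~", round(lam, 2))
print("gamma/nu ~", round(d - lam, 2))
```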
Figure 3: **Residence times.** The number of times \\(N(w)\\) an atmospheric pixel of \\(0.25^{\\circ}\\times 0.25^{\\circ}\\) was observed at water vapour \\(w\\) in the western Pacific, given a sea surface temperature within a \\(1^{\\circ}\\)C bin at \\(30^{\\circ}\\)C. The green and blue lines show residence time for all points and precipitating points, respectively. The red line shows the order-parameter pick-up \\(\\left\\langle P\\right\\rangle(w)\\) for orientation (precipitation scale on the right). In its simplest application, QE assumes that the relationship among atmospheric column thermodynamic variables is pinned close to the point where deep convection and precipitation set in. Fig. 3 shows this to be a reasonable first approximation, but it also implies associated critical phenomena. A loss term \\(\\left\\langle P\\right\\rangle(w)\\) of the form of Eq. (1) implies the absence of any well-defined convective time scale. Scale-free distributions of event sizes[5] and the spatial correlation behaviour seen in Fig. 2 may result from this proximity to an apparent continuous phase transition. These findings beg for a simple SOC-type model of the atmospheric dynamics responsible for the critical behaviour. While the physics must conform with recent cloud-resolving model analysis of mesoscale aggregation[11; 25], our results point to the key role of excitatory short-range interactions, essential for critical phenomena of the type seen here. This study advances our understanding of SOC as a critical phenomenon, identifying the underlying phase transition and associated critical phenomena. Beyond scale-free event size distributions it furnishes direct evidence, for the first time, for an underlying phase transition in a physical system. _Methods --_ Data are from the TMI (TRMM microwave imager), processed by Remote Sensing Systems (RSS). The spatial resolution reflects the footprint of the instrument. As with any satellite retrieval product, it is necessary to consider whether the algorithm assumptions could impact the results. The microwave retrieval algorithm is that used on Special Sensor Microwave Imager (SSM/I) data.[26] The combination of four microwave channels permits independent retrieval of water vapour and condensed-phase water (with SST and surface wind speed), while an empirical relation is used to partition cloud water and rain. Column water vapour validates well against in situ sounding data, which also show that daily variations are largely associated with the lower troposphere above the atmospheric boundary layer.[9] Validation of TMI rain rate against space-borne precipitation radar (PR) at sub-daily time scales in the tropical Western Pacific[27] shows TMI overestimating rain rate but with an approximately linear relationship to PR. We have performed a number of checks to verify that results are not substantially impacted by a high rain rate cutoff in the algorithm (25 mm/h), including comparison to regions where cutoff occurrences are very low, such as the eastern Pacific (Fig. 1). The clearest check is that the essential features are identical for the cloud liquid water, whose measurement cutoff of 2.5 mm is never reached (see Supplementary Information). SST data here are averages over non-flagged neighbours in space and time, since SST is not retrieved at high rain rates. The critical value \\(w_{c}\\) is determined by an iterative procedure with an initial guess, followed by a fit to Eq. (1) above \\(w_{c}\\). Error bars in Fig.
3 are standard errors of the variance \\(\\sigma_{P}^{2}(w;L)\\), determined via the zeroth, second and fourth moments of \\(P(w)\\). Individual measurements of \\(P(w)\\) are considered independent, which holds well between satellite overpasses, though not within individual tracks. ## References * (1) Klein, M. J. & Tisza, L. Theory of Critical Fluctuations _Phys. Rev._**76**, 1861-1868 (1949). * (2) Dickman, R., Vespignani, A. & Zapperi, S. Self-organized criticality as an absorbing-state phase transition. _Phys. Rev. E_**57**(5), 5095-5105 (1998). cond-mat/9712115 * (3) Dickman, R. _et al._ Critical behaviour of a one-dimensional fixed-energy stochastic sandpile. _Phys. Rev. E_**64**, 056104 (2001). cond-mat/0101381 * (4) Christensen, K. _et al._ Avalanche Behavior in an Absorbing State Oslo Model. _Phys. Rev. E_**70**, 067101 (2004). cond-mat/0405454 * (5) Peters, O., Hertlein, C. & Christensen, K. A complexity view of rainfall. _Phys. Rev. Lett._**88**, 018701 January (2002). cond-mat/0201468 * (6) Arakawa, A. & Schubert, W. H. Interaction of a cumulus cloud ensemble with the large-scale environment, Part I. _J. Atmos. Sci._, **31**, 674-701 (1974). * (7) Bak, P., Tang, C. & Wiesenfeld, K. Self-Organized Criticality: An Explanation of 1/f Noise. _Phys. Rev. Lett._**59**(4), 381-384 (1987). * (8) Christensen, K. & Moloney, N. _Complexity and Criticality_ (Imperial College Press, 2005). * (9) Bretherton, C. S., Peters, M. E. & Back, L. E. Relationships between Water Vapor Path and Precipitation over the Tropical Oceans. _J. Climate_**17**, 1517-1528 (2004). * (10) Parsons, D. B., Yoneyama, K. & Redelsperger, J.-L. The evolution of the tropical western Pacific ocean-atmosphere system following the arrival of a dry intrusion. _Q. J. Roy. Met. Soc._, **126**, 517-548 (2000). * (11) Tompkins, A. M. Organization of Tropical Convection in Low Vertical Wind Shears: The Role of Water Vapor. _J. Atmos. Sci._, **58**, 529-545 (2001). * (12) Grabowski, W. W. MJO-like Coherent Structures: Sensitivity Simulations Using the Cloud-Resolving Convection Parameterization (CRCP), _J. Atmos. Sci._, **60**, 847-864 (2003). * (13) Derbyshire, S. H. _et al._ Sensitivity of moist convection to environmental humidity. _Q. J. Roy. Met. Soc._, **130**, 3055-3079 (2005). * (14) Marro, J. and Dickman, R. _Nonequilibrium Phase Transitions in Lattice Models._ (Cambridge University Press, 1999). * (15) Pruessner, G. and Peters, O. Absorbing state and Self-Organized Criticality: Lessons from the Ising Model. _Phys. Rev. E_**73**, 025106 (2006). cond-mat/0411709. * (16) Xu, Kuan-man & Emanuel, K. A. Is the tropical atmosphere conditionally unstable? _Mon. Wea. Rev._, **117**, 1471-1479 (1989). * (17) Arakawa, A. The cumulus parameterization problem: Past, present, and future. _J. Climate_, **17**, 2493-2525 (2004). * (18) Emanuel, K. A., Neelin, J. D. & Bretherton, C. S. On large-scale circulations in convecting atmospheres. _Q. J. Roy. Met. Soc._, **120**, 1111-1143 (1994). * (19) Neelin, J. D., & Zeng, N. A quasi-equilibrium tropical circulation model-formulation. _J. Atmos. Sci._**57**, 1741-1766 (2000). * (20) Yeomans, J. _Statistical Mechanics of Phase Transitions._ (Oxford University Press, 1992). * (21) Fisher, M. E. & Barber, M. N. Scaling Theory for Finite-Size Effects in the Critical Region _Phys. Rev. Lett._, **28** (23), 1516-1519 (1972). * (22) Privman, V., Hohenberg, P. C., & Aharony, A. In _Phase Transitions and Critical Phenomena,_ (eds Domb, C. & Lebowitz, J. L.), volume 14, chapter 1, 1-134. 
(Academic Press, New York, 1991). * (23) Houze, R. A., _Cloud Dynamics_. (Academic Press, 1993). * (24) Stauffer, D. and Aharony, A. _Introduction to Percolation Theory, 2nd ed._ (Taylor and Francis, London, 1992). * (25) Bretherton, C. S., Blossey, P. N. & Khairoutdinov, M. An energy balance analysis of deep convective self-aggregation above uniform SST. _J. Atmos. Sci._, **62**, in press. * (26) Wentz, F. J. & Spencer, R. W. SSM/I rain retrievals within a unified all-weather ocean algorithm. _J. Atmos. Sci._**56**, 1613-1627 (1998). * (27) Ikai, J. & Nakamura, K. Comparison of rain rates over the ocean derived from TRMM microwave imager and precipitation radar. _J. Atmos. Oceanic Technol._, **20**, 1709-1726 (2003). Supplementary Information is provided at www.nature.com/nphys/journal/v2/n6/abs/nphys314.html. This work was supported under National Science Foundation grant ATM-0082529 and National Oceanic and Atmospheric Administration grants NA05OAR4310013 (JDN and OP) and the US Department of Energy (W-7405-ENG-35) (OP). We thank D. Sornette for connecting the authors, and the RSS rain team for discussion.
Critical phenomena near continuous phase transitions are typically observed on the scale of wavelengths of visible light[1]. Here we report similar phenomena for atmospheric precipitation on scales of tens of kilometers. Our observations have important implications not only for meteorology but also for the interpretation of self-organized criticality (SOC) in terms of absorbing-state phase transitions, where feedback mechanisms between order- and tuning-parameter lead to criticality.[2] While numerically the corresponding phase transitions have been studied,[3; 4] we characterise for the first time a physical system believed to display SOC[5] in terms of its underlying phase transition. In meteorology the term quasi-equilibrium (QE)[6] refers to a state towards which the atmosphere is driven by slow large-scale processes and rapid convective buoyancy release. We present evidence here that QE, postulated two decades earlier than SOC[7], is associated with the critical point of a continuous phase transition and is thus an instance of SOC.
# A revisit to the GNSS-R code range precision O. Germain and G. Ruffini Starlab, C. de l'Observatori Fabra s/n, 08035 Barcelona, Spain, [http://starlab.es](http://starlab.es) Contact: [email protected] [email protected] ## I Introduction GNSS-R, the use of Global Navigation Satellite Systems (GNSS) reflected signals is a powerful and potentially disruptive technology for remote sensing: wide coverage, passive, precise, long-term, all-weather and multi-purpose. GNSS emit precise signals which will be available for decades as part of an emerging infrastructure resulting from the enormous effort invested in GPS, GLONASS, Galileo and augmentation systems. A key advantage of GNSS-R is its \"multistatic\" character: unlike monostatic systems, a single receiver will collect information from a simultaneous set of reflection points associated to GNSS emitters. A system in low Earth orbit capable of collecting GPS, Galileo and GLONASS data would potentially be combing the surface with more than a dozen reflection tracks at the same time (for a review, see [12]). An important aspect is that GNSS signals are very weak as they were not designed for radar applications; yet they contain a wealth of information. For this reason, signal processing plays an important role. The first detection of GNSS signals from space was documented in [11]. More recently, GPS-R L1 C/A signals have been successfully detected from a dedicated experiment in space using a moderate gain antenna [1], complementing a large number of experiments from aircraft and stratospheric balloons. The resulting data will be used to further validate models. The reflection process affects the signal in several ways, at the same time degrading (from the point of view of detection) and loading it with information from the reflecting surface. The waveform amplitude is normally reduced, the shape distorted and signal coherence mostly lost. While GNSS-R cannot provide the precision of dedicated radar altimetry missions, it offers a significant advantage thanks to its multistatic character. The impact of GNSS-R altimetry data to global circulations models has been studied through simulations, with very promising results [13]. Another recent impact study has focused on the potential of GNSS-R to detect Tsunami's [14]. A dedicated GNSS altimetry system could provide timely warnings, potentially saving many lives. As described in [15], simulations have indicated that a global 100% tsunami detection rate in less than two hours is possible with a ten satellite GNSS-R constellation. Altimetry in GNSS-R can be carried out in two general ways, depending on the ranging principle used. In code altimetry, our focus here, the code is used for ranging with the direct and reflected signals. In phase altimetry, the phase of the signal is used. All of this is rather similar to normal GNSS processing. The main difference is that the reflected signal is affected by the reflection process, which generally distorts the triangular waveform shape of the return and renders the reflected signal very incoherent. This makes the ranging task rather challenging. ## II Range precision and altimetry Contrarily to classical radar altimetry, range precision is a dominant factor in the error budget for a GNSS-R code-altimetry space mission, due to the much lower modulation bandwidth (1 MHz or 10 MHz for the GPS C/A and P codes respectively). 
If the direct signal error is considered negligible compared with the reflected signal error, the altimetry precision \\(\\sigma_{k}\\) can be written simply as a function of the reflected signal range precision \\(\\sigma_{R}\\): \\[\\sigma_{k}=\\frac{\\sigma_{R}}{2\\sin\\varepsilon}, \\tag{1}\\] where \\(\\varepsilon\\) is the transmitter elevation angle. [11] proposed a simple approach to assess \\(\\sigma_{R}\\) and since then, the majority of space mission feasibility studies (e.g. the ESA PARIS and STERNA studies) rely on this reference as an approximation. However, this model is known to neglect important aspects--notably speckle--and a re-evaluation of the matter is necessary. Section III presents a critical review of the state of the art and discusses the model validity. Section IV introduces the Cramer-Rao Bound (CRB) theory which constitutes the foundation of our analysis approach. This methodology is then applied to both the direct and reflected GNSS signals to derive closed-form expressions of range precision in sections V and VI respectively. Finally, the impact of new performance predictions is illustrated in section VII where mission scenarios are discussed in the light of two classes of applications. ## III State of the Art Review The approach proposed in [10] basically assumes that range precision for the reflected signal can be evaluated (to first order) in the same way as for the direct signal. The reflected waveform is assumed to be re-tracked using the algorithm of [14]. This algorithm estimates the direct waveform's delay using three points (the peak and its two immediate neighbours) to determine the peak sub-sample position. In the limit of low thermal noise the precision of this algorithm turns out to be \\[\\sigma_{x}\\approx\\frac{1}{\\sqrt{2}}\\frac{\\tau_{c}}{snr}\\sqrt{1-C(2)}\\,,\\] **Eq. 2** where \\(\\tau_{c}\\) is the chip length, \\(C(2)\\) is the correlation factor between amplitudes separated by two lags and \\(snr\\) is the signal to noise ratio, defined as the ratio between average and standard deviation of the peak amplitude. The approach proposed by [10] suffers from several limitations. First, it is valid for relatively high SNR only. Second, the derived expression is tied to the choice of a particular estimator. It cannot then be considered applicable to others and as such, it does not address the general case of retracking where an arbitrary number of waveform points is fitted by a model. Third, the derived expression (and associated estimator) assumes a direct signal statistical model whereas the reflected signal is quite different. The waveform's fluctuations are caused by thermal noise but also by speckle. Besides, the waveform's shape is far from the triangular aspect of the direct signal. Finally, the retracking will presumably not be done on the peak of the waveform (which is known to be an unstable and badly localized feature of the reflected signal) but rather on its leading edge. For these reasons, it appears necessary to re-assess the matter in a more systematic fashion, using appropriate tools from Estimation Theory. ## IV Cramer-Rao Bound The context of the present problem is Estimation Theory. The CRB methodology allows predicting the best achievable performance in estimation problems for which the stochastic nature of the observation can be described by a probability distribution function (PDF).
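Before developing the CRB formalism, the two relations quoted above are easy to evaluate numerically. The sketch below does so for the GPS C/A code; the chip length follows from the 1.023 Mchip/s rate, while the values of \\(snr\\), \\(C(2)\\) and the elevation are purely illustrative assumptions.

```python
# Numeric illustration of Eq. (1) (height error from range error) and of the
# state-of-the-art three-point re-tracker precision, Eq. (2).
import numpy as np

c = 299_792_458.0            # m/s
tau_c = c / 1.023e6          # GPS C/A chip length, ~293 m

def thomas_precision(snr, corr2):
    """Eq. (2): low-noise precision of the 3-point peak re-tracker [m]."""
    return tau_c / (np.sqrt(2.0) * snr) * np.sqrt(1.0 - corr2)

def altimetric_precision(sigma_range, elevation_deg):
    """Eq. (1): map reflected-signal range error to height error [m]."""
    return sigma_range / (2.0 * np.sin(np.radians(elevation_deg)))

sigma_r = thomas_precision(snr=10.0, corr2=0.5)          # assumed values
print("range precision  :", round(sigma_r, 1), "m")
print("height precision :", round(altimetric_precision(sigma_r, 90.0), 1), "m")
```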
Formally, the problem amounts to estimating a parameter \\(\\theta\\) (e.g., the delay) from a random observation \\(X\\) (the complex waveform, a vector), knowing its PDF \\(p(X,\\theta)\\). Then, the RMS precision of any unbiased estimator of \\(\\theta\\) has a lower bound (see e.g. [11]): \\[CRB=\\left[-\\left\\langle\\frac{\\partial^{2}}{\\partial\\theta^{2}}\\log p(X,\\theta)\\right\\rangle\\right]^{-1/2}.\\] **Eq. 3** Focusing on complex, vectorial Gaussian-distributed signals, the PDF is given by \\[p(X,\\theta)=\\frac{1}{\\pi^{\\,\\mathrm{card}(X)}\\left|\\Gamma\\right|}\\exp[-(X-m)^{*}\\cdot\\Gamma^{-1}\\cdot(X-m)],\\] **Eq. 4** where \\(m=\\left\\langle X\\right\\rangle\\) and \\(\\Gamma=\\left\\langle(X-m)(X-m)^{*}\\right\\rangle\\) are the mean vector and covariance matrix of the complex signal vector \\(X\\) respectively. In this case, the CRB expression is \\[CRB^{-2}=\\frac{\\partial^{2}}{\\partial\\theta^{2}}\\log\\left|\\Gamma\\right|+2\\left(\\frac{\\partial m}{\\partial\\theta}\\right)^{*}\\Gamma^{-1}\\left(\\frac{\\partial m}{\\partial\\theta}\\right)+\\sum_{ij}\\Gamma_{ij}^{*}\\,\\frac{\\partial^{2}}{\\partial\\theta^{2}}\\Gamma_{ij}^{-1}\\,.\\] **Eq. 5** This expression is the starting point for evaluating the GNSS direct/reflected range precisions, as developed in the two following sections. ## V Direct Signal Range Precision The RF signal received by the direct antenna can be seen as an attenuated (\\(\\alpha\\) factor) and delayed (by \\(\\theta\\)) version of the GNSS code \\(C\\) emitted by the transmitter, and corrupted by additive thermal noise \\(\\sigma b\\) (where \\(b\\) is a complex zero-mean unit-variance white-noise Gaussian random process and \\(\\sigma\\) a real scaling factor). The waveform is produced by correlating this input signal with a clean replica of the GNSS down-converted signal, leading to the complex waveform \\[X=\\left[\\alpha\\cdot C_{\\theta}+\\sigma\\cdot b\\right]\\otimes C\\,,\\] **Eq. 6** defined along the time-delay axis \\(\\tau_{i}\\) (i.e. the correlation lag vector). Introducing the GNSS code autocorrelation function, \\[\\chi=C\\otimes C\\,,\\] **Eq. 7** it is immediate to write expressions for the mean complex waveform and its covariance matrix: \\[m_{i}=\\alpha\\cdot\\chi\\!\\left(\\frac{\\tau_{i}-\\theta}{\\tau_{c}}\\right),\\] **Eq. 8** \\[\\Gamma_{ij}=2\\sigma^{2}\\chi\\!\\left(\\frac{\\tau_{i}-\\tau_{j}}{\\tau_{c}}\\right).\\] **Eq. 9** Having a Gaussian-distributed signal allows us to use Eq. 5, and plugging in the mean and covariance expressions leads to the CRB for the direct signal delay estimation, that is, the best possible performance for direct signal range precision: \\[CRB^{-2}=\\left(2-\\frac{\\pi}{2}\\right)\\frac{SNR_{1}^{2}}{\\tau_{c}^{2}}\\sum_{ij}\\partial\\chi\\!\\left(\\frac{\\tau_{i}}{\\tau_{c}}\\right)\\partial\\chi\\!\\left(\\frac{\\tau_{j}}{\\tau_{c}}\\right),\\] **Eq. 10** where \\(SNR_{1}\\) is the one-shot thermal signal-to-noise ratio. Note that for the direct signal this SNR definition can be linked to the previous one, \\[SNR_{1}\\approx\\sqrt{2}\\,snr\\,.\\] **Eq. 12** The CRB expression can now be compared to the state of the art model. For this purpose, Eq. 10 should be further simplified by adopting the assumptions that \\(\\chi\\) is a triangle function and that only three points of the waveform are retained for retracking (the peak and its two immediate neighbours). Doing this, we recover Eq. 2.
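A quick numerical cross-check of the direct-signal bound can be built from the model of Eqs. (6)-(9). The sketch below uses an idealised triangular code autocorrelation and illustrative amplitude and noise values; since the covariance does not depend on the delay in this model, only the middle term of the CRB expression contributes.

```python
# Rough numerical sketch (not the authors' code) of the direct-signal delay CRB:
# build m_i and Gamma_ij of Eqs. (8)-(9) and evaluate the Fisher information
# 2*(dm/dtheta)^T Gamma^{-1} (dm/dtheta) by finite differences.
import numpy as np

tau_c = 293.0                                   # GPS C/A chip length [m]
lags = np.arange(-300.0, 300.0 + 1e-9, 15.0)    # waveform sampled every 15 m
chi = lambda x: np.clip(1.0 - np.abs(x), 0.0, None)   # idealised triangle

alpha, sigma, theta = 1.0, 0.1, 0.0             # illustrative amplitude, noise, delay
m = lambda th: alpha * chi((lags - th) / tau_c)                        # Eq. (8)
Gamma = 2.0 * sigma**2 * chi((lags[:, None] - lags[None, :]) / tau_c)  # Eq. (9)
Gamma += 1e-12 * np.eye(lags.size)              # tiny jitter for numerical stability

dm = (m(theta + 0.5) - m(theta - 0.5)) / 1.0    # finite-difference dm/dtheta (1 m step)
fisher = 2.0 * dm @ np.linalg.solve(Gamma, dm)  # Gamma is theta-independent here
print("direct-signal delay CRB ~", round(1.0 / np.sqrt(fisher), 2), "m")
```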
Recovering Eq. 2 in this way illustrates the strength of the CRB approach for deriving generic performance expressions adaptable to a particular algorithm, and also proves that the Thomas estimator is an efficient one (i.e. reaching its Cramer-Rao bound) in the limit of high enough SNR. Figure 1 gives values of the direct signal (GPS C/A code) range precision as a function of 1/SNR (i.e. NSR). The waveform is sampled from -300m to 300m with a step of 15m (i.e. 20 MHz). As expected, the CRB approach is in full agreement with the state of the art model. To further validate these results, we have performed Monte-Carlo simulations. Realizations of the model of Eq. 6 have been re-tracked using two estimators: the Maximum Likelihood Estimator (MLE), known to be efficient, and the Thomas algorithm. MLE results match well to the theoretical CRB, except for very low SNR where a slight departure is observed. As expected, the Thomas algorithm is efficient for high SNR but deviates from optimality at severe noise levels. ## VI Reflected Signal Range Precision The expression for the complex reflected waveform involves two contributions: one is the GNSS electric field scattered by the sea surface and the other is thermal noise, as for the direct signal, \\[X=\\left(\\alpha\\cdot U+\\sigma\\cdot b\\right)\\otimes C=\\alpha\\cdot u+\\sigma\\cdot b\\otimes C\\,,\\] **Eq. 13** where \\(U\\) is the scattered electric field and \\(u\\) the electric field after correlation with a signal replica. From space and for the majority of sea-states, it can reasonably be assumed that the sea-surface scattering contribution follows fully-developed speckle statistics, that is, a complex, vectorial, zero-mean, Gaussian PDF. Since thermal noise is also Gaussian, the reflected complex waveform is Gaussian distributed with parameters \\[m_{i}=0\\,,\\] **Eq. 14** \\[\\Gamma_{ij}=\\left\\langle u_{i}\\,u_{j}^{*}\\right\\rangle+\\frac{\\pi}{4-\\pi}\\frac{1}{SNR_{1}^{2}}\\chi\\left(\\frac{\\tau_{i}-\\tau_{j}}{\\tau_{c}}\\right)\\,.\\] **Eq. 15** The CRB expression immediately follows: \\[CRB^{-2}=\\frac{\\partial^{2}}{\\partial\\theta^{2}}\\log\\left|\\Gamma\\right|+\\sum_{ij}\\Gamma_{ij}^{*}\\,\\frac{\\partial^{2}}{\\partial\\theta^{2}}\\,\\Gamma_{ij}^{-1}\\,.\\] **Eq. 16** The tricky part is now to evaluate the covariance of the scattered filtered field \\(\\left\\langle u_{i}\\,u_{j}^{*}\\right\\rangle\\). The starting point is the EM integral equation of [22] modelling the scattered filtered field \\(u\\). Now, we emphasize that the critical feature for our purpose is the waveform leading edge, which is obtained by integration over sea-surface scatterers in the vicinity of the specular point. In this regime and from space, the signal covariance is largely dominated by the radar ambiguity function, i.e. by the GNSS autocorrelation. In other words, we assume that the antenna pattern and the glistening zones are much larger than the first-chip zone. In addition, we further simplify the study by limiting ourselves to reflections occurring at nadir. Under these assumptions, the covariance of the scattered filtered field, in the leading edge regime, simplifies to \\[\\left\\langle u_{i}\\,u_{j}^{*}\\right\\rangle=\\int_{0}^{\\infty}\\chi\\left[\\frac{\\tau_{i}-\\theta-\\xi}{\\tau_{c}}\\right]\\cdot\\chi\\left[\\frac{\\tau_{j}-\\theta-\\xi}{\\tau_{c}}\\right]d\\xi\\,.\\] **Eq. 17** Figure 2 illustrates the \\(\\Gamma\\) covariance matrix.
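The structure of this covariance model is simple to reproduce numerically: the speckle part of Eq. (15) is the overlap integral of Eq. (17), and the thermal part reuses the code autocorrelation. The sketch below builds such a matrix with a triangular autocorrelation; the value of \\(SNR_{1}\\), the sampling and the display normalisation are illustrative assumptions, not the exact conventions used for Fig. 2.

```python
# Sketch of the reflected-signal covariance model of Eqs. (15) and (17).
import numpy as np

tau_c, theta, snr1 = 293.0, 0.0, 3.0            # chip length [m], delay, assumed SNR_1
lags = np.arange(-400.0, 300.0 + 1e-9, 15.0)    # sampling quoted in the text
chi = lambda x: np.clip(1.0 - np.abs(x), 0.0, None)

# Speckle part, Eq. (17): finite integration grid standing in for 0..infinity.
xi = np.arange(0.0, 4.0 * tau_c, 1.0)
def speckle_cov(ti, tj):
    return np.sum(chi((ti - theta - xi) / tau_c) * chi((tj - theta - xi) / tau_c)) * 1.0

speckle = np.array([[speckle_cov(ti, tj) for tj in lags] for ti in lags])
speckle /= speckle.max()                         # normalisation chosen for display only

# Thermal part of Eq. (15); the relative weighting here is illustrative.
thermal = np.pi / (4.0 - np.pi) / snr1**2 * chi((lags[:, None] - lags[None, :]) / tau_c)
Gamma = speckle + thermal
print("Gamma shape:", Gamma.shape, " peak value:", round(float(Gamma.max()), 2))
```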
We highlight again that this model is acceptable for the description of the leading edge but cannot render the behaviour of the waveform's trailing edge (which is affected by the finite size of the antenna beam and glistening zone). Figure 1: Direct signal range precision vs. one-shot thermal NSR, given by the state of the art model (Eq. 2), the CRB approach (Eq. 10) and Monte-Carlo simulations conducted with the MLE and Thomas estimators. Figure 3 provides values for the reflected signal (GPS C/A code) range precision as a function of NSR. The waveform is sampled from -400m to 300m with a step of 15m (i.e., \\(\\sim\\)20 MHz). The re-assessment leads to more pessimistic results than in previous analyses: typically, the range precision computed with the CRB approach is predicted \\(\\sim\\)4 times worse. Besides, it is worth noting that the asymptotic range precision for infinite SNR is now predicted finite. Even without thermal noise (e.g., with a very large antenna), the waveform is still degraded by speckle and this remains a limitation for delay estimation. ## VII Altimetry Scenario Study The impact of this result is now discussed. A simple error budget is assessed for two generic space missions and compared to the requirements of space altimetry applications potentially suitable for GNSS-R. The two proposed missions receive the GPS C/A code and are characterized by their altitude (500 or 700 km) and antenna gain (28 or 34 dB). A link budget model developed elsewhere [CNES ALT GNSSR, 2006] allows computing the expected thermal SNR and the coherence time of the reflected signal, which is needed to compute the number of independent samples in one second. The altimetric precision is then derived according to Eq. 1. Table 2 shows user requirements for mesoscale oceanography and tsunami detection, expressed as the altimetric and spatial scales of signatures to be observed. The two missions using the C/A code meet the requirements of strong tsunami detection but not the ones of mesoscale oceanography. The same exercise has been conducted for a GNSS code with a ten times broader bandwidth, namely the GPS P code (Table 3). The performance improvement is rather clear and now becomes compatible with the requirements of mesoscale oceanography. \\begin{table} \\begin{tabular}{|l|c|c|} \\hline Estimate & Mission 1 & Mission 2 \\\\ \\hline Altitude (km) & 500 & 700 \\\\ \\hline Antenna Gain (dB) & 28 & 34 \\\\ \\hline Waveform sampling step (m) & 15 & 15 \\\\ \\hline One-shot thermal SNR (linear) & 12 & 22 \\\\ \\hline Coherence time (ms) & 0.8 & 0.9 \\\\ \\hline One-shot radar range precision (m) & 32.6 & 20.8 \\\\ \\hline One-sec nadir range precision (cm) & 92 & 62 \\\\ \\hline One-sec nadir altimetric precision (cm) & 46 & 31 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Performance of two GNSS-R space missions using the GPS C/A code (nadir case). \\begin{table} \\begin{tabular}{|l|c|c|} \\hline Estimate & Mesoscale & Strong tsunami \\\\ \\hline Altimetric scale (cm) & 5 & 20 \\\\ \\hline Spatial scale (km) & 100 & 100 \\\\ \\hline Allowed integration time (s) & 13.3 & 13.3 \\\\ \\hline 1-sec altimetry precision (cm) & 18 & 73 \\\\ \\hline \\end{tabular} \\end{table} Table 2: User requirements for applications addressed by GNSS-R altimetry. Figure 3: Reflected signal range precision vs. one-shot thermal NSR, given by the state of the art model (Eq. 2) and the CRB approach (Eq. 16). Figure 2: Illustration of the reflected-signal covariance matrix model (Eq. 15), obtained for SNR=3.
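The way the tabulated one-second values follow from the one-shot precisions can be made explicit. Under the assumption (our reading of the link-budget procedure) that the one-second precision is the one-shot value divided by the square root of the number of coherence-time samples in one second, and with nadir geometry (elevation 90 deg in Eq. (1)), the sketch below recovers the Table 1 and Table 3 entries to within rounding.

```python
# Back-of-the-envelope check of the one-second entries in Tables 1 and 3.
import numpy as np

missions = {                                   # one-shot precision [m], coherence time [s]
    "C/A, 500 km, 28 dB": (32.6, 0.8e-3),
    "C/A, 700 km, 34 dB": (20.8, 0.9e-3),
    "P,   500 km, 28 dB": (7.7, 2.5e-3),
    "P,   700 km, 34 dB": (4.3, 2.8e-3),
}
for name, (sigma_shot, t_coh) in missions.items():
    n_indep = 1.0 / t_coh                      # independent samples in 1 s
    sigma_1s = sigma_shot / np.sqrt(n_indep)   # 1-s range precision [m]
    print(f"{name}: 1-s range {100*sigma_1s:.1f} cm, "
          f"nadir altimetric {100*sigma_1s/2:.1f} cm")
```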
## VIII Conclusions In this paper, we have carried out a critical review of the state of the art model for GNSS-R range precision. The goal was to revisit the baseline assumption (known to be incorrect) that reflected and direct signals can be treated the same. A rigorous evaluation of the problem, based on the Cramer-Rao Bound methodology, has been conducted. For the direct signal, we have obtained results in agreement with the state of the art, as expected. For the reflected signal, we have shown that precision degrades, as suspected. For instance, a mission receiving the C/A code at 500km with 28dB gain would have a 1-second range precision of 1m at nadir. This is due to the impact of speckle noise and the shape change in the reflected signal. These results question the suitability of a C/A code GNSS-R mission focusing on mesoscale altimetry. The use of 1-MHz codes (e.g. GPS C/A) remains acceptable to detect strong tsunamis (20 cm over 100 km) but mesoscale oceanography (5 cm over 100 km) would be realistic only with 10-MHz codes (e.g., GPS P code). The availability of such signals as well as those with an even higher bandwidth (up to 50 MHz with the E5 signal) provided by the European Galileo system will further increase the potential of this technique [Galileo OS SIS ICD, 2006]. Future work should consolidate these results with more numerical simulations and experimental validation either using space data or adapting the model to low altitudes and take benefit of available airborne/coastal data. Finally, an in-depth study of the Galileo signal structure impact on GNSS-R is a very important future research line. ## Acknowledgments Part of this work was carried out under CNES contract. We thank C. Tison (CNES) for publication permission. ## References * [**CNES ALT GNSSR, 2006]** << Evaluation des Performances Altimetriques a partir de Signaux GNSS >>, CNES contract, 2005/2006. * [**Galileo OS SIS ICD, 2006]** Galileo Open Service, Signal in Space Interface Control Document, Draft 0, May 2006. * [**Gleason _et al., 2005]** \"Detection and Processing of Bistatically Reflected GPS Signals From Low Earth Orbit for the Purpose of Ocean Remote Sensing\", IEEE Trans. on Geoscience and Remote Sensing. * Estimation Theory\", Prentice Hall, Upper Saddle River, NJ. * [**Le Traon _et al., 2002]** \"Mesoscale Ocean Altimetry Requirements and Impact of GPS-R measurements for Ocean Mesoscale Circulation Mapping\" Abridged Starlab ESA/ESTEC Technical Report, available at [http://arxiv.org/abs/physics/0212068](http://arxiv.org/abs/physics/0212068). * [**Lowe _et al., 2002]** \"First spaceborne observation of an Earth-reflected GPS signal\", Radio Science 37(1). * [**Martin-Neira _et al., 2005]**, \"Detecting tsunamis using PARIS concept\", URSI Conf. on Microwave Remote Sensing of the Earth, Ispra, Italy, 20-21 April 2005. * [**Ruffini, 2006]** IEEE GRS March Newsletter, p. 15-21. * [**Soulat _et al., 2005]** \"PARIS mission impact analysis\", GNSSR'05 workshop, Surrey, UK, June 2005. * [**Thomas, 1995]** \"Signal Processing Theory for the Turbo Rogue Receiver\", JPL Publication 95-6. * [**Zavorotny and Voronovich, 2000]** \"Scattering of GPS signals from the ocean with wind remote sensing application\", IEEE Trans. Geoscience and Remote Sensing, 38(2):951-964. 
\\begin{table} \\begin{tabular}{|l|c|c|} \\hline \\hline \\multicolumn{1}{|l|}{Hennittore} & \\multicolumn{1}{c|}{African 1} & \\multicolumn{1}{c|}{African 2} \\\\ \\hline Altitude (km) & 500 & 700 \\\\ \\hline Antenna Gain (dB) & 28 & 34 \\\\ \\hline Waveform sampling stem (m) & 1.5 & 1.5 \\\\ \\hline One-shot thermal SNR (linear) & 4.8 & 8.7 \\\\ \\hline Coherence time (ms) & 2.5 & 2.8 \\\\ \\hline One-shot nadir range precision (m) & 7.7 & 4.3 \\\\ \\hline One-sec nadir range precision (cm) & 38 & 23 \\\\ \\hline One-sec nadir altimetric precision (cm) & 19 & 11 \\\\ \\hline \\end{tabular} \\end{table} Table 3: **Performance of two GNSS-R space missions receiving the GPS P code (nadir case).**
We address the feasibility of a GNSS-R code-altimetry space mission and more specifically a dominant term of its error budget: the reflected-signal range precision. This is the RMS error on the reflected-signal delay, as estimated by waveform retracking. So far, the approach proposed by [10] has been the state of the art to theoretically evaluate this precision, although known to rely on strong assumptions (e.g., no speckle noise). In this paper, we perform a critical review of this model and propose an improvement based on the Cramer-Rao Bound (CRB) approach. We derive closed-form expressions for both the direct and reflected signals. The performance predicted by CRB analysis is about four times worse for typical space mission scenarios. The impact of this result is discussed in the context of two classes of GNSS-R applications: mesoscale oceanography and tsunami detection.
# Casimir edge effects Holger Gies Institut fur Theoretische Physik, Philosophenweg 16, 69120 Heidelberg, Germany Klaus Klingmuller Institut fur Theoretische Physik, Philosophenweg 16, 69120 Heidelberg, Germany ## I Introduction Casimir's prediction for the force \\(F\\) per unit area \\(A\\) between two perfectly conducting infinite parallel plates at a distance \\(a\\)[1], \\[\\frac{F_{\\parallel}}{A}=-2\\gamma_{\\parallel}\\frac{\\hbar c}{a^{4}},\\quad\\gamma _{\\parallel}=\\frac{\\pi^{2}}{480}\\simeq 2.056\\times 10^{-2}, \\tag{1}\\] has a remarkable property: a straightforward dimensional analysis already fixes the powers of \\(\\hbar\\), \\(c\\), and \\(a\\) uniquely. In absence of any other dimensionful quantity, the effects of quantum fluctuations in this geometry can be summarized by a simple number: \\(2\\gamma_{\\parallel}\\). This coefficient is universal in the sense that it does not depend on the microscopic details of the interactions between the fluctuating field and the constituents of the surfaces. It is completely fixed by specifying the geometry, the nature of the fluctuating field and the type of boundary conditions. For instance, for a fluctuating real scalar field with Dirichlet boundary conditions, the parallel-plate coefficient reduces exactly to \\(\\gamma_{\\parallel}\\); the factor of 2 in Eq. (1) can be traced back to the two polarization modes of the electromagnetic field. Away from the ideal Casimir limit, corrections to Eq. (1) arise from finite conductivity, surface roughness, thermal fluctuations and deviations from the ideal geometry. All these come with additional dimensionful scales, such as plasma frequency, length scales of roughness variation, temperature or surface-curvature radii. The corrections generically cannot be predicted from dimensional analysis, but its functional dependence on the further parameters has to be computed [2; 3; 4; 5; 6; 7; 8]. The present work is devoted to an investigation of the Casimir force between disconnected rigid surfaces, which exhibits properties similar to Casimir's classic parallel-plate configuration: unique dimensional scale dependencies and universal coefficients. The first property implies that the geometry is characterized by only one length scale, such as the distance parameter \\(a\\). New Casimir configurations therefore necessarily involve edges, whose influence on the Casimir effect is an interesting and difficult question in itself. In view of the rapid progress in the fabrication and use of micro- and nano-scale mechanical devices accompanied by precision measurements of the Casimir forces in these systems [9; 10; 11; 12; 13; 14; 15], a detailed understanding of Casimir edge effects is indispensable. Straightforward computations of Casimir edge effects are conceptually complicated, since the fluctuation spectrum carries the relevant information in a subtle manner. A technique that facilitates Casimir computations from first field-theoretic principles is given by _worldline numerics_[16], combining the string-inspired approach to quantum field theory [17] with Monte Carlo methods. As a main advantage, the worldline algorithm can be formulated for arbitrary Casimir geometries, resulting in a numerical estimate of the exact answer [18]. Since the approach is based on Feynman path-integral techniques, the problem of determining the Casimir fluctuation spectrum is circumvented [19]. 
The resulting algorithms are trivially scalable, and computational efforts increase only linearly with the parameters of the numerics. Recent results obtained by worldline numerics [20] go hand in hand with those obtained by new analytical methods [21; 22; 23] which are based on advanced scattering-theory techniques; excellent agreement has been found for the experimentally important sphere-plate and cylinder-plate Casimir configurations. In the present work, we use worldline numerics to examine Casimir edge effects induced by a fluctuating scalar field, obeying Dirichlet boundary conditions (\"Dirichlet scalar\"). We compute Casimir interaction energies and forces between rigid surfaces. Our results can directly be applied to Casimir configurations in ultracold-gas systems [24] where massless scalar fluctuations exist near the phase transition. For Casimir configurations probing the electromagnetic fluctuation field, the results for the universal coefficients may quantitatively differ, but our values can be used for an order-of-magnitude estimate of the error induced by edges of a finite configuration, thus providing an important ingredient for the data analysis of future experiments. In addition to being a simple and reliable quantitative method, the worldline formalism also offers an intuitive picture of quantum-fluctuation phenomena. The fluctuations are mapped onto closed Gaussian random paths(worldlines) which represent the spacetime trajectories of virtual loop processes. The Casimir interaction energy between two surfaces can thus be obtained by identifying all worldlines that intersect both surfaces. These worldlines correspond to fluctuations that would violate the boundary conditions; their removal from the ensemble of all possible fluctuations thereby contributes to the (negative) Casimir interaction energy. The latter measures only that part of the energy that contributes to the force between rigid surfaces; possibly divergent self-energies of the single surfaces [25] are already removed. For a massless Dirichlet scalar, the worldline representation of the Casimir interaction energy reads [18; 19] \\[E=-\\frac{1}{2}\\frac{1}{(4\\pi)^{2}}\\int_{0}^{\\infty}\\frac{dT}{T^{3}}\\,\\left< \\Theta_{\\Sigma}[\\mathbf{x}]\\right>_{\\mathbf{x}}. \\tag{2}\\] The expectation value in (2) has to be taken with respect to an ensemble of closed worldlines, \\[\\langle\\dots\\rangle_{\\mathbf{x}}:=\\int_{\\mathbf{x}(T)=\\mathbf{x}(0)}\\mathcal{ D}\\mathbf{x}\\,\\dots e^{-\\frac{1}{4}\\int_{0}^{T}d\\tau\\dot{\\mathbf{x}}^{2}}, \\tag{3}\\] with implicit normalization \\(\\langle 1\\rangle_{\\mathbf{x}}=1\\). In Eq. (2), \\(\\Theta_{\\Sigma}[\\mathbf{x}]=1\\) if a worldline \\(\\mathbf{x}\\) intersects both surfaces \\(\\Sigma=\\Sigma_{1}+\\Sigma_{2}\\), and \\(\\Theta_{\\Sigma}[\\mathbf{x}]=0\\) otherwise. The worldline integral can also be evaluated locally, e.g., with the restriction to worldlines with a common center of mass, \\(\\mathbf{x}_{\\mathrm{CM}}\\), resulting in the interaction energy density \\(\\varepsilon(\\mathbf{x}_{\\mathrm{CM}})\\), \\(E=\\int d^{3}x\\varepsilon(\\mathbf{x}_{\\mathrm{CM}})\\). The interaction energy serves as a potential for the Casimir force between rigid surfaces; the force is thus obtained by simple differentiation with respect to the distance parameters. The worldline numerical algorithm corresponds to a Monte Carlo evaluation of the path integral of Eq. (3) with a discretized propertime \\(\\tau\\). 
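The basic ingredients of Eqs. (2)-(3) are easy to illustrate: closed "unit loops" are generated as Gaussian bridges, rescaled by \\(\\sqrt{T}\\), and \\(\\Theta_{\\Sigma}\\) simply asks whether a loop pierces both surfaces. The sketch below does this for two parallel planes; the increment variance chosen to encode the \\(e^{-\\int\\dot{y}^{2}/4}\\) weight, the loop counts and the sampled \\(T\\) values are assumptions of this illustration and much smaller than the production ensembles quoted later in the text.

```python
# Minimal illustration (not the production algorithm of refs. [16,18]) of the
# worldline ingredients: unit loops, sqrt(T) rescaling, and the Theta_Sigma test.
import numpy as np

rng = np.random.default_rng(2)

def unit_loops(n_loops, n_points):
    """Closed, centre-of-mass-free Gaussian loops y(t), t in [0,1]."""
    steps = rng.normal(scale=np.sqrt(2.0 / n_points), size=(n_loops, n_points, 3))
    paths = np.cumsum(steps, axis=1)
    drift = paths[:, -1:, :] * (np.arange(1, n_points + 1)[None, :, None] / n_points)
    loops = paths - drift                       # close the loop: y(1) = y(0)
    return loops - loops.mean(axis=1, keepdims=True)

def theta_sigma(loops, T, z_cm, a):
    """True if the rescaled loop intersects both planes z = 0 and z = a."""
    z = z_cm + np.sqrt(T) * loops[:, :, 2]
    return (z.min(axis=1) < 0.0) & (z.max(axis=1) > a)

loops = unit_loops(n_loops=1000, n_points=1000)
a = 1.0
for T in (0.5, 2.0, 8.0):
    hit = theta_sigma(loops, T, z_cm=0.5 * a, a=a)
    print(f"T = {T}:  <Theta_Sigma> ~ {hit.mean():.3f}")
```

The fraction of intersecting loops grows with the propertime \\(T\\), i.e. with the spatial extent of the fluctuation, which is the mechanism behind the interaction energies computed below.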
In this work, we exploit the recent algorithmic developments detailed in [26]. ## II Casimir edge configurations ### Perpendicular Plates Let us first analyze a semi-infinite plate perpendicularly above an infinite plate at a minimal distance \\(a\\), as first proposed in [19]. This configuration is illustrated in Fig. 1 together with a worldline which contributes to the Casimir interaction energy, since it intersects both plates. This configuration is translationally invariant only in the direction pointing along the edge with \\(a\\) being the only dimensionful length scale. The Casimir force per unit length \\(L\\) along the edge direction is thus unambiguously fixed by dimensional analysis, \\[\\frac{F_{\\perp}}{L}=-\\gamma_{\\perp}\\,\\frac{\\hbar c}{a^{3}}. \\tag{4}\\] Evaluating the worldline integral as outlined above, we obtain an estimate for the corresponding Casimir interaction energy density \\(\\varepsilon(\\mathbf{x})\\), a contour plot of which is given in Fig. 2. For the universal coefficient, we obtain \\[\\gamma_{\\perp}=1.200(4)\\times 10^{-2}. \\tag{5}\\] The error is below the 1% level for a path ensemble of 40 000 loops with 200 000 points per loop (ppl) each. This coefficient is in agreement with the Casimir interaction energy computed in [19]. ### Semi-infinite plate parallel to an infinite plate Next we consider a first variant of the parallel-plate configuration, where one of the plates is only semi-infinite with an edge on one side; see Fig. 3. This configuration can be viewed as an idealized limit of a real experimental situation where a smaller controllable finite plate is kept parallel above a larger fixed substrate. In this case, the dominant contribution to the force is given by the universal classic parallel-plate result of Eq. (1) with \\(A\\) being the surface area of the smaller plate. In the ideal limit of \\(A\\) as well as the edge length \\(L\\) going to infinity, the sub-leading Casimir edge effect is also universal. Dimensional analysis requires the exact force to be of the form \\[F=F_{\\parallel}-\\gamma_{\\mathrm{1si}}\\,\\frac{\\hbar c}{a^{3}}\\,L, \\tag{6}\\] Figure 1: Sketch of the perpendicular-plates configuration. The minimal distance \\(a\\) between the edge of the upper semi-infinite plate (thick solid line) and the lower infinite plate represents the only dimensionful length scale in the problem. Figure 2: Contour plot of the Casimir interaction energy density \\(\\varepsilon\\) for the perpendicular-plate configuration. The white lines mark the position of the plates to guide the eye. Ensemble parameters: 2000 loops with 10 000 ppl. where \\(F_{\\parallel}\\) denotes the parallel-plate force for the Dirichlet scalar, i.e., without the factor 2 in Eq. (1). A priori, the universal coefficient \\(\\gamma_{\\rm lsi}\\) can be positive or negative. The sign can easily be guessed within the worldline picture: owing to their spatial extent, a sizable fraction of worldlines can intersect both plates even if their center of mass is located outside the plates. This can quantitatively be verified by the energy density, the peak of which indeed extends into the outside region; see Fig. 4. This peak in the outside region contributes to the total interaction energy, implying an increase of the Casimir force compared to the pure parallel-plate formula. Therefore, the Casimir edge effect leads to further attraction, and the sign of the universal coefficient \\(\\gamma_{\\rm lsi}\\) must be positive. 
Quantitatively, we find \\[\\gamma_{\\rm lsi}=5.23(2)\\times 10^{-3}, \\tag{7}\\] again with 40 000 loops, 200 000 ppl. ### Parallel semi-infinite plates Another variant of the parallel-plate configuration is given by two parallel semi-infinite plates with parallel edges; see Fig. 5. This configuration corresponds to an idealized parallel-plate experiment where both plates have the same area size \\(A\\). In the ideal limit of infinite \\(A\\) as well as infinite edge length \\(L\\), the exact form of the force is again given by dimensional analysis, \\[F=F_{\\parallel}-\\gamma_{\\rm 2si}\\,\\frac{\\hbar c}{a^{3}}\\,L, \\tag{8}\\] equivalent to Eq. (6). Qualitatively, the situation is similar to the preceding one with one semi-infinite plate. Quantitatively, fewer worldlines in the outside as well as the inside region near the edge intersect both plates. Both aspects are visible in the plot of the interaction energy density in Fig. 6: the peak height and width is reduced near the edge both inside and outside the plates. We still observe a positive universal coefficient, \\[\\gamma_{\\rm 2si}=2.30(1)\\times 10^{-3} \\tag{9}\\] (93 000 loops, 500 000 ppl), which is a bit less than half as big as the preceding case with one semi-infinite plate. Again, the Casimir edge effect increases the force in comparison with the pure parallel-plate estimate \\(F_{\\parallel}\\). ## III Edge-Configuration Estimates The universal results for the idealized configurations presented above can immediately be used to derive estimated predictions for further Casimir configurations. Figure 4: Contour plot of the Casimir interaction energy density \\(\\varepsilon\\) for a semi-infinite plate parallel to an infinite plate. The white lines mark the position of the plates to guide the eye. The energy-density peak extends into the outside region, since worldlines can intersect both plates even if their center of mass is in the outside region. Ensemble parameters: 1000 loops, 10 000 ppl. Figure 5: Sketch of the configuration with two parallel semi-infinite plates at a distance \\(a\\). Figure 3: Sketch of the configuration of a semi-infinite plate parallel to an infinite plate at a distance \\(a\\). A worldline can intersect both plates even if its center of mass is located outside the two plates. Figure 6: Contour plot of the Casimir interaction energy density \\(\\varepsilon\\) for two parallel semi-infinite plates. The energy-density peak extends into the outside region, since worldlines can intersect both plates even if their center of mass is in the outside region. Ensemble parameters: 2000 loops, 10 000 ppl. ### Casimir comb Replicating the perpendicular-plate configuration in the horizontal direction of Figs. 1 and 2, we obtain a stack of semi-infinite plates (a \"Casimir comb\") perpendicularly above an infinite plate. Let \\(d\\) be the distance between two neighboring semi-infinite plates, i.e., the distance between two teeth of the comb. In the limit \\(d\\gg a\\), we obtain the Casimir force between the Casimir comb and the infinite plate by simply adding the forces for the individual perpendicular plates. The reliability of this approximation is obvious from Fig. 2, which shows that the dominant contribution to the energy is peaked inside a region with length scale \\(\\sim a\\). The resulting force is \\[F_{\\rm comb}=-\\gamma_{\\perp}\\,\\frac{\\hbar c}{a^{3}d}\\,A, \\tag{10}\\] with \\(A=Lnd\\) being the total area of a comb with \\(n\\) teeth. 
For a fixed comb, i.e., fixed \\(d\\), the short-distance Casimir force thus has a weaker dependence on \\(a\\) than for the parallel-plate case. In the opposite limit \\(d\\ll a\\), we expect the force between the comb and the plate to rapidly approach that of the parallel-plate case (1). This is because a generic worldline contributing to the force will have a spatial extent of order \\(a\\), such that the finer comb scale \\(d\\ll a\\) will not be resolved by the worldline ensemble to first approximation. A similar observation has been made in studies of periodic corrugations [28]. ### Finite parallel-plate configurations In a real parallel-plate experiment, the finite extent of the plates induces edge effects. If the typical length scale \\(L\\) of a plate (such as the edge length of a square plate or the radius of a circular disc) is much larger than the plate distance \\(a\\), our results for the idealized limits studied above can be used within a good approximation. The force law can then be summarized as \\[F=-\\gamma_{\\parallel}\\,\\frac{\\hbar c}{a^{4}}\\,A_{\\rm eff}, \\tag{11}\\] where the effective area \\(A_{\\rm eff}\\) also carries the information about the edge effects. For the case of a smaller plate with area \\(A\\) and circumference \\(C\\) above a much larger substrate, the effective area is given by \\[A_{\\rm eff}=A+\\frac{\\gamma_{\\rm lsi}}{\\gamma_{\\parallel}}\\,aC, \\tag{12}\\] e.g., \\(C=4L\\) for a square plate with edge length \\(L\\). For the case of two parallel plates of equal size and shape with area \\(A\\) and circumference \\(C\\), Eq. (12) holds with \\(\\gamma_{\\rm lsi}\\) replaced by \\(\\gamma_{\\rm lsi}\\). Obviously, the effective area \\(A_{\\rm eff}\\) is larger than the physical area in either case. Consider, for instance, a square plate of edge length \\(L\\) above a larger substrate: the Casimir edge effects induce a correction on the 1% level if \\(a\\gtrsim 1\\%\\) of \\(L\\). In the experiment of reference [27], the edge length is \\(L=1.2\\)mm and the distance goes up to \\(a=3\\mu\\)m. One of the edges faces an edge of the substrate, similar to Fig. 5, whereas the other three correspond to Fig. 3. For the Dirichlet scalar this results in a correction of 0.2%, which is much smaller than the 15% precision level of the experiment. ## IV Conclusions We have performed a detailed quantitative study of Casimir edge effects induced by a fluctuating scalar field obeying Dirichlet boundary conditions. All of our results exhibit a uniquely fixed dependence on dimensionful scales, as for Casimir's classic result. The effect of quantum fluctuations is quantitatively encoded in a universal dimensionless coefficient, which only depends on the geometry, the nature of the fluctuating field and the boundary conditions. From the perspective of a scattering-theory approach, Casimir edge effects are dominated by diffractive contributions to the correlation functions which are difficult to handle for direct approximation techniques [29; 30]; hence, our results give an important first insight into the properties of diffractive contributions to Casimir forces. For Casimir measurements involving electromagnetic fluctuations, our results serve as a first order-of-magnitude estimate of the error induced by edges of finite configurations - an error that any parallel-plate experiment has to deal with. The authors acknowledge support by the DFG Gi 328/1-3 (Emmy-Noether program) and Gi 328/3-2. ## References * (1) H.B.G. Casimir, Kon. Ned. Akad. Wetensch. Proc. 
**51**, 793 (1948). * (2) G.L. Klimchitskaya, A. Roy, U. Mohideen, and V.M. Mostepanenko, Phys. Rev. A **60**, 3487 (1999). * (3) A. Lambrecht and S. Reynaud, Eur. Phys. J. D **8**, 309 (2000). * (4) M. Bostrom and Bo E. Sernelius, Phys. Rev. Lett. **84**, 4757 (2000); for a controversial discussion of thermal corrections, see I. Brevik, S. A. Ellingsen and K. A. Milton, arXiv:quant-ph/0605005; V. M. Mostepanenko _et al._, arXiv:quant-ph/0512134.. * (5) V.B. Bezerra, G.L. Klimchitskaya, and V.M. Mostepanenko, Phys. Rev. A 62, 014102 (2000). * (6) M. Bordag, U. Mohideen and V. M. Mostepanenko, Phys. Rept. **353**, 1 (2001). * (7) K. A. Milton, \"The Casimir effect: Physical manifestations of zero-point energy\", World Scientific (2001). * (8) P.A. Maia Neto, A. Lambrecht, and S. Reynaud, Europhys. Lett. **69**, 924 (2005); Phys. Rev. A **72**, 012115 (2005). * (9) S. K. Lamoreaux, Phys. Rev. Lett. **78**, 5 (1997). * (10) U. Mohideen and A. Roy, Phys. Rev. Lett. **81**, 4549 (1998); * (11) A. Roy, C. Y. Lin and U. Mohideen, Phys. Rev. D **60**, 111101 (1999). * (12) T. Ederth, Phys. Rev. A **62**, 062104 (2000) * (13) H.B. Chan, V.A. Aksyuk, R.N. Kleiman, D.J. Bishop and F. Capasso, Science 291, 1941 (2001). * (14) F. Chen, U. Mohideen, G.L. Klimchitskaya and V.M. Mostepanenko, Phys. Rev. Lett. **88**, 101801 (2002). * (15) R.S. Decca, E. Fischbach, G.L. Klimchitskaya, D.E. Krause, D.L. Lopez and V.M. Mostepanenko, Phys. Rev. D **68**, 116003 (2003), * (16) H. Gies and K. Langfeld, Nucl. Phys. B **613**, 353 (2001); Int. J. Mod. Phys. A **17**, 966 (2002). * (17) see, e.g., C. Schubert, Phys. Rept. **355**, 73 (2001). * (18) H. Gies, K. Langfeld and L. Moyaerts, JHEP **0306**, 018 (2003); arXiv:hep-th/0311168. * (19) H. Gies and K. Klingmuller, J. Phys. A **39**, 6415 (2006). * (20) H. Gies and K. Klingmuller, Phys. Rev. Lett. **96**, 220401 (2006). * (21) A. Bulgac, P. Magierski and A. Wirzba, Phys. Rev. D **73**, 025007 (2006); A. Wirzba, A. Bulgac and P. Magierski, J. Phys. A **39**, 6815 (2006). * (22) T. Emig, R. L. Jaffe, M. Kardar and A. Scardicchio, Phys. Rev. Lett. **96**, 080403 (2006). * (23) M. Bordag, arXiv:hep-th/0602295. * (24) D.C. Roberts and Y. Pomeau, Phys. Rev. Lett. **95**, 145303 (2005); cond-mat/0503757. * (25) N. Graham, R. L. Jaffe, V. Khemani, M. Quandt, M. Scandurra and H. Weigel, Nucl. Phys. B **645**, 49 (2002). * (26) H. Gies and K. Klingmuller, arXiv:quant-ph/0605141. * (27) G. Bressi, G. Carugno, R. Onofrio and G. Ruoso, Phys. Rev. Lett. **88**, 041804 (2002). * (28) T. Emig, A. Hanke, R. Golestanian and M. Kardar, Phys. Rev. Lett. **87**, 260402 (2001); Phys. Rev. A **67**, 022114 (2003); T. Emig, Europhys. Lett. **62**, 466 (2003). * (29) M. Schaden and L. Spruch, Phys. Rev. A **58**, 935 (1998); Phys. Rev. Lett. **84** 459 (2000). * (30) A. Scardicchio and R. L. Jaffe, Nucl. Phys. B **704**, 552 (2005); Phys. Rev. Lett. **92**, 070402 (2004). * (31) M. Brown-Hayes, D.A.R. Dalvit, F.D. Mazzitelli, W.J. Kim and R. Onofrio, Phys. Rev. A **72**, 052102 (2005).
We compute Casimir forces in open geometries with edges, involving parallel as well as perpendicular semi-infinite plates. We focus on Casimir configurations which are governed by a unique dimensional scaling law with a universal coefficient. With the aid of worldline numerics, we determine this coefficient for various geometries for the case of scalar-field fluctuations with Dirichlet boundary conditions. Our results facilitate an estimate of the systematic error induced by the edges of finite plates, for instance, in a standard parallel-plate experiment. The Casimir edge effects for this case can be reformulated as an increase of the effective area of the configuration. pacs: 42.50.Lc,03.70.+k,11.10.-z
# Meteorologic parameters analysis above Dome C made with ECMWF data Kerstin Geissler European Southern Observatory, Alonso de Cordova 3107, Santiago, Chile Max-Planck Institut fur Astronomie, Konigstuhl 17 - D69117, Heidelberg, Germany Elena Masciadri INAF - Osservatorio Astrofisico di Arcetri, L.go E. Fermi 5, 50125 Florence, Italy Max-Planck Institut fur Astronomie, Konigstuhl 17 - D69117, Heidelberg, Germany [email protected] ## 1 Introduction The Antarctic Plateau has proven to be particularly attractive for astronomy for several years now (Fossat 2005, Storey et al. 2003). It is extremely cold and dry, and this makes it an interesting site for astronomy in the long wavelength ranges (infrared, sub-millimeter and millimeter) thanks to the low sky brightness and high atmospheric transmission caused by the low temperature and low concentration of water vapour in the atmosphere (Valenziano & Dall'Oglio 1999, Lawrence 2004, Walden et al. 2005). The Antarctic Plateau lies at high altitude (the whole continent has an average height of \\(\\sim 2500\\) m) and is characterized by a quite peculiar atmospheric circulation and a quite stable atmosphere, so that the level of the optical turbulence (\\(C_{N}^{2}\\) profiles) in the free atmosphere is, for most of the time, lower than above any mid-latitude site (Marks et al. 1996, Marks et al. 1999, Aristidi et al. 2003, Lawrence et al. 2004). Gillingham (1991) was the first to suggest such a low level of optical turbulence above the Antarctic Plateau. Atmospheric conditions, in general, degrade in the proximity of the coasts. A low level of optical turbulence in the free atmosphere is, in general, associated with large isoplanatic angles (\\(\\theta_{0}\\)). The wavefront coherence time (\\(\\tau_{0}\\)) is claimed to be particularly large above the Antarctic Plateau due to the combination of a weak \\(C_{N}^{2}\\) and a low wind speed throughout the whole troposphere. Under these conditions, an adaptive optics system can reach better levels of correction (smaller residual wavefront perturbations) than those obtained by an equivalent AO system above mid-latitude sites. Wavefront correction at high Zernike orders can be more easily reached over a large field of view, the wavefront-corrector can run at reasonably low frequencies and observations with long exposure time can be done in closed loop. This could prove particularly advantageous for some scientific programs such as searches for extra-solar planets. Of course, interferometry would also benefit from a weak \\(C_{N}^{2}\\) and a large \\(\\tau_{0}\\). In the last decade several site testing campaigns took place, first above South Pole (Marks et al., 1996, Loewenstein et al. 1998, Marks et al. 1999, Travouillon et al. 2003a, Travouillon 2003b) and, more recently, above Dome C (Aristidi et al. 2003, Aristidi et al. 2005a, Lawrence et al. 2004). Dome C seems to have some advantages with respect to the South Pole: **(a)** The sky emission and atmospheric transparency are some orders of magnitude better than above South Pole (Lawrence 2004) at some wavelengths. The sensitivity (which depends on the decrease in sky emission and increase in transparency) above Dome C is around 2 times better than above South Pole in near to mid-infrared regions and around 10 times better in mid to far-infrared regions. **(b)** The surface turbulent layer, principally generated by the katabatic winds, is much thinner above Dome C (tens of meters - Aristidi et al.
2005a, Lawrence et al. 2004) than above South Pole (hundreds of meters - Marks et al. 1999). The thickness and strength of the surface turbulent layer is indeed tightly correlated to the katabatic winds, a particular wind developed near the ground characterizing the boundary layer circulation above the whole Antarctic continent. Katabatic winds are produced by the radiative cooling of the iced surface that, by conduction, cools the air in its proximity. The cooled air, in proximity of the surface, becomes heavier than the air in the up layers and, for a simple gravity effect, it moves down following the ground slope with a speed increasing with the slope. Dome C is located on the top of an Altiplano in the interior region of Antarctica and, for this reason, the katabatic winds are much weaker above Dome C than above other sites in this continent such as South Pole placed on a more accentuated sloping region. At present not much is known about the typical values of meteorological parameters above Dome C during the winter (April-September) time i.e. the most interesting period for astronomers. The goals of our study are the following. **(i)** We intend to provide a complete analysis of the vertical distribution of the main meteorological parameters (wind speed and direction, absolute temperature, pressure) in different months of the year using European Center For Medium Weather Forecasts (ECMWF) data. A particular attention is addressed to the wind speed, key element for the estimate of the wavefront coherence time \\(\\tau_{0}\\). The ECMWF data-set is produced by the ECMWF General Circulation Model (GCM) and is therefore reliable at synoptic scale i.e. at large spatial scale. This means that our analysis can be extended to the whole troposphere and even stratosphere up to 20-25 km. The accuracy of such a kind of data is not particularly high in the first meters above the ground due to the fact that the orographic effects produced by the friction of the atmospheric flow above the ground are not necessarily well reconstructed by the GCMs1. We remind to the reader that a detailed analysis of the wind speed near the ground above Dome C extended over a time scale of 20 years was recently presented by Aristidi et al. (2005a). In that paper, measurements of wind speed taken with an automatic weather station (AWS) are used to characterize the typical climatological trend of this parameter. In the same paper it is underlined that estimates of the temperature near the ground are provided by Schwerdtfeger (1984). The interested reader can find information on the value of this meteorologic parameter above Dome C and near the surface in these references. Our analysis can therefore complete the picture providing typical values (seasonal trend and median values) of the meteorological parameters in the high part of the surface layer, the boundary layer and the free atmosphere. Thanks to the large and homogeneous temporal coverage of ECMWF data we will be able to put in evidence typical features of the meteorological parameters in the summer and winter time and the variability of the meteorological parameters in different years. The winter time is particularly attractive for astronomical applications due to the persistence of the _'night time'_ for several months. This period is also the one in which it is more difficult to carry out measurements of meteorological parameters due to logistic problems. 
For this reason ECMWF data offer a useful alternative to measurements for monitoring the atmosphere above Dome C over long time scales in the future. **(ii)** We intend to study the conditions of stability/instability of the atmosphere that can be measured by the Richardson number that depends on both the gradient of the potential temperature and the wind speed: \\(R_{i}\\)=\\(R_{i}\\)(\\(\\partial\\theta/\\partial h\\),\\(\\partial V/\\partial h\\)). The trigger of optical turbulence in the atmosphere depends on both the gradient of the potential temperature (\\(\\partial\\theta/\\partial h\\)) and the wind speed (\\(\\partial V/\\partial h\\)) i.e. from the \\(R_{i}\\). This parameter can therefore provide useful information on the probability to find turbulence at different altitudes in the troposphere and stratosphere in different period of the year. Why this is interesting? At present we have indications that, above Dome C, the optical turbulence is concentrated in a thin surface layer. Above this layer the \\(r_{0}\\) is exceptionally large indicating an extremely low level of turbulence. The astronomic community collected so far several elements certifying the excellent quality of the Dome C site and different solutions might be envisaged to overcome the strong surface layer such as rising up a telescope above 30 m or compensating for the surface layer with AO techniques. The challenging question is now to establish more precisely how much the Dome C is better than a mid-latitude site. In other words, which are the _typical_\\(\\varepsilon\\), \\(\\tau_{0}\\) and \\(\\theta_{0}\\) that we can expect from this site? We mean here as _typical_, values that repeat with a statistical relevance such as a mean or a median value. For example, the gain in terms of impact on instrumentation performances and astrophysical feedback can strongly change depending on how weak the \\(C_{N}^{2}\\) is above the first 30 m. In spite of the fact that \\(C_{N}^{2}\\) =\\(10^{-18}\\), \\(C_{N}^{2}\\) =\\(10^{-19}\\) or \\(C_{N}^{2}\\) =0 are all small quantities, they can have a different impact on the final value of \\(\\varepsilon\\), \\(\\tau_{0}\\) and \\(\\theta_{0}\\). Only a precise estimate of this parameter will provide to the astronomic community useful elements to better plan future facilities (telescopes or interferometers) above the Antarctic Plateau and to correctly evaluate the real advantage in terms of turbulence obtained choosing the Antarctic Plateau as astronomical site. With the support of the Richardson number, the wind speed profile and a simple analytical \\(C_{N}^{2}\\) model we will try to predict a \\(\\tau_{0,max}\\) and a \\(\\theta_{0,max}\\) without the contribution of the first 30 m of atmosphere. **(iii)** Data provided by ECMWF can be used as inputs for atmospheric meso-scale models usually employed to simulate the optical turbulence (\\(C_{N}^{2}\\) ) and the integrated astroclimatic parameters (Masciadri et al. 2004, Masciadri & Egner 2004, Masciadri & Egner 2005). Measurements of wind speed done during the summer time have been recently published (Fig. 1 - Aristidi et al. 2005a). We intend to estimate the quality and reliability of the ECMWF data comparing these values with measurements from Aristidi et al. so to have an indication of the quality of the initialization data for meso-scale models. We planned applications of a meso-scale model (Meso-Nh) to the Dome C in the near-future. 
As a further output this model will be able to reconstruct, in a more accurate way than the ECMWF data-set, the meteorologic parameters near the ground. The paper is organized in the following way. In Section 2 we present the median values of the main meteorological parameters and their seasonal trend. We also present a study of the Richardson number tracing a complete map of the instability/stability regions in the whole 25 km on a monthly statistical base. In Section 3 we study the reliability of our estimate comparing ECMWF analysis with measurements. In Section 4 we try to retrieve the typical value of \\(\\tau_{0,max}\\) and \\(\\theta_{0,max}\\) above Dome C. Finally, in Section 5 we present our conclusions. ## 2 Meteorological Parameters Analysis The characterization of the meteorological parameters is done in this paper with _'analyses'extracted by the catalog MARS (Meteorological Archival and Retrieval System) of the ECMWF. An _'analysis'_ provided by the ECMWF general circulation (GCM) model is the output of a calculation based on a set of spatio-temporal interpolations of measurements provided by meteorological stations distributed on the surface of the whole world and by satellite as well as instruments carried aboard aircrafts. These measurements are continuously up-dated and the model is fed by new measurements at regular intervals of few hours. The outputs are formed by a set of fields (scalar and/or vectors) of classical meteorological parameters sampled on the whole world with a horizontal resolution of 0.5\\({}^{\\circ}\\) correspondent to roughly 50 km. This horizontal resolution is quite better than that of the NCEP/NCAR Reanalyses having an horizontal resolution of 2.5\\({}^{\\circ}\\) so we can expect more accurate estimate of the meteorological parameters in the atmosphere. The vertical profiles are sampled over 60 levels extended up to 60 km. The vertical resolution is higher near the ground (\\(\\sim\\) 15 m above Dome C) and weaker in the high part of the atmosphere. In order to give an idea of the vertical sampling, Fig. 1 shows the output of one data-set (wind speed and direction, absolute and potential temperature) of the MARS catalog (extended in the first 30 km) with the correspondent levels at which estimates are provided. We extracted from the ECMWF archive a vertical profile of all the most important meteorological parameters (wind speed and direction, pressure, absolute and potential temperature) in the coordinates (75\\({}^{\\circ}\\) S, 123\\({}^{\\circ}\\) E) at 00:00 U.T. for each day of the 2003 and 2004 years. We verified that the vertical profiles of the meteorologic parameters extracted from the nearest 4 grid points around the Dome C (75\\({}^{\\circ}\\)06\\({}^{\\prime}\\)25\\({}^{\\prime\\prime}\\) S, 123\\({}^{\\circ}\\)20\\({}^{\\prime}\\)44\\({}^{\\prime\\prime}\\) E) show negligible differences. This is probably due to the fact that the orography of the Antarctic continent is quite smoothed and flat in proximity of Dome C. Above this site we can appreciate on an orographic map a difference in altitude of the order of a few meters over a surface of 60 kilometers (Masciadri 2000), roughly the distance between 2 contiguous grid points of the GCM. The orographic effects on the atmospheric flow are visibly weak at such a large spatial scale on the whole 25 km. 
We can therefore consider that these profiles of meteorologic parameters at macroscopic scale well represent the atmospheric characteristics above Dome C starting from the first ten of meters as previously explained. ### Wind speed The wind speed is one among the most critical parameters defining the quality of an astronomical site. It plays a fundamental role in triggering optical turbulence (\\(C_{N}^{2}\\) ) and it is a fundamental parameter in the definition of the wavefront coherence time \\(\\tau_{0}\\): \\[\\tau_{0}=0.049\\cdot\\lambda^{6/5}\\left[\\int V\\left(h\\right)^{5/3}\\cdot C_{N}^{2 }\\left(h\\right)dh\\right]^{-3/5} \\tag{1}\\] where \\(\\lambda\\) is the wavelength, V the wind speed and \\(C_{N}^{2}\\) the optical turbulence strength. Figure 2 shows the median vertical profile of the wind speed obtained from the ECMWF analyses during the 2003 (a) and 2004 (b) years. Dotted-lines indicate the first and third quartiles i.e. the typical dispersion at all heights. Figure 2 (c) shows the variability of the median profiles obtained during the two years. We can observe that from a qualitative (shape) as well as quantitative point of view (values) the results are quite similar in different years. They can therefore be considered as typical of the site. Due to the particular synoptic circulation of the atmosphere above Antarctica (the so called _'polar vortex'_) the vertical distribution of the wind speed in the summer and winter time is strongly different. The wind speed has important seasonal fluctuations above 10 km. Figure 3 shows the median vertical profiles of the wind speed in summer (left) and winter (right) time in 2003 (top) and 2004 (bottom). We can observe that the wind speed is quite weak in the first \\(\\sim\\)10 km from the sea-level during the whole year with a peak at around 8 km from the sea level (5 km from the ground). **At this height the median value is \\(12\\) m/sec and the wind speed is rarely larger than \\(20\\) m/sec.** Above 10 km from the sea level, the wind speed is extremely weak during the summer time but during the winter time, it monotonically increases with the height reaching values of the order of 30 m/sec (median) at 20 km. The typical seasonal wind speed fluctuations at 5 and 20 km are shown in Fig.4. This trend is quite peculiar and different from that observed above mid-latitude sites. In order to give an idea to the reader of such differences, we show in Fig. 5 the median vertical profiles of the wind speed estimated above Dome C in summer (dashed line) and winter time (full bold line) and above the San Pedro Martir Observatory (Mexico) in summer (dotted line) and winter time (full thin line) (Masciadri & Egner 2004, Masciadri & Egner 2005). San Pedro Martir is located in Baja California (31.0441 N, 115.4569 W) and it is taken here as representative of a mid-latitude site. Above mid-latitude sites (San Pedro Martir - Fig.5) we can observe that the typical peak of the wind speed at the jet-stream height (roughly 12-13 km from the sea-level) have a strong seasonal fluctuation. The wind speed is higher during the winter time (thin line) than during the summer time (dotted line) in the north hemisphere and the opposite happens in the south hemisphere. At this height, the wind speed can reach seasonal variations of the order of 30 m/sec. Near the ground and above 17 km the wind speed strongly decreases to low values (rarely larger than 15 m/sec). 
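To make the statistical treatment concrete, the sketch below illustrates how yearly and seasonal median wind-speed profiles with first and third quartiles (as in Figs. 2 and 3) can be assembled once the daily 00:00 U.T. ECMWF profiles have been interpolated onto a common height grid. This is only an illustrative sketch, not the authors' pipeline: the file names, the pre-interpolated arrays and the December-March / April-September season split are our assumptions.

```python
import numpy as np

# Assumed inputs: daily 00:00 U.T. wind-speed profiles for one year, already
# interpolated onto a common height grid (hypothetical .npy files).
speeds = np.load("wind_dome_c_2003.npy")       # shape (n_days, n_levels), m/s
dates = np.load("dates_2003.npy")              # numpy datetime64[D], one per row
heights = np.load("heights_km_asl.npy")        # km above sea level, one per column

def profile_stats(sample):
    """Return first-quartile, median and third-quartile profiles."""
    q1, med, q3 = np.percentile(sample, [25, 50, 75], axis=0)
    return q1, med, q3

# Yearly statistics (Fig. 2 style).
q1_all, med_all, q3_all = profile_stats(speeds)

# Seasonal split, assuming summer = Dec-Mar and winter = Apr-Sep (Fig. 3 style).
month = dates.astype("datetime64[M]").astype(int) % 12 + 1
summer = np.isin(month, [12, 1, 2, 3])
winter = np.isin(month, [4, 5, 6, 7, 8, 9])
_, med_summer, _ = profile_stats(speeds[summer])
_, med_winter, _ = profile_stats(speeds[winter])

# Median wind speed near the ~8 km (a.s.l.) tropopause peak discussed in the text.
i_peak = np.argmin(np.abs(heights - 8.0))
print(f"median wind at ~8 km a.s.l.: {med_all[i_peak]:.1f} m/s")
```

The same grouping, applied month by month instead of by season, yields profiles of the kind shown in Fig. 6.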
During the winter time, the wind speed above Dome C can reach at 20-25 km values comparable to the highest wind speed values obtained above mid-latitude sites at the jet-stream height (i.e. 30 m/sec). On the other side, one can observe that, **in the first \\(12\\) km from the sea-level, the wind speed above Dome C during the winter time is weaker than the wind above mid-latitude site in whatever period of the year.** Figure 6 shows, month by month, the median vertical profile of the wind speed during 2003 (green line) and during 2004 (red line). The different features of the vertical distribution of the wind speed that we have just described and attributed to the winter and summer time are more precisely distributed in the year in the following way. During December, January, February and March the median wind speed above 10 km is not larger than 10 m/sec. During the other months, starting from 10 km, the median wind speed increases monotonically with different rates. September and October show the steepest wind speed growing rates. It is worth to underline the same wind speed vertical distribution appears in different years in the same month. Only during the August month it is possible to appreciate substantial differences of the median profile in the 2003 and 2004 years. This result is extremely interesting permitting us to predict, in a quite precise way, the typical features of the vertical distribution of wind speed in different months. Figure 7 shows the cumulative distribution of the wind speed at 8-9 km from the sea level during each month. We can observe that, in only 20% of cases, the wind speed reaches values of the order of 20 m/sec during the winter time. This height (8-9 km) corresponds to the interface troposphere-tropopause above Dome C. As it will be better explained later, this is, in general, one of the place in which the optical turbulence can be more easily triggered due to the strong gradient and value of the wind speed. We remark that, similarly to what happens above mid-latitude sites, in correspondence of this interface, we find a local peak of the wind speed. In spite of this this value is much smaller above Dome C than above mid-latitude sites. We can therefore expect a less efficient production of turbulence at Dome C than above mid-latitude sites at this height. ### Wind direction Figure 8 shows, for each month, the median vertical profile of the wind direction during 2003 (green line) and during 2004 (red line). We can observe that, during the all months, in the low part of the atmosphere the wind blows principally from the South (\\(\\sim\\)200\\({}^{\\circ}\\)). In the troposphere (1-11 km) the wind changes, in a monotonic way, its direction from South to West, North/West (\\(\\sim\\)300\\({}^{\\circ}\\)). In the slab characterized by the tropopause and stratosphere (above 11 km) the wind maintains its direction to roughly 300\\({}^{\\circ}\\). Above 20 km, during the summer time (more precisely during December, January and February) the wind changes its direction again to South. This trend is an excellent agreement with that measured by Aristidi et al. (2005a)-Fig.6. ### Pressure The pressure is a quite stable parameter showing small variations during the summer and winter time above Antarctica. Figure 9 shows the pressure during the summer and winter time. 
In this picture, we indicate the values of the pressure associated to the typical interface troposphere-tropopause above mid-latitude sites (200 mbar correspondent to \\(\\sim\\) 11 km from the sea-level) and above Dome C (300-320 mbar \\(\\sim\\) 8 km from the sea-level). As explained before, the interface between troposphere and tropopause corresponds to a favourable place in which the optical turbulence can be triggered. ### Absolute and Potential Temperature The absolute and potential temperature are fundamental elements defining the stability of the atmosphere. Figure 10 shows, for each month, the median vertical profile of the absolute temperature during 2003 (green line) and during 2004 (red line). Figure 11 shows, for each month, the median vertical profile of the potential temperature during 2003 (green line) and during 2004 (red line)2. Footnote 2: We note that, in spite of the fact that the ECMWF data are not optimized for the surface layer, the difference of the absolute temperature at the first grid point above the ground between the summer and winter time is well reconstructed by the GCMs (\\(\\sim\\) 35\\({}^{\\circ}\\) as measured by Aristidi et al.(2005)) The value of \\(\\partial\\ \\theta/\\partial\\ z\\) indicates the level of the atmospheric thermal stability that is strictly correlated to the turbulence production. When \\(\\partial\\ \\theta/\\partial\\ z\\) is positive, the atmosphere has high probabilities to be stratified and stable. We can observe (Fig.11) that this is observed in the ECMWF data-set and it is particularly evident during the winter time. Another way to study the stability near the ground is to analyse the \\(\\partial\\ T/\\partial\\ z\\), i.e. the gradient of the absolute temperature. When \\(\\partial\\ T/\\partial\\ z\\) is positive, the atmosphere is hardly affected by advection because the coldest region of the atmosphere (the heaviest ones) are already in proximity of the ground. This is a typical condition for Antarctica due to the presence of ice on the surface but it is expected to be much more evident during the winter time due to the extremely low temperature of the ice. We can observe (Fig.10) that during the winter time, \\(\\partial\\ T/\\partial\\ z\\) is definitely positive near the ground indicating a strongly stratified and stable conditions. All this indicates that some large wind speed gradient on a small vertical scale have to take place to trigger turbulence in the surface layer in winter time. We discuss these results with those obtained from measurements in Section 2.5. A further important feature for the vertical distribution of the absolute temperature is the inversion of the vertical gradient (from negative to positive) in the free atmosphere indicating the interface troposphere-tropopause in general associated to an instable region due to the fact that \\(\\partial\\ \\theta/\\partial\\ z\\)\\(\\simeq\\) 0. We can observe that, above Dome C, this inversion is located at around 8 km from the sea level during all the months. In the summer time, the median vertical profile of the absolute temperature is quite similar to the one measured by Aristidi et al. (2005a)-Fig.9. 
However the temperature during the winter time, above the minimum reached at 8 km, does not increase in a monotonic way with the height but it shows a much more complex and not unambiguous trend from one month to the other with successive local minima and a final inversion from negative to positive gradients at 20 km (May, June, July and August) and 15 km (September and October). Considering that the regions of the atmosphere in which \\(\\partial\\ \\theta/\\partial\\ z\\)\\(\\simeq\\) 0 favour the instability of the atmosphere (see Section 2.5), the analysis of the absolute temperature in the 8-25 km range tells us that, at least from the thermal point of view, it is much more complex and difficult to define the stability of the atmosphere during the winter time than during the summer time. The Richardson number maps (Section 2.5) will be able to provide us some further and more precise insights on this topic. We finally observe that, during all the months, the vertical distribution of the absolute temperature is reproduced identically each year. ### Richardson number The stability/instability of the atmosphere at different heights can be estimated by the deterministic Richardson number \\(R_{i}\\): \\[R_{i}=\\frac{g}{\\theta}\\frac{\\partial\\theta/\\partial z}{\\left(\\partial V/ \\partial z\\right)^{2}} \\tag{2}\\] where \\(g\\) is the gravity acceleration 9.8 m\\(\\cdot\\)s\\({}^{-2}\\), \\(\\theta\\) is potential temperature and \\(V\\) is the wind speed. The stability/instability of the atmosphere is tightly correlated to the production of the optical turbulence and it can therefore be an indicator of the turbulence characteristics above a site. The atmosphere is defined as _'stable'_ when \\(R_{i}>1/4\\) and it is _'unstable'_ when \\(R_{i}<1/4\\). Typical conditions of instability can be set up when, in the same region, \\(\\partial\\;V/\\partial\\;z\\gg 1\\) and \\(\\partial\\;\\theta/\\partial\\;z<1\\) or \\(\\partial\\;\\theta/\\partial\\;z\\sim 0\\). Under these conditions the turbulence is triggered in strongly stratified shears. These kind of fluctuations in the atmosphere have a typical small spatial scale and can be detected by radiosoundings. When one treats meteorological parameters described at lower spatial resolution, as in our case, it is not appropriate to deal about a deterministic Richardson number. Following a statistical approach (Van Zandt et al. 1978), we can replace the deterministic \\(R_{i}\\) with a probability density function, describing the stability and instability factors in the atmosphere provided by meteorological data at larger spatial scales. This analysis has already been done in the past by Masciadri & Garfias (2001). Figures 12 and 13 show, for each month, the gradient of the potential temperature \\(\\partial\\;\\theta/\\partial\\;z\\) and the square of the gradient of the wind speed \\((\\partial\\;V/\\partial\\;z)^{2}\\). Finally, Fig.14 shows, for each month, the inverse of the Richardson number \\((1/R)\\) over 25 km. We show \\(1/R\\) instead of \\(R\\) because the first can be displayed with a better dynamic range than the second one. From a visual point of view, \\(1/R\\) permits, therefore, to better put in evidence stability differences in different months. As explained before, with our data characterized by a low spatial resolution, we can analyze the atmospheric stability in relative terms (in space and time), i.e. to identify regions that are less or more stable then others. 
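As a concrete illustration of Eq. (2), the sketch below computes a \\(1/R_{i}\\) profile from potential temperature and wind speed sampled on the model levels, using simple finite differences between adjacent levels. It is a minimal sketch under our own assumptions (the toy profiles, the \\((p_{0}/p)^{0.286}\\) form of the potential temperature and the handling of vanishing wind shear), not the procedure actually used to produce Figs. 12-14.

```python
import numpy as np

G = 9.8  # m s^-2, as in Eq. (2)

def potential_temperature(T, p, p0=1000.0, kappa=0.286):
    """theta = T * (p0/p)^kappa, with T in K and p, p0 in hPa."""
    return T * (p0 / p) ** kappa

def inverse_richardson(theta, V, z):
    """1/Ri at the mid-points of the vertical grid z [m], via Eq. (2) and
    finite differences of theta [K] and V [m/s] between adjacent levels."""
    dz = np.diff(z)
    dtheta_dz = np.diff(theta) / dz
    dV_dz = np.diff(V) / dz
    theta_mid = 0.5 * (theta[1:] + theta[:-1])
    ri = (G / theta_mid) * dtheta_dz / dV_dz**2
    ri = np.where(np.abs(dV_dz) < 1e-6, np.inf, ri)  # guard against zero shear
    return 1.0 / ri, 0.5 * (z[1:] + z[:-1])

# Toy profiles: a thermally stable layer with weak shear (expect 1/Ri well below 4).
z = np.arange(0.0, 5000.0, 250.0)      # m above the ground
T = 250.0 - 6.5e-3 * z                 # crude linear temperature profile [K]
p = 650.0 * np.exp(-z / 7000.0)        # crude exponential pressure profile [hPa]
V = 5.0 + 2.0e-3 * z                   # weak, constant wind shear [m/s]
inv_ri, z_mid = inverse_richardson(potential_temperature(T, p), V, z)
print(np.round(inv_ri[:5], 3))
```

As stressed in the text, values obtained this way from low-resolution profiles are meaningful only in a relative sense, for comparing the stability of different regions and periods.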
This is quite useful if we want to compare features of the same region of the atmosphere in different period of the year. The probability that the turbulence is developed is larger in regions characterized by a large \\(1/R\\). If, for example, we look at the \\(1/R\\) distribution in the month of January (middle of the summer time) we can observe that, a maximum is visible at around \\([2-5]\\) km from the ground3, correspondent to the height at which the gradient of the wind speed has a maximum and the gradient of the potential temperature \\(\\partial\\;\\theta/\\partial\\;z\\sim 0\\) (Fig.12). The presence of both conditions is a clear indicator of instability. In the same figure, \\(1/R\\) decreases monotonically above 5 km indicating conditions of a general stability of the atmosphere in this region. If we compare the value of \\(1/R\\) in different months we can easily identify two periods of the year in which the Richardson number present similar characteristics. Footnote 3: We prefer to concentrate our attention to the slab [30 m, inf) because our data-set is not optimized for study of the surface layer During the months of December-April, \\(1/R\\) has a similar trend over the all 25 km. One or two peaks of \\(1/R\\) are visible in the \\([2-5]\\) km region and a monotonically decreasing above 5 km is observed. During the months of May-November, \\(1/R\\) shows more complex features. At \\([2-5]\\) km from the ground we find a similar instability identified in the summer time but, above 5 km, we can observe other regions of instability mainly concentrated at 12 and 17 km from the ground. In a few cases (in September and in October above 12 km from the ground), the probability that the turbulence is triggered can be larger than at \\([2-5]\\) km. The analysis of the \\(R\\) (or \\(1/R\\)) does not give us the value of the \\(C_{N}^{2}\\) at a precise height but it can give us a quite clear picture of _'where'_ and _'when'_ the turbulence has a high probability to be developed over the whole year above Dome C. Summarizing we can state that, during the whole year, we have conditions of instability in the \\([2-5]\\) km from the ground. We can predict the development of the turbulence but probably characterized by an inferior strength than what observed above mid-latitude sites. The wind speed at \\([2-5]\\) km above Dome C is, indeed, clearly weaker than the wind speed at the same height above mid-latitude sites. In the high part of the atmosphere (\\(h>5\\) km), during the summer time the atmosphere is, in general, quite stable and we should expect low level of turbulence. During the winter time the atmosphere is more instable and one should expect a higher level of turbulence than during the summer time. The optical turbulence above 10 km would be monitored carefully in the future during the months of September and October to be sure that \\(\\tau_{0}\\) is competitive with respect to mid-latitude sites in winter time. Indeed, even a weak \\(C_{N}^{2}\\) joint to the large wind speed at these altitudes might induce important decreasing of \\(\\tau_{0}\\) with respect to the \\(\\tau_{0}\\) found above mid-latitude sites. Indeed, as can be seen in Fig.6, the wind speed at this height can be quite strong. On the other side, we un derline that this period does not coincide with the central part of the winter time (June, July and August) that is the most interesting for astronomic observations. 
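The monthly maps of Fig. 14 can be thought of as the median of such daily \\(1/R_{i}\\) profiles grouped by month. A minimal sketch of that aggregation step follows; the random profiles are synthetic placeholders standing in for the real 2003-2004 sample, and the \\(R_{i}<1/4\\) threshold is applied level by level.

```python
import numpy as np

# Daily 1/Ri profiles (e.g. from the previous sketch) for a two-year sample.
# Synthetic placeholders here; shape is (n_days, n_levels).
n_days, n_levels = 730, 60
rng = np.random.default_rng(0)
inv_ri_daily = rng.gamma(shape=1.0, scale=2.0, size=(n_days, n_levels))
month = (np.arange(n_days) // 30) % 12 + 1     # crude month label per day

# Monthly median 1/Ri: one column per month, one row per level (Fig. 14 style).
stability_map = np.full((n_levels, 12), np.nan)
for m in range(1, 13):
    sel = month == m
    if sel.any():
        stability_map[:, m - 1] = np.median(inv_ri_daily[sel], axis=0)

# 1/Ri > 4 corresponds to Ri < 1/4, i.e. cells (level, month) where turbulence
# is more likely to be triggered.
unstable = stability_map > 4.0
print("largest monthly median 1/Ri:", float(stability_map.max()))
print("unstable (level, month) cells:", int(unstable.sum()))
```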
We would like to stress again this concept: in this paper we are not providing absolute value of the turbulence but we are comparing levels of instabilities in different regions of the atmosphere and in different periods of the year. This status of stability/instability are estimated starting from meteorological parameters retrieved from ECMWF data-set. Considering that, as we proved once more, the meteorologic parameters are quite well described by ECMWF the relative status of stability/instability of the atmosphere represented by the Richardson number maps provided in our paper is a constrain against which measurements of the optical turbulence need to be compared. We expect that \\(C_{N}^{2}\\) measurements agree with the stability/instability properties indicated by the Richardson maps. Which is the typical seeing above the first 30 m? We should expect that the strength of the turbulence in the free atmosphere is larger in winter time than during the summer time. Are the measurements done so far in agreement with the Richardson maps describing the stability/instability of the atmosphere in different seasons and at different heights? Some sites testing campaigns were organized above Dome C (Aristidi et al. 2005a, Aristidi et al. 2005b, Lawrence et al. 2004) so far employing different instruments running in different periods of the year. We need measurements provided by a vertical profiler to analyze seeing values above 30 m. Balloons measuring vertical \\(C_{N}^{2}\\) profiles have been launched during the winter time (Agabi et al. 2006). Preliminary results indicate a seeing of \\(0\\farcs 36\\) above the first 30 m. Unfortunately, no measurements of the \\(C_{N}^{2}\\) vertical distribution during the summer time is available so far. Luckily, we can retrieve information on the level of activity of the turbulence in the high part of the atmosphere analysing the isoplanatic angle. This parameter is indeed particularly sensitive to the turbulence developed in the high part of the atmosphere. We know, at present, that the median \\(\\theta_{0}\\) measured with a GSM is \\(6\\farcs 8\\) during the summer time and \\(2\\farcs 7\\) during the winter time4. This means that, during the winter time, the level of the turbulence in the free atmosphere is higher than in summer time. This matches perfectly with the estimates obtained in our analysis. Footnote 4: We note that some discrepancies were found between \\(\\theta_{0}\\) measured by a GSM (\\(2\\farcs 7\\)) and balloons (\\(4\\farcs 7\\)) in the same period (Aristidi et al. 2005b). This should be analyzed more in detail in the future. However, in the context of our discussion, we are interested on a relative estimate i.e. on a parameter variation between summer and winter time. We consider, therefore, values measured by the same instrument (GSM) in summer and winter time. On the other side, a DIMM placed at 8.5 m from the ground measured a median value of seeing \\(\\varepsilon_{TOT}\\) = \\(0\\farcs 55\\) in summer time5 (Aristidi et al. 2005b) and \\(\\varepsilon_{TOT}\\) = \\(1\\farcs 3\\) in winter time (Agabi et al. 2006). This instrument measures the integral of the turbulence over the whole troposphere and stratosphere. The large difference of the seeing between the winter and summer time is certainly due to a general increasing of the turbulence strength near the ground in the summer-winter passage6. Indeed, measurements of the seeing above 30 m obtained with balloons and done during the winter time (Agabi et al. 
2006) give a typical value of \\(\\varepsilon_{(30m,\\infty)}\\) = \\(0\\farcs 36\\). Using the law: Footnote 5: The seeing can reach high values in summer time as shown by Aristidi et al. 2005b by Aristidi et al. 2005b Footnote 6: This does not mean that one can observe some high values of \\(\\varepsilon\\) in some period of the day in summer time as shown by Aristidi et al. (2005b). \\[\\varepsilon_{(0,30m)}=[\\varepsilon_{tot}^{5/3}-\\varepsilon_{(30m,\\infty)}^{5/3} ]^{3/5} \\tag{3}\\] we can calculate that during the winter time the median seeing in the first 30 m is equal to \\(\\varepsilon_{(0,30m),winter}\\) = \\(1\\farcs 2\\). In spite of the fact that we have no measurements of the seeing above 30 m in summer time, we know, from the Richardson analysis shown in this paper, that the seeing in this region of the atmosphere should be weaker in summer time than in winter time. This means that the seeing above 30 m in summer time should be smaller than \\(0\\farcs 36\\). Knowing that the total seeing in summer time is equal to \\(\\varepsilon_{TOT}\\) = \\(0\\farcs 55\\), one can retrieve that the seeing in the first 30 m should be smaller than \\(0\\farcs 55\\). This means that \\(\\varepsilon_{(0,30m),summer}\\)\\(<0\\farcs 55<\\varepsilon_{(0,30m),winter}\\) = \\(1\\farcs 2\\). This means that the turbulence strength on the surface layer is larger during the winter time than during the summer time. In Section 2.4 we said that during the winter time and near the ground, the thermal stability is larger than during the summer time. This is what the physics says and what the ECMWF data-set show but it is in contradiction with seeing measurements. The only way to explain such a strong turbulent layer near the ground during the winter time is to assume that the wind speed gradient in the first 30 m is larger during the winter time than during the summer time. This is difficult to accept if the wind speed is weaker during the winter time than during the summer time as stated by Aristidi et al. (2005). As shown in Masciadri (2003), the weaker is the wind speed near the surface, the weaker is the gradient of the wind speed. We suggest therefore a more detailed analysis of this parameter near the surface extended over the whole year. This should be done preferably with anemometers mounted on masts or kites. This will permit to calculate also the Richardson number in the first 30 m during the whole year and observe differences between summer and winter time. This can be certainly a useful calculation to validate the turbulence measurements. The ECMWF data-set have no the necessary reliability in the surface layer to prove or disprove these measurements. ## 3 Reliability of ECMWF data As previously explained, measurements obtained recently above Dome C with radiosoundings (Aristidi et al. 2005a) can be useful to quantify the level of reliability of our estimates. In Aristidi et al. (2005a) is shown (Fig.4) the median vertical profile of the wind speed measured during several nights belonging to the summer time. Figure 1 in Aristidi et al.(2005a) gives the histogram of the time distribution of measurements as a function of month. Most of measurements have been done during the December and January months. Figure 15 (our paper) shows the vertical profile of the wind speed obtained with ECMWF data related to the December and January months in 2003 and 2004 (bold line) and the measurements obtained during the same months above Dome C (thin full line). E. 
Aristidi, member of the LUAN team, kindly selected for us only the measurements related to these two months from their sample. We note that, the ECMWF are all calculated at 00:00 U.T. while the balloons were not launched at the same hour each day. Moreover, the measurements are related to 2000-2003 period while the analyses are related to the 2003-2004 period. In spite of this difference, the two mean vertical profiles show an excellent correlation. The absolute difference remains below 1 m/sec with a mean difference of 0.7 m/sec basically everywhere. In the high part of the atmosphere (Fig.15), the discrepancy measurements/ECMWF analyses is of the order of 1.5 \\(m/sec\\). This is a quite small absolute discrepancy but, considering the typical wind speed value of \\(\\sim\\) 4 \\(m/sec\\) at this height, it gives a relative discrepancy of the order of 25%. We calculated that, assuming measurements of the seeing so far measured above Dome C and \\(C_{N}^{2}\\) profiles as shown in Section 4 (Table 2), this might induce discrepancies on the \\(\\tau_{0}\\) estimates of the order of 13-16%. To produce a more detailed study on the accuracy of the ECMWF analyses and measurements one should know the intrinsic error of measurements and the scale of spatial fluctuations of the wind speed at this height. No further analysis is possible for us above the Dome C to improve the homogeneity of the samples (measurements and analyses) and better quantify the correlation between them because we do not access the raw data of measurements. We decided, therefore, to compare measurements with ECMWF analyses above South Pole in summer as well as in winter time to provide to the reader further elements on the level of reliability of ECMWF analyses above a remote site such as Antarctica. Figure 16 (January - summer time - 12 nights) and Fig.17 (June and July - winter time - 12 nights) show the median vertical profiles of wind speed, wind direction and absolute temperature provided by measurements7 and ECMWF analyses. We underline that, in order to test the reliability of ECMWF analyses, we considered all (and only) nights for which measurements are available on the whole 25 \\(km\\) for the three parameters: wind speed, wind direction and absolute temperature. It was observed that, during the winter time, the number of radiosounding (balloons) providing a complete set of measurements decreases. In this season it is frequent to obtain measurements only in the first 10-12 \\(km\\). Above this height the balloons blow up. To increase the statistic of the set of measurements extended over the whole 25 \\(km\\) we decided to take into account nights related to two months (June and July) in winter time. We can observe (Fig.16, Fig.17) that the correlation ECMWF analyses/measurements is quite good in winter as well in summer time for all the three meteorologic parameters. We expressly did not smoothed the fluctuations characterized by high frequencies of measurements. The discrepancy measurements/ECMWF analyses is smaller than 1 \\(m/sec\\) on the whole troposphere. It is also visible that the natural typical fluctuations at small scales of the measured wind speed is \\(\\sim\\) 1 \\(m/sec\\). We conclude, therefore, that a correlation measurements/ECMWF analyses within 1 m/sec error is a quite good correlation and these data-set can provide reliable initialization data for meso-scale models. 
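Operationally, the comparison with radiosoundings described above amounts to interpolating both data sets onto a common height grid and inspecting the absolute differences. A minimal sketch is given below; the profiles are synthetic placeholders loosely shaped like the December-January medians (a ~12 m/s peak near 8 km), not the actual measurements or analyses.

```python
import numpy as np

def compare_profiles(z_model, v_model, z_sonde, v_sonde, z_grid):
    """Interpolate both profiles onto z_grid and return mean/max |difference|."""
    vm = np.interp(z_grid, z_model, v_model)
    vs = np.interp(z_grid, z_sonde, v_sonde)
    diff = np.abs(vm - vs)
    return diff.mean(), diff.max()

# Synthetic placeholders (wind in m/s, heights in km above sea level).
z_ecmwf = np.array([3.3, 4.0, 5.0, 6.0, 8.0, 10.0, 15.0, 20.0, 25.0])
v_ecmwf = np.array([4.0, 5.5, 7.0, 9.0, 12.0, 8.0, 5.0, 4.0, 4.5])
z_sonde = np.linspace(3.3, 25.0, 120)
v_sonde = np.interp(z_sonde, z_ecmwf, v_ecmwf) + 0.7 * np.sin(z_sonde)

z_common = np.linspace(3.5, 25.0, 100)
mean_d, max_d = compare_profiles(z_ecmwf, v_ecmwf, z_sonde, v_sonde, z_common)
print(f"mean |ECMWF - sonde| = {mean_d:.2f} m/s, max = {max_d:.2f} m/s")
```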
As a further output of this study we observe that, during the winter time, the wind speed above South Pole is weaker than above Dome C, particularly above 8 km from the ground. This certainly affects the value of \\(\\tau_{0}\\), placing South Pole in a more favourable position with respect to Dome C. On the other hand, we know that the turbulent surface layer is much stronger and thicker above South Pole than above Dome C. This element also affects \\(\\tau_{0}\\), placing Dome C in a more favourable position with respect to South Pole. Further measurements are necessary to identify which of these two elements (the larger wind speed at high altitudes above Dome C or the stronger turbulent surface layer above South Pole) affects \\(\\tau_{0}\\) more. Indeed, while typical values of \\(\\tau_{0}\\) (1.58 msec) in winter time (June, July and August) above South Pole are already available (Marks et al. 1999), we do not yet have measurements of \\(\\tau_{0}\\) above Dome C for the same period. Of course, if \\(\\tau_{0}\\) above Dome C turns out to be larger than 1.58 msec, this would mean that the stronger surface turbulent layer above South Pole affects \\(\\tau_{0}\\) more than the larger wind speed at high altitudes above Dome C. This study is fundamental to define the potential of these sites for interferometry and adaptive optics. ## 4 Discussion We intend here to calculate the values of \\(\\theta_{0}\\) and \\(\\tau_{0}\\) in the slab of atmosphere in the range [h\\({}_{surf}\\), h\\({}_{top}\\)] using, as inputs, simple analytical models of the optical turbulence \\(C_{N}^{2}\\) and the median vertical profiles of the wind speed shown in Fig.3. The upper limit (h\\({}_{top}\\)) is defined by the maximum altitude at which balloons provide measurements before exploding and falling down. The lower limit (h\\({}_{surf}\\)) corresponds to the expected surface layer above Dome C. We define h\\({}_{surf}\\) = 30 m and h\\({}_{ground}\\) = 3229 m, the Dome C ground altitude. We consider independent models with h\\({}_{top}\\) = 25 km and h\\({}_{top}\\) = 20 km. Our analysis intends to estimate typical values of some critical astroclimatic parameters (\\(\\theta_{0}\\), \\(\\tau_{0}\\)) without the contribution of the first 30 m above the iced surface. The wavefront coherence time \\(\\tau_{0}\\) is defined as in Eq.(1) and the isoplanatic angle \\(\\theta_{0}\\) as: \\[\\theta_{0}=0.049\\cdot\\lambda^{6/5}\\left[\\int h^{5/3}\\cdot C_{N}^{2}\\left(h\\right)dh\\right]^{-3/5} \\tag{4}\\] Table 1 and Table 2 summarize the inputs and outputs of these estimates. **Model (A)-(F)**: The simplest (and least realistic) assumption is to consider the \\(C_{N}^{2}\\) constant over the [h\\({}_{surf}\\), h\\({}_{top}\\)] range. To calculate the \\(C_{N}^{2}\\) we use three reference values of the seeing: \\(\\varepsilon\\)=0\\(\\farcs\\)27, \\(\\varepsilon\\)=0\\(\\farcs\\)2 and \\(\\varepsilon\\)=0\\(\\farcs\\)1. We assume that the \\(C_{N}^{2}\\) is uniformly distributed over \\(\\Delta\\)h, where \\(\\Delta\\)h = h\\({}_{top}\\) - h\\({}_{ground}\\) - h\\({}_{surf}\\). We then calculate the \\(C_{N}^{2}\\) as: \\[C_{N}^{2}=\\frac{1}{\\Delta h}\\left(\\frac{\\varepsilon}{19.96\\cdot 10^{6}}\\right)^{5/3} \\tag{5}\\] The median vertical profiles of wind speed during the summer time in the 2003 and 2004 years (see Fig.3) are used for the calculation of \\(\\tau_{0}\\).
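To make the construction of Table 1 easier to follow, the sketch below evaluates Eq. (5) and then the integrals of Eqs. (1) and (4) for a Model A-like configuration. It is not the authors' code: the 0.5 \\(\\mu\\)m wavelength, the discretization and the stand-in wind profile are our assumptions, so the printed \\(\\theta_{0}\\) and \\(\\tau_{0}\\) only roughly approximate the tabulated values.

```python
import numpy as np

LAMBDA = 0.5e-6  # m; wavelength assumed for the seeing-to-C_N^2 conversion

def cn2_uniform(seeing_arcsec, dh_m):
    """Eq. (5): uniform C_N^2 [m^(-2/3)] over a slab of thickness dh_m [m]."""
    return (seeing_arcsec / 19.96e6) ** (5.0 / 3.0) / dh_m

def theta0_arcsec(h, cn2):
    """Eq. (4): isoplanatic angle [arcsec]; h in m above the observer."""
    j = np.trapz(h ** (5.0 / 3.0) * cn2, h)
    return 0.049 * LAMBDA ** 1.2 * j ** (-0.6) * 206265.0

def tau0_sec(v, cn2, h):
    """Eq. (1): wavefront coherence time [s]; v in m/s on the same grid h."""
    j = np.trapz(v ** (5.0 / 3.0) * cn2, h)
    return 0.049 * LAMBDA ** 1.2 * j ** (-0.6)

# Model A-like configuration: h_top = 25 km a.s.l., h_ground = 3229 m, h_surf = 30 m.
h_top, h_ground, h_surf = 25000.0, 3229.0, 30.0
dh = h_top - h_ground - h_surf
h = np.linspace(h_surf, h_surf + dh, 2000)        # heights above the ground [m]
cn2 = np.full_like(h, cn2_uniform(0.27, dh))      # ~3.5e-18 m^(-2/3), cf. Table 1

# Stand-in for the summer median wind profile of Fig. 3 (~12 m/s peak near 5 km).
v = 6.0 + 6.0 * np.exp(-((h - 5000.0) / 2000.0) ** 2)

print(f"C_N^2  = {cn2[0]:.2e} m^(-2/3)")
print(f"theta0 = {theta0_arcsec(h, cn2):.1f} arcsec")
print(f"tau0   = {1e3 * tau0_sec(v, cn2, h):.0f} msec")
```

The two-layer models discussed next only change the \\(C_{N}^{2}\\) array; the \\(\\theta_{0}\\) and \\(\\tau_{0}\\) integrals are evaluated in the same way.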
**Model (G)-(N)**: As discussed previously the turbulence above Dome C would preferably trigger at around \\([2-5]\\) km from the ground during the summer time. A more realistic but still simple model consists therefore in taking a thin layer of \\(\\Delta\\)h\\({}_{2}\\)=100 m thickness at 5 km from the ground and the rest of the turbulent energy uniformly distributed in the complementary \\(\\Delta\\)h\\({}_{1}\\)=\\(\\Delta\\)h - \\(\\Delta\\)h\\({}_{2}\\). This model is particularly adapted to describe the \\(C_{N}^{2}\\) in summer time in which there is a well localized region of the atmosphere in which the turbulence can more easily trigger (see Section 2.5). Considering the more complex morphology of the Richardson number during the winter time, we think that these simple \\(C_{N}^{2}\\) models **(A)-(N)** should not well describe the turbulence vertical distribution in this season. In other worlds, we have not enough elements to assume a realistic \\(C_{N}^{2}\\) model for the winter season and we will therefore limit our analysis to the summer season. To calculate the best values of \\(\\theta_{0}\\) and \\(\\tau_{0}\\) that can be reached above Dome C we consider the realistic minimum values of C\\({}_{N,1}^{2}\\)=\\(10^{-19}\\) m\\({}^{-2/3}\\) (Marks et al. (1999)) given by the _'atmospheric noise'_ and we calculate the value of the C\\({}_{N,2}^{2}\\) in the thin layer at 5 km using Eq.(5) and the following relation: \\[C_{N}^{2}\\cdot\\Delta h=C_{N,1}^{2}\\cdot\\Delta h_{1}+C_{N,2}^{2}\\cdot\\Delta h_{2}. \\tag{6}\\] Aristidi et al. (2005c) measured an isoplanatic angle \\(\\theta_{0}\\)=6\\(\\farcs\\)8 in the summer time. Looking at Table1 - (Model A-F), we deduce that such a \\(C_{N}^{2}\\) uniform distribution could match with these value (\\(\\theta_{0}\\)=6\\(\\farcs\\)8) only in association with an exceptional seeing of 0\\(\\farcs\\)1. In this case, we should expect a \\(\\tau_{0}\\) of the order of 30-40 msec. Alternatively, under the assumption of a \\(C_{N}^{2}\\) peaked at 8 km from the sea-level (Table1 - (Model G-N)), a seeing of 0\\(\\farcs\\)2 would better match with the \\(\\theta_{0}\\)=6\\(\\farcs\\)8. In this case we should expect a \\(\\tau_{0}\\) of the order of 13-16 msec. Summarizing we can expect the following data sets: [\\(\\varepsilon\\)=0\\(\\farcs\\)1, \\(\\theta_{0}\\)=6\\(\\farcs\\)8, \\(\\tau_{0}\\)=30-40 msec] or [\\(\\varepsilon\\)=0\\(\\farcs\\)2, \\(\\theta_{0}\\)=6\\(\\farcs\\)8, \\(\\tau_{0}\\)=13-16 msec]. The second one is much more realistic. It is interesting to note that the \\(\\tau_{0}\\) can be quite different if one assume a seeing slightly different (0\\(\\farcs\\)1-0\\(\\farcs\\)2) under the hypothesis of a distribution of the \\(C_{N}^{2}\\) as described in this paper. We deduce from this analysis (joint with the discussion done in Section 2.5) that the seeing above 30 m during the summer time is probably of the order of 0\\(\\farcs\\)2 or even smaller. This means that, in the free atmosphere, the seeing should be weaker during the summer time than during the winter time (average \\(\\varepsilon=0\\farcs\\)36 - Agabi et al. 2006). This result well matches with our Richardson number maps. However, it would be interesting to measure the seeing in the free atmosphere during the summer time in order to better constrain the values of \\(\\tau_{0}\\). This is not evident due to the fact that radiosoundings used to measure the \\(C_{N}^{2}\\) so far can not be used to measure this parameter during the summer time. 
Measurements are not reliable due to spurious temperature fluctuations experienced by the sensors in this season (Aristidi, private communication). From this simple analysis we deduce reasonable values of \\(\\theta_{0,max}\\)\\(\\sim\\)\\(10-11\\arcsec\\) and \\(\\tau_{0,max}\\)\\(\\sim\\)16 msec during the summer time under the best atmospheric conditions and the most realistic distribution of \\(C_{N}^{2}\\) in the atmosphere. We remind the reader that some measurements of \\(\\tau_{0}\\) have already been published (Lawrence et al. 2004). Those measurements were taken right at the summer-winter transition (April-May). Our simple \\(C_{N}^{2}\\) model is not suited to comparing estimates of \\(\\tau_{0}\\) and \\(\\theta_{0}\\) such as those made in this Section with the values measured by Lawrence et al. (2004). More detailed information on \\(C_{N}^{2}\\) measurements in winter time will make it possible, in the future, to verify the measurements by Lawrence et al. (2004). ## 5 Conclusion In this paper we present a complete study of the vertical distribution of the main meteorological parameters (wind speed and direction, pressure, absolute and potential temperature) characterizing the atmosphere above Dome C from a few meters above the ground up to 25 km. This study employs the ECMWF _'analyses'_ obtained with a General Circulation Model (GCM); it extends over the two years 2003 and 2004 and it provides a statistical analysis of all the meteorological parameters and of the Richardson number in each month of the year. The Richardson number provides useful insights into the probability that optical turbulence is triggered in different regions of the atmosphere and in different periods of the year; it monitors, indeed, the conditions of stability/instability of the atmosphere from a dynamic as well as a thermal point of view. The main results obtained in our study are: * The wind speed vertical distribution shows two different trends in summer and winter time due to the _'polar vortex'_ circulation. In the first 8 km above the ground the wind speed is extremely weak during the whole year. The median value at 5 km, corresponding to the peak of the profile placed at the troposphere/tropopause interface, is 12 m/sec. At this height the third quartile of the wind speed is never larger than 20 m/sec. Above 5 km the wind speed remains extremely weak (the median value is smaller than 10 m/sec) during the summer time. During the winter time the wind speed increases monotonically with height at a significant rate, reaching, at 25 km, median values of the order of 30 m/sec. A fluctuation of the order of 20 m/sec between summer and winter time is estimated at 20 km. * The atmosphere above Dome C shows quite different regimes of stability/instability in summer and winter time. During the summer time the Richardson number indicates a general regime of stability in the whole atmosphere. The turbulence is preferentially triggered at [2-5] km from the ground. During the winter time the atmosphere shows more significant turbulent activity. In spite of the fact that the analysis of the Richardson number in different months of the year is qualitative9, our predictions are consistent with preliminary measurements obtained above the site in particular periods of the year.
Considering the good reliability of the meteorological parameters retrieved from the ECMWF analyses the Richardson maps shown here should be considered as a reference to check the consistency of further measurements of the optical turbulence in the future. Footnote 9: It does not provide a measure of the \\(C_{N}^{2}\\) profiles but the relative probability to trigger turbulence in the atmosphere. * With the support of a simple model for the \\(C_{N}^{2}\\) distribution, the Richardson number maps and the wind speed vertical profile we calculated a best \\(\\theta_{0,max}\\)\\(\\sim\\)10\\({}^{\\prime\\prime}\\)\\(-\\)11\\({}^{\\prime\\prime}\\) and \\(\\tau_{0,max}\\)\\(\\sim\\)16 msec above Dome C during the summer time. * The vertical distribution of all the meteorological parameters show a good agreement with measurements. This result is quite promising for the employing of the ECMWF analyses as initialization data for meso-scale models. Besides, it opens perspectives to employ ECMWF data for a characterization of meteorologic parameters extended over long timescale. Data-sets from MARS catalog (ECMWF) were used in this paper. This study was supported by the Special Project (spdesee) - ECMWF-[http://www.ecmwf.int/about/special_projects/index.html](http://www.ecmwf.int/about/special_projects/index.html). We thanks the team of LUAN (Nice - France): Jean Vernin, Max Azouit, Eric Aristidi, Karim Agabi and Eric Fossat for kindly providing us the wind speed vertical profile published in Aristidi et al. (2005a). We thanks Andrea Pellegrini (PNRA - Italy) for his kindly support to this study. This work was supported, in part, by the Community's Sixth Framework Programme and the Marie Curie Excellence Grant (FOROT). ## References * Aristidi et al. (2003) Aristidi, E., Agabi, K., Vernin, J., Azouit, M., Martin, F., Ziad, A., Fossat, E., 2003, A&A, 406, L19 * Aristidi et al. (2005a) Aristidi, E., Agabi, K., Azouit, M., Fossat, E., Vernin, J., Travouillon, T., Lawrence J., S., Meyer, C., Storey, J., W., V., Halter, B., Roth, W., L., Walden, V., 2005a, A&A, 430, 739 * Aristidi et al. (2005b) Aristidi, E., Agabi, K., Azouit, M., Fossat, E., Martin, F., Sadibekova, T., Vernin, J., Ziad, A., Travouillon, T. 2005b, Proceedings of Conference on \"Wide Field Survey Telescope on DOME C/A\", June 3-4, Beijing, as a supplement of \"Acta Astronomica Sinica\" * Aristidi et al. (2005c) Aristidi, E., Agabi, K., Fossat, E., Azouit, M., Martin, F., Sadibekova, T., Travouillon, T., Vernin, J., Ziad, A. 2005c, A&A, 444, 2, 651 * Agabi et al. (2006) Agabi, K., Aristidi, E., Azouit, M., Fossat, E., Martin, F., Sadibekova, T., Vernin, J., Ziad, A. 2006, PASP, 118, 344 * Azouit & Vernin (2005) Azouit, M. & Vernin, J. 2005, PASP, 117, 536 * Fossat (2005) Fossat, E. 2005, JApA, 26, 349 * Lawrence (2004) Lawrence, J.S., 2004, PASP, 116, 482 * Lawrence et al. (2004) Lawrence J.S., Ashley, M.C.B., Tokovinin, A., Travouillon, T., 2004, Nature, 431, 278 * Loewenstein et al. (1998) Loewenstein, R. F., Bero, C., Lloyd, J. P., Mrozek, F., Bally, J., Theil, D. 1998, ASP Conf. Ser., 141, 296 * Lovis et al. (2005)Gillingham, P.R. 1991, Astron. Soc. Aust. Proc., 9, 55 * () Marks, R.D., Vernin, J., Azouit, M., Briggs, J.W., Burton, M.G., Ashley, M.C.B., Manigault, M., 1996, A&A, 118, 385 * () Marks, R.D., Vernin, J., Azouit, M., Manigault, J.F., Clevelin, C., 1999, A&A, 134, 161 * () Masciadri, E., 2000, ASP Conf. Series, Vol. 266, 288 * () Masciadri, E. & Jabouille, P., 2001, A&A, 376, 727 * () Masciadri, E. 
& Garfias, T., 2001, A&A, 366, 708 * () Masciadri, E., 2003, RMxAA, 39, 249 * () Masciadri, E., Avila, R., Sanchez, L.J., 2004, RMxAA, 40, 3 * () Masciadri, E. & Egner, S., 2004, SPIE Glasgow, 5490, 818 * () Masciadri, E. & Egner, S., 2005, PASP, submitted * () Schwerdtfeger,W., 1984, Weather and climate of the Antarctic, Developments in atmospheric science, 15 (Elsiever) * () Storey, J.W.V., Ashley, M., C., B., Lawrence, J.S., Burton, M.G. 2003, Memorie Sai, 2, 13 * () Travouillon, T., Ashley, M.C.B., Burton, M.G., Storey, J.W.V., Loewenstein, R.F., 2003a, A&A, 400, 1163 * () Travouillon, T., Ashley, M.C.B., Burton, M.G., Storey, Conroy, P., Hovey, G., Jarnyk, M., Sutherland, R., J.W.V., Loewenstein, R.F., 2003b, A&A, 409, 1169 * () Valenziano, L., Dall'Oglio, G., 1999, PASP, 16, 167 * () VanZandt, T.E., Green, J.L., Gage, K.S. and Clark, W.L., 1978, Radio Science, 13, 819 * () Walden, V.P., Town, M.S., Halter, B., Storey, J.W.V., 2005, PASP, 117, 300Figure 4: Seasonal trend of median wind speed estimated with ECMWF data above Dome C at 8 km (thin line) and 20 km (bold line). This seasonal trend shows the effect of the so called _‘polar vortex’_. Figure 5: Comparison between the median wind speed profile estimated during the winter time (full bold line) and summer time (dashed line line) (2003) above Dome C and the median wind speed profile estimated above the San Pedro Martir Observatory in summer (dotted line) and winter (full thin line) time. San Pedro Martir is taken as representative of a mid-latitude site. Figure 15: Mean wind speed vertical profiles measured by balloons (thin line) and provided by ECMWF data (bold line) in December and January months. Balloons measurements were published by Aristidi et al. 2005. See text for further details. Figure 4: Atmospheric pressure in winter and summer time during the 2003. The figure shows the typical pressure 300-320 mbar associated to the 8 km latitude and the 190-200 mbar associated to the 11 km altitude. Figure 14: Seasonal trend of median wind speed estimated with ECMWF data above Dome C at 8 km (thin line) and 20 km (bold line). This seasonal trend shows the effect of the so called _‘polar vortex’_. Figure 1: Vertical profiles of wind speed, wind direction, absolute and potential temperature in the format of the MARS catalog (ECMWF archive). The asterisk indicate the spatial sampling over which the values of the meteorologic parameters are delivered. Figure 2: Yearly median wind speed vertical profile. (a)-(b) Median wind speed vertical profile (full line) and first and third quartiles (dotted lines) during the 2003 and 2004 years. (c) Median wind speed vertical profiles during the 2003 and 2004 years. Figure 3: Summer (left) and winter (right) median wind speed vertical profile estimated in 2003 (top) and 2004 (bottom). The first and third quartiles are shown with a dotted line. Figure 6: Seasonal median **wind speed** vertical profiles. Green line: year 2003. Red line: year 2004. Figure 7: Seasonal cumulative distribution of the wind speed in the 8-9 km range from the sea level. This corresponds roughly to the tropopause height above Dome C. The pressure at this altitude is around 320 mb. Figure 8: Seasonal median **wind direction** vertical profiles. Green line: year 2003. Red line: year 2004. \\(0^{\\circ}\\) corresponds to the North. Figure 10: Seasonal median **absolute temperature** vertical profiles. Green line: year 2003. Red line: year 2004. Figure 11: Seasonal median **potential temperature** vertical profiles. 
Thin full line: year 2003. Dotted line: year 2004. Dashed line: years 2003 and 2004. Figure 12: Seasonal median \\(\\frac{\\partial\\theta}{\\partial h}\\) vertical profile calculated with ECMWF analyses of 2003 and 2004. Figure 13: Seasonal median \\(\\left(\\frac{\\partial V}{\\partial h}\\right)^{2}\\) vertical profile calculated with ECMWF analyses of 2003 and 2004. Figure 14: Seasonal median **1/R - Inverse of the Richardson Number** vertical profile calculated with ECMWF analyses of 2003 and 2004. Figure 16: **South Pole**. ECMWF analyses (dotted line) and measurements (full line) related to 12 nights in January 2003. Figure 17: **South Pole**. ECMWF analyses (dotted line) and measurements (full line) related to 12 nights in June and July 2003. \\begin{table} \\begin{tabular}{l c c c c c c} \\hline \\hline & & & & & sum-2003 & sum-2004 \\\\ Models & h\\({}_{top}\\) & \\(\\varepsilon\\) & \\(C_{N,2}^{2}\\) & \\(\\theta_{0}\\) & \\(\\tau_{0}\\) & \\(\\tau_{0}\\) \\\\ & (km) & (arcsec) & m\\({}^{(-2/3)}\\) & (arcsec) & (msec) & (msec) \\\\ \\hline Model G & 25 & 0.27 & 7.46\\(\\cdot 10^{-16}\\) & 4.60 & 10.17 & 9.07 \\\\ Model H & 25 & 0.2 & 4.40\\(\\cdot 10^{-16}\\) & 6.03 & 13.87 & 12.14 \\\\ Model I & 25 & 0.1 & 1.25\\(\\cdot 10^{-16}\\) & 10.18 & 28.34 & 33.00 \\\\ \\hline Model L & 20 & 0.27 & 7.51\\(\\cdot 10^{-16}\\) & 4.73 & 10.15 & 11.89 \\\\ Model M & 20 & 0.2 & 4.49\\(\\cdot 10^{-16}\\) & 6.32 & 13.75 & 16.08 \\\\ Model N & 20 & 0.1 & 1.30\\(\\cdot 10^{-16}\\) & 11.65 & 28.02 & 32.67 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: Isoplanatic angle \\(\\theta_{0}\\), wavefront coherence time \\(\\tau_{0}\\) during the summer time in 2003 and 2004. \\(C_{N,1}^{2}=\\)\\(10^{-19}\\)m\\({}^{(-2/3)}\\) in all the models. \\begin{table} \\begin{tabular}{l c c c c c c} \\hline \\hline & & & & & sum-2003 & sum-2004 \\\\ Models & h\\({}_{top}\\) & \\(\\varepsilon\\) & \\(C_{N}^{2}\\) & \\(\\theta_{0}\\) & \\(\\tau_{0}\\) & \\(\\tau_{0}\\) \\\\ & (km) & (arcsec) & m\\({}^{(-2/3)}\\) & (arcsec) & (msec) & (msec) \\\\ \\hline Model A & 25 & 0.27 & 3.53\\(\\cdot 10^{-18}\\) & 1.95 & 14.00 & 15.38 \\\\ Model B & 25 & 0.2 & 2.14\\(\\cdot 10^{-18}\\) & 2.63 & 18.91 & 20.77 \\\\ Model C & 25 & 0.1 & 6.74\\(\\cdot 10^{-19}\\) & 5.26 & 37.83 & 41.54 \\\\ \\hline Model D & 20 & 0.27 & 4.58\\(\\cdot 10^{-18}\\) & 2.53 & 13.52 & 14.87 \\\\ Model E & 20 & 0.2 & 2.78\\(\\cdot 10^{-18}\\) & 3.41 & 18.25 & 9.89 \\\\ Model F & 20 & 0.1 & 8.76\\(\\cdot 10^{-19}\\) & 6.82 & 36.49 & 40.11 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Isoplanatic angle \\(\\theta_{0}\\), wavefront coherence time \\(\\tau_{0}\\) during the summer time in 2003 and 2004.
In this paper we present the characterization of all the principal meteorological parameters (wind speed and direction, pressure, absolute and potential temperature) extended up to 25 km above the ground and over two years (2003 and 2004) above the Antarctic site of Dome C. The data set is composed of _'analyses'_ provided by the General Circulation Model (GCM) of the European Centre for Medium-Range Weather Forecasts (ECMWF), which are part of the MARS catalog. A monthly and seasonal (summer and winter time) statistical analysis of the results is presented. The Richardson number is calculated for each month of the year over the first 25 km to study the stability/instability of the atmosphere. This permits us to trace a map indicating where and when the optical turbulence has the highest probability of being triggered over the whole troposphere, tropopause and stratosphere. We finally try to predict the best expected isoplanatic angle and wavefront coherence time (\(\theta_{0,max}\) and \(\tau_{0,max}\)) employing the Richardson number maps, the wind speed profiles and simple analytical models of the \(C_{N}^{2}\) vertical profiles.

atmospheric effects -- turbulence -- site testing
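The stability analysis mentioned above rests on the gradient Richardson number \(Ri=(g/\theta)(\partial\theta/\partial h)/(\partial V/\partial h)^{2}\), whose two ingredients are precisely the quantities shown in Figures 12 and 13. A minimal sketch of how \(1/Ri\) can be evaluated on discrete ECMWF-like levels is given below; the profiles used are synthetic placeholders, not actual MARS data.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m s^-2]

def inverse_richardson(h, theta, wind):
    """1/Ri on discrete levels, from potential temperature theta(h) [K]
    and horizontal wind speed V(h) [m/s]; h is the height [m]."""
    dtheta_dh = np.gradient(theta, h)
    dv_dh = np.gradient(wind, h)
    buoyancy = (G / theta) * dtheta_dh          # (g/theta) dtheta/dh
    with np.errstate(divide="ignore", invalid="ignore"):
        inv_ri = dv_dh ** 2 / buoyancy          # large 1/Ri -> likely turbulent
    return inv_ri

# Synthetic monthly-median-like column: weakly stable stratification
# plus a wind maximum near the 8 km tropopause.
h = np.linspace(0.0, 25e3, 60)
theta = 280.0 + 0.003 * h
wind = 5.0 + 35.0 * np.exp(-((h - 8e3) / 2e3) ** 2)

inv_ri = inverse_richardson(h, theta, wind)
i = int(np.argmax(inv_ri))
# Ri < 1/4 (i.e. 1/Ri > 4) is the classical threshold for dynamical instability.
print(f"max 1/Ri = {inv_ri[i]:.2f} at h = {h[i] / 1e3:.1f} km")
```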
arxiv-format/0606556v1.md
# High Accuracy Matching of Planetary Images Giuseppe Vacanti and Ernst-Jan Buis cosine Science & Computing BV, Niels Bohrweg 11, 2333 CA, Leiden, The Netherlands ## 1 Introduction One of the goals of the European Space Agency's BepiColombo mission to Mercury is the measurement of the amplitude of the libration of Mercury. In order to do this images of the same surface areas will be taken at different times during the libration cycle and compared. When all other effects--spacecraft position, Mercury's rotation, spacecraft attitude, _etc._--are taken into account, any remaining discrepancy between the positions of features on the surface must be due to the libration of the crust of the planet. Here we address the question of to what accuracy images can be matched, and we focus only on the algorithmic aspects of the problem, disregarding all other sources of error that would have to be taken into account to solve the scientific problem. We shall show that by using a shape-based matching algorithm images taken under a wide range of illumination conditions can be matched to one tenth of a pixel root-mean-square. Based on this we conclude that the accuracy of the pattern matching algorithm is not the limiting factor in the ultimate accuracy that can be achieved by the libration experiment on BepiColombo. ## 2 Pattern Matching The pattern matching algorithms to be used in this study will have to deal with images that may appear to be drastically different from one another, still they refer to the same region. Consider for example the images shown in figure 1. To the human eye it is clear that the images refer to the same region, but any algorithm that relied on the presence of identical features in the images would have great difficulty concluding that the images are related at all. What is clear by visual inspection is that a number of edges--sharp changes in the level of illumination between contiguous pixels--are common between images. These edges appear where sharp changes in the altimetric profile occur. It is also clear that not all edges appear in all images, owing to the complex interplay between the position of the Sun, and the orientation and slope of the features on the ground. Compare for instance images \\(b\\) and \\(d\\) in figure 1: the left rim of the crater is bright in one image, and dark in the other. No similar change is observed on the right rim of the crater. Take now images \\(a\\) and \\(c\\). Here the left rim of the crater appears almost to be the same, but the extent of the shadow cast by the right rim is dramatically different. The ideal algorithm must be able to identify the edges in the two images, must be robust against local, non-linear changes in illumination conditions, and it must be able to operate by identifying a subset of features that are common to the pair of images being compared. Finally, based on the common features identified, the algorithm must be able to recover a possible translation between the two images. Algorithms that try to minimize the difference between the two, possibly scaled, images are clearly not going to be suited for the task, unless the images to be compared are taken under very similar illumination conditions. While this is possible, it would be a very strong constraint on the operations of a mission. Based on the considerations above, we have chosen to make use of the image matching algorithms available in the HALCON software library (Ref. [1]). This is a commercial product used in image vision and image recognition applications. 
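Before turning to the HALCON technique actually used, the edge-based idea can be illustrated with a deliberately naive stand-in (written for this text, and in no way a reproduction of the commercial algorithm): extract edge pixels from the gradient magnitude and scan integer translations, keeping the shift that maximizes the fraction of reference edge pixels landing on edge pixels of the comparison image. Because only edge positions are used, the toy matcher is insensitive to a monotonic rescaling of the intensities.

```python
import numpy as np

def edge_pixels(img, rel_thresh=0.2):
    """Boolean edge map: pixels whose gradient magnitude exceeds a fraction
    of the image maximum (a crude stand-in for a real edge detector)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > rel_thresh * mag.max()

def match_translation(ref, comp, max_shift=8):
    """Integer shift (dx, dy) to apply to `comp` so that its edge pixels
    best overlap those of `ref` (np.roll wraps around, fine for a toy)."""
    ref_edges, comp_edges = edge_pixels(ref), edge_pixels(comp)
    n_ref = ref_edges.sum()
    best_score, best_shift = -1.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(comp_edges, dy, axis=0), dx, axis=1)
            score = np.logical_and(ref_edges, shifted).sum() / n_ref
            if score > best_score:
                best_score, best_shift = score, (dx, dy)
    return best_shift, best_score

# Toy test: a crater-like rim; the comparison image is shifted and its
# intensities are non-linearly rescaled (sqrt) to mimic changed illumination.
y, x = np.mgrid[0:128, 0:128]
rim = np.exp(-((np.hypot(x - 64.0, y - 64.0) - 30.0) / 3.0) ** 2)
ref = 100.0 + 400.0 * rim
comp = np.roll(np.roll(np.sqrt(ref), 3, axis=0), -2, axis=1)

print(match_translation(ref, comp))   # expect a recovered shift of about (2, -3)
```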
One particular technique available in the HALCON library is the so-called _shape-based matching_ (Ref. [2]). This technique is based on an algorithm that identifies the shape of patterns in images, and can be instructed to find in a comparison image the shape identified in a reference image. ### Shape-Based Pattern Matching The detailed description of the algorithm can be found in the HALCON documentation (Ref. [2]) and has been submitted as part of a European Patent Application (Ref. [3]). The algorithm proceeds through the following steps: 1. A so-called _region of interest_ is identified in the reference image. This is a region of the image where edges will be looked for. The region of interest must be selected to be fully contained in both images. This step is done by hand, based on some _a priori_ knowledge, or visual inspection of the images. In our case, where the simulated translations amount to a few pixels along either or both the X and Y axes (see SS 5), the region of interest is the whole reference image, minus a few pixels around the edges. In the case of two partially overlapping images of the same region one would choose the intersection of the two images. 2. Features are identified in the comparison image with an edge detection algorithm. Pixels identified by the edge detection algorithm are part of the _reference pattern_. 3. The edge detection algorithm is run on the comparison image. This results in a second collection of pixels, the _search pattern_. 4. The algorithm now overlays the reference pattern on the search pattern. The reference pattern is stepped over the search pattern in an attempt to maximize the number of overlapping pixels. In doing so the algorithm is allowed to reduce the number of pixels in the reference pattern. The maximum fraction of the search pattern that can be discarded in the process can be set by the user. In our application the reference and search patterns can differ vastly. Therefore we have allowed the algorithm to throw away up to \\(70\\,\\%\\) of the pixels. In trying to maximize the overlap between the two patterns, the algorithm can be instructed to allow for a rotation and a scaling factor. 5. The algorithm reports the recovered translations and the fraction of the pixels in the reference pattern that was used to find a match. The latter is called the _score_. Within the parameters given by the user, the algorithm always chooses the match with the highest score. #### 2.1.1 The Meaning of the Score The HALCON score is the normalized sum of the cross product between the vectors describing the position of the pixels in the reference pattern and those describing the pixels in the search pattern. If the two patterns are identical, it is clear that the score will be equal to one. When pixels have to be dropped from the reference pattern, the score will decrease. In the actual algorithm, the sum of the cross products of the pixels used in the match is slightly modified to take into account the possibility of non-linear changes in the illumination conditions, either locally, around certain features, or globally, across the entire image (Ref. [3]). It is tempting to interpret the HALCON score as a quality factor for the goodness of the translation parameters found. However this would be wrong on two counts. First of all, it is clear that often only a subset of the pattern to be looked for is to be found in the search pattern. (Refer back to the examples shown in figure 1.) 
In this case the search algorithm must discard some of the pixels in the reference pattern in order to find a good matching sub-pattern. How many pixels are left in the sub-pattern has nothing to do with whether the match is good or not. Second, the notion of _goodness of match_ implies that the matched pattern can be compared to an expected result, Figure 1: A digital elevation model of Olympus Mons viewed by an imaging camera under different illumination conditions: (a) The Sun is at \\(5^{\\circ}\\) elevation; (b) The Sun is at \\(25^{\\circ}\\) elevation; (c) The Sun is at \\(50^{\\circ}\\) elevation; (d) The Sun is at \\(85^{\\circ}\\) elevation. In all cases the Sun’s azimuth is \\(0^{\\circ}\\) (to the right). or true pattern, or that the algorithm proceeds through the optimization of an objective function. But the only measure of how well the algorithm has performed, is how close the recovered translation is to the values injected in the simulation. This means that the accuracy of the algorithm can only be judged through an extensive set of Monte Carlo simulations. Only by repeatedly comparing two images in multiple realizations of the same detection and matching process, is it possible to gauge the statistical errors in the results, and therefore establish to what accuracy and under what conditions the algorithm can be effectively employed. ## 3 Approach The following steps were identified. 1. Render a digital elevation model of the surface of a planet, by choosing the position of the Sun and of the spacecraft. We have used **povray** (Ref. 4) for this task. 2. Create two images, the reference image and the comparison image. The latter is possibly translated along one or both of the image axes. 3. Convert the rendered images to instrument count images of a given signal-to-noise ratio. 4. Recover translation parameters between the two images using a shape-based pattern matching algorithm. 5. Study the accuracy with which the parameters are recovered, and derive information on the range of illumination conditions for which the parameters can be successfully recovered. ## 4 Digital Elevation Models Four digital elevation models have been used in this study. These are shown in figure 21. Footnote 1: The Olympus Mons digital elevation model was kindly provided to us by the Mars Express Team. ## 5 Simulation runs After some preliminary simulations used to determine a useful sampling scheme of the parameter space, the bulk of the simulations were carried out with the following parameters values: * Translations: 100 meter in X, in Y, and in X and Y. * Sun elevation angles: 10\\({}^{\\circ}\\), 30\\({}^{\\circ}\\), 60\\({}^{\\circ}\\), 90\\({}^{\\circ}\\). * Nominal spacecraft height 1500 km 2. Footnote 2: The actual height of the camera above the surface is not important for the results of this study, at least as long as the images recorded from different heights show the same level of detail. * Sun azimuth angle: several (the same azimuth angle for reference and comparison images). For one model a difference in azimuth of \\(30^{\\circ}\\) between the two images was introduced. * Four digital elevation models rendered with a signal-to-noise ratio of 50. The signal-to-noise ratio is determined when the Sun is at the zenith. ## 6 Results For each digital elevation model used, several thousand data points have been calculated. 
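A single data point of this kind can be mocked up end to end in a few lines. The sketch below is not the pipeline used for the paper: there is no povray rendering, and the shape-based matcher is replaced by a plain FFT cross-correlation, which is only adequate here because reference and comparison share the same illumination. It merely illustrates step 3 of the approach, i.e. turning a rendered image into Poisson count data at a chosen signal-to-noise ratio, and the bookkeeping of \(\Delta_{x}\) and \(\Delta_{y}\) over repeated realizations.

```python
import numpy as np

rng = np.random.default_rng(1)

def to_counts(rendered, snr=50.0):
    """Convert a rendered image into Poisson count data. With Poisson noise
    S/N = sqrt(N), so the brightest rendered level maps to snr**2 counts."""
    expected = rendered / rendered.max() * snr ** 2
    return rng.poisson(expected).astype(float)

def recover_shift(ref, comp):
    """Integer shift of `comp` relative to `ref` from the cross-correlation
    peak (same-illumination stand-in for the shape-based matcher)."""
    xc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(comp)).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    wrap = lambda d, n: d - n if d > n // 2 else d
    return wrap(dx, xc.shape[1]), wrap(dy, xc.shape[0])

# One (Sun geometry, translation) combination, repeated ten times as in the text.
y, x = np.mgrid[0:128, 0:128]
scene = 1.0 + np.exp(-((np.hypot(x - 50.0, y - 70.0) - 20.0) / 2.5) ** 2)
true_dx, true_dy = 4, -3
shifted_scene = np.roll(np.roll(scene, true_dy, axis=0), true_dx, axis=1)

deltas = []
for _ in range(10):
    dx, dy = recover_shift(to_counts(scene), to_counts(shifted_scene))
    deltas.append((dx - true_dx, dy - true_dy))
deltas = np.array(deltas, dtype=float)
print("mean Delta_x, Delta_y:", deltas.mean(axis=0), " rms:", deltas.std(axis=0))
```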
Each data point refers to a particular combination of Sun elevation and azimuth for the reference and comparison images, and a translation along one or both of the image axes. For each combination of parameters, the same number of simulation runs (ten) was carried out. In the following we use \\(\\Delta_{x}\\) and \\(\\Delta_{y}\\) to indicate the difference between the amplitude of the translation recovered by the algorithm and the amplitude of the translation used in the simulation. Therefore the expectation value of \\(\\Delta_{x}\\) and \\(\\Delta_{y}\\) is always 0, and the width of their distributions is a measure of the statistical error in the reconstruction. In table 1 we give a summary of how successful the algorithm has been. For each digital elevation model we give the number of realizations (all Sun angles and all translations), how many times the algorithm failed to return a match, and how many times the returned result was more than 2 pixels away from the expected result. The latter figure has no special meaning, but is meant to give an idea of the global behavior of the algorithm. One thing is immediately apparent: for the synthetic digital elevation model the algorithm always returned a Figure 2: The four digital elevation models used in this study. From left: The Olympus Mons caldera; a bowl-shaped crater close to the Olympus Mons caldera; a synthetic landscape with several bowl-shaped craters; another synthetic landscape containing approximately 5000 craters. (An image of Mercury taken from an height of 400 km might contain a few thousands of craters with a diameter larger than a few tens of meters.) Darker colors represent lower elevations. match, but a larger fraction of the returned answers was significantly wrong. Because the synthetic model is significantly more regular than the other two -- in particular the craters are identical but for a scale factor, the algorithm has an easier job at finding some matching pattern, although relatively more often the pattern found is not the good one. Based on these observations, we present the results for the synthetic model separately. However, we will be able to show that the same conclusions on the accuracy of the algorithm can be reached for all digital elevation models by applying the same selection criteria on the illumination conditions. In the following sections we use the following notation: * \\(\\theta_{cut}\\) refers to the following selection: \\(\\theta_{sun}>10^{\\circ}\\) and \\(\\theta_{sun}\ eq 90^{\\circ}\\), where \\(\\theta_{sun}\\) is the Sun elevation angle in the reference or the comparison image. * \\(\\phi_{cut}\\) refers to the following selection: \\(|\\phi_{sun}-n\\times 90^{\\circ}|>20\\) for \\(n\\in 0,1,2,3,4\\), where \\(\\phi_{sun}\\) is the Sun azimuth -- this is the same in both the reference and the comparison images. ### The Olympus Mons Models Most of the high-deviation points come from images where the Sun elevation is equal to \\(90^{\\circ}\\), or images where the Sun elevation is lower than \\(10^{\\circ}\\) (\\(\\theta_{cut}\\)). This is shown in figure 3. In figure 4 we plot the data with \\(\\theta_{cut}\\) applied versus the Sun azimuth. We observe that the mean of \\(\\Delta_{x}\\) and \\(\\Delta_{y}\\) vary with \\(\\phi_{sun}\\) in a quasi-periodic fashion. What we observe is that the deviation is larger when the Sun azimuth is orthogonal to one of the image axes. 
Namely, the largest deviations for \\(\\Delta_{x}\\) are observed when \\(\\phi_{sun}\\approx 90^{\\circ}\\) or \\(270^{\\circ}\\), whereas the largest deviations for \\(\\Delta_{y}\\) are observed when \\(\\phi_{sun}\\approx 0^{\\circ}\\) and \\(180^{\\circ}\\). The direction defined by the Sun azimuth appears to be a preferential direction: translations along this direction can be more accurately recovered, because the features on the terrain create sharper shadows along the direction to the Sun. Figure 4: The average \\(\\Delta_{x}\\) and \\(\\Delta_{y}\\) versus the Sun azimuth for the Olympus Mons data. The error bar on each point represents the root-mean-square. Figure 3: The effect of the Sun elevation cut on the distribution of \\(\\Delta_{x}\\) (top) and \\(\\Delta_{y}\\) (bottom) for the Olympus Mons data. \\begin{table} \\begin{tabular}{|c|c|c|c|} \\hline DEM & Total & No match & \\(\\Delta_{x}>2\\) or \\(\\Delta_{y}>2\\) \\\\ \\hline \\hline A & 9878 & \\(2.8\\%\\) & \\(2.3\\%\\) \\\\ B & 7300 & \\(3.2\\%\\) & \\(4.2\\%\\) \\\\ C & 6254 & \\(0\\%\\) & \\(12.6\\%\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: Global success statistics for the simulation runs. For each digital elevation model (DEM) the following data are reported: the total number of independent realizations; the fraction of realizations for which the algorithm was not able to find any match; the fraction of realizations for which either \\(|\\Delta_{x}|\\) or \\(|\\Delta_{y}|\\) were larger than two pixels. DEM keys: \\(A=\\) Olympus Mons, \\(B=\\) bowl crater, \\(C=\\) synthetic. Based on the data in figure 4 we can devise a selection criterion for the Sun azimuth, so that translations along both axes can be recovered with comparable accuracy. The selection criterion is that the Sun azimuth must be more than \\(20^{\\circ}\\) away from both image axes. The distributions of \\(\\Delta_{x}\\) and \\(\\Delta_{y}\\) when both \\(\\theta_{cut}\\) and \\(\\phi_{cut}\\) are applied are shown in figure 5. Figure 5 represents the end point of our analysis. We observe that the two distributions are centered on 0, and have a width of \\(\\approx\\) 0.1 pixel root-mean-square. The two distributions have a tail in the direction of the translation applied in the simulations (-100 m along the X axis, and +100 m along the Y axis). The nature of this slight asymmetry in not understood at present. #### 6.1.1 Changing the Sun Azimuth The bulk of the simulation runs was carried out with the same Sun azimuth for both the reference and comparison images. We however also made a set of simulation runs where the azimuth of the Sun in the comparison image was \\(30^{\\circ}\\) away from the azimuth used in the reference image; only a translation of 100 meters along the X axis was applied. The results are shown in figure 6. Even in this case the algorithm is able to recover the injected translation with an accuracy of \\(\\approx\\) 0.1 pixel root-mean-square. ### The Synthetic Model As already hinted to, the results based on the synthetic digital elevation model give a slightly different picture, although the main conclusions do not change. The \\(\\theta_{cut}\\) criterion is still effective in rejecting data points that return a large deviation from the expectation. A point of discrepancy with respect to the Olympus Mons data is the behavior of the recovered translations as a function of Sun azimuth. Figure 7 shows that the effect observed for the Olympus Mons models is almost not observed here. 
After the \\(\\theta_{cut}\\) criterion is applied, any remaining offset is smaller than 0.05 pixel. Finally, figure 8 shows the distribution for \\(\\Delta_{x}\\) and \\(\\Delta_{y}\\). Again, the translation is recovered with an accuracy of \\(\\approx\\) 0.1 pixel root-mean-square, but the details of the distributions differ from what was observed before. ## 7 Conclusions We have performed a study of the accuracy with which a shape-based pattern matching algorithm can identify translations between remote sensing images of the same planetary features. We have applied the algorithms in a Monte Carlo fashion to digital elevation models (both real Figure 5: The distribution of \\(\\Delta_{x}\\) and \\(\\Delta_{y}\\) for the Olympus Mons data, once both the Sun cuts are applied. Figure 6: The distribution of \\(\\Delta_{x}\\) (top) and \\(\\Delta_{y}\\) (bottom) for the Olympus Mons data for date where the Sun azimuth of the comparison and reference images differ by \\(30^{\\circ}\\). and synthetic) in order to investigate the statistical performance of the procedure. We find that for a broad range of illumination conditions translations between images can be recovered with an accuracy of 0.1 pixel _r.m.s._ The algorithm performs best for translations along the projected direction to the Sun on the image plane. This study shows that translations along both image axes at the same time can be recovered with the same accuracy of 0.1 pixel as long as the projected direction to the Sun lies more than \\(\\approx 20^{\\circ}\\) away from the same image axes. Finally, this study demonstrates that the images to be compared need not be taken under the very same illumination conditions in order to be effectively matched. For a given Sun azimuth, any pair of images taken with Sun elevation angles larger than \\(10^{\\circ}\\) can be used; images taken when the Sun is at the zenith must also be avoided. The range of useful illumination conditions is further broadened because this study concludes that differences in Sun azimuth of at least \\(30^{\\circ}\\) do not affect the accuracy of the matching algorithm. The error contributed by the matching algorithm is but one of the several error contributions to be taken into account during the analysis of the data pertaining to the measurement of the possible libration of the surface of Mercury. This study shows that the accuracy of the pattern matching algorithm is not a limiting factor in the ultimate accuracy of the libration experiment aboard the BepiColombo mission to Mercury. ## Acknowledgments This study was carried out under ESA contract ESTEC 18624. ## References * [1]_HALCON Documentation_. [http://www.mvtec.com/halcon/](http://www.mvtec.com/halcon/). * [2]_HALCON Manual_. [http://www.mvtec.com/download/documentation/](http://www.mvtec.com/download/documentation/). * [3] EP1193642, April 2002. [http://www.espacenet.com/](http://www.espacenet.com/). * [4] Persistence Of Vision Raytracer. [http://www.povray.org/](http://www.povray.org/). Figure 8: The distribution of \\(\\Delta_{x}\\) (top) and \\(\\Delta_{y}\\) (bottom) for the Synthetic Model data, once both the Sun cuts are applied. Figure 7: The average \\(\\Delta_{x}\\) (top) and \\(\\Delta_{y}\\) (bottom) versus the Sun azimuth for the Synthetic Model data. The error bar on each point represents the root-mean-square.
We address the question of to what accuracy remote sensing images of the surface of planets can be matched, so that the possible displacement of features on the surface can be accurately measured. This is relevant in the context of the libration experiment aboard the European Space Agency's BepiColombo mission to Mercury. We focus here only on the algorithmic aspects of the problem, and disregard all other sources of error (spacecraft position, calibration uncertainties, _etc._) that would have to be taken into account. We conclude that for a wide range of illumination conditions, translations between images can be recovered to about one tenth of a pixel _r.m.s._

pattern matching; BepiColombo; libration
arxiv-format/0608013v2.md
# Neutron star matter in an effective model T. K. Jha [email protected] P. K. Raina Indian Institute of Technology, Kharagpur, India - 721302 P. K. Panda S. K. Patra Institute of Physics, Bhubaneswar, India - 751005 November 4, 2021 ## I Introduction Dense matter studies have opened up new dimensions in understanding the nature and behavioral aspects of nuclear matter at extremes. An ideal laboratory for such studies can be neutron stars, which contains matter around ten times denser than atomic nuclei. These compact stars are believed to be made in the aftermath of type II supernova explosions resulting from the gravitational core collapse of massive stars. All known forces of nature i.e, strong, weak, electromagnetic and gravitational, play key roles in the formation, evolution and the composition of these stars. Thus the study of dense matter not only deals with astrophysical problems such as the evolution of neutron stars, the supernovae mechanism but also reviews the implications from heavy-ion collisions. Neutron stars are charge neutral, and the fact that charge neutrality drives the stellar matter away from isospin-symmetric nuclear matter, the study of neutron stars lends important clues in understanding the isospin dependence of nuclear forces. Due to \\(\\beta\\)-stability conditions, neutron star is much closer to neutron matter than the symmetric nuclear matter [1]. However, with increasing densities, the fermi energy of the occupied baryon states reaches eigenenergies of other species such as \\(\\Lambda^{0}(1116)\\), \\(\\Sigma^{-,0,+}(1193)\\) and \\(\\Xi^{-,0}(1318)\\) and the possibility of these hyperonic states are speculated in the dense core of neutron stars ([2]-[4]). Studies on hypernuclei experiments suggests the presence of hyperons in dense matter such as neutron stars. Theoretically also, it has been found that the inclusion of hyperons in neutron star cores lowers the energy and pressure of the system resulting in the lowering of the maximum mass of neutron stars, in the range of observational limits. Various hadronic models have been applied to describe the structure of neutron stars. Non-relativistic [5; 6] and relativistic models ([7]-[10]) predict nearly same maximum mass of neutron star. Relativistic models have been successfully applied to study finite nuclei [11] and infinite nuclear matter [12] where they not only satisfy the properties of nuclear matter at saturation but also the extrapolation to high density is automatically causal. Field theories such as the non-linear \\(\\sigma-\\omega\\) model [13] have been phenomenal in this respect. Presently we apply an effective hadronic model to study the equation of state (EOS) for neutron star matter in the mean-field type approach [12]. Along with non-linear terms, which ensure reasonable saturation properties of nuclear matter, the model embodies dynamical generation of the vector meson mass that ensures a reasonable incompressibility. Therefore, one of the motivation for the present study is to check the applicability of the model to the study of high density matter. Secondly, the parameter sets of the model are in accordance with recently obtained heavy-ion data [14]. With varying incompressibility and effective nucleon mass the study can impart vital information about their dependency and the underlying effect on the resulting EOS. 
Also the existing knowledge on the presence of hyperons in the dense core of these compact stars is inadequate, largely because the coupling strength of these hyperons are unknown. So it would be interesting to see the effect of hyperons in the dense core of neutron stars and the predictive power of the present model in establishing the global properties of the resulting neutron star sequences. The outline of the paper is as follows: First we give a brief description of the ingredients of the hadronic model that we implement in our calculations. After introducing the Tolman-Oppenheimer-Volkov (TOV) equations for the static star, we present some general features of the equation of state and then look at the gross properties of the neutron stars in our calculations and compare our results with the observed masses of the neutron stars, and also with predictions from some of the field-theoreticaltron star mass and radius imposed by recent estimates of the gravitational redshift in the M-R plane. Finally we conclude with outlook on the possible extensions of the current approach. ## II The equation of state We start with an effective Lagrangian generalized to include all the baryonic octets interacting through mesons: \\[{\\cal L} = \\bar{\\psi}_{B}\\ \\left[\\left(i\\gamma_{\\mu}\\partial^{\\mu}-g_{\\omega B} \\gamma_{\\mu}\\omega^{\\mu}-\\frac{1}{2}g_{\\rho B}\\vec{\\rho}_{\\mu}\\cdot\\vec{\\tau} \\gamma^{\\mu}\\right)-g_{\\sigma B}\\ \\left(\\sigma+i\\gamma_{5}\\vec{\\tau}\\cdot\\vec{\\pi} \\right)\\right]\\ \\psi_{B} \\tag{1}\\] \\[+\\frac{1}{2}\\big{(}\\partial_{\\mu}\\vec{\\pi}\\cdot\\partial^{\\mu}\\vec {\\pi}+\\partial_{\\mu}\\sigma\\partial^{\\mu}\\sigma\\big{)}-\\frac{\\lambda}{4}\\big{(} x^{2}-x_{0}^{2}\\big{)}^{2}-\\frac{\\lambda B}{6}\\big{(}x^{2}-x_{0}^{2}\\big{)}^{3}- \\frac{\\lambda C}{8}\\big{(}x^{2}-x_{0}^{2}\\big{)}^{4}\\] \\[-\\frac{1}{4}F_{\\mu\ u}F_{\\mu\ u}+\\frac{1}{2}g_{\\omega B}{}^{2}x^ {2}\\omega_{\\mu}\\omega^{\\mu}-\\frac{1}{4}\\vec{R}_{\\mu\ u}\\cdot\\vec{R}^{\\mu\ u}+ \\frac{1}{2}m_{\\rho}^{2}\\vec{\\rho}_{\\mu}\\cdot\\vec{\\rho}^{\\mu}\\.\\] Here \\(F_{\\mu\ u}\\equiv\\partial_{\\mu}\\omega_{\ u}-\\partial_{\ u}\\omega_{\\mu}\\) and \\(x^{2}=\\vec{\\pi}^{2}+\\sigma^{2}\\), \\(\\psi_{B}\\) is the baryon spinor, \\(\\vec{\\pi}\\) is the pseudoscalar-isovector pion field, \\(\\sigma\\) is the scalar field. The subscript \\(B=n,p,\\Lambda,\\Sigma\\) and \\(\\Xi\\), denotes for baryons. The terms in eqn. (1) with the subscript \\({}^{\\prime}B^{\\prime}\\) should be interpreted as sum over the states of all baryonic octets. In this model for hadronic matter, the baryons interact via the exchange of the \\(\\sigma\\), \\(\\omega\\) and \\(\\rho\\)-meson. The Lagrangian includes a dynamically generated mass of the isoscalar vector field, \\(\\omega_{\\mu}\\), that couples to the conserved baryonic current \\(j_{\\mu}=\\psi_{B}\\gamma_{\\mu}\\psi_{B}\\). In this paper we shall be concerned only with the normal non-pion condensed state of matter, so we take \\(\\vec{\\pi}=0\\) and also the pion mass \\(m_{\\pi}=0\\). The interaction of the scalar and the pseudoscalar mesons with the vector boson generate the mass through the spontaneous breaking of the chiral symmetry. Then the masses of the baryons, scalar and vector mesons, which are generated through \\(x_{0}\\), are respectively given by \\[m_{B}=g_{\\sigma B}x_{0},\\ \\ m_{\\sigma}=\\sqrt{2\\lambda}x_{0},\\ \\ m_{\\omega}=g_{\\omega B}x_{0}. 
\tag{2}\] In the above, \(x_{0}\) is the vacuum expectation value of the \(\sigma\) field, \(\lambda=({m_{\sigma}}^{2}-{m_{\pi}}^{2})/({2f_{\pi}}^{2})\), where \(m_{\pi}\) is the pion mass and \(f_{\pi}\) the pion decay constant, and \(g_{\omega B}\) and \(g_{\sigma B}\) are the coupling constants for the vector and scalar fields, respectively. In the mean-field treatment we ignore the explicit role of \(\pi\) mesons. The Dirac equation for baryons is the Euler-Lagrange equation of \({\cal L}\) and is obtained as \[[\gamma_{\mu}(p^{\mu}-g_{\omega B}\omega^{\mu}-\frac{1}{2}g_{\rho B}\vec{\tau}\cdot\vec{\rho}^{\mu})-g_{\sigma B}\sigma]\psi_{B}=0. \tag{3}\] The mass term in the above equation appears in the form \(g_{\sigma B}\sigma\), which is referred to as the effective baryon mass, \(m_{B}^{*}=g_{\sigma B}\sigma\). We will now proceed to calculate the equation of motion for the scalar field. The scalar field dependent terms from the Lagrangian density are: \[-\frac{\lambda}{4}\big{(}x^{2}-x_{0}^{2}\big{)}^{2}-\frac{\lambda B}{6}\big{(}x^{2}-x_{0}^{2}\big{)}^{3}-\frac{\lambda C}{8}\big{(}x^{2}-x_{0}^{2}\big{)}^{4}-g_{\sigma B}\ \bar{\psi}_{B}\ \sigma\ \psi_{B}+\frac{1}{2}g_{\omega B}{}^{2}x^{2}\omega_{0}^{2}\, \tag{4}\] where in the mean-field limit \(\omega=\omega_{0}\). The constant parameters \(B\) and \(C\) appear in the higher-order self-interaction terms of the scalar field and are fixed by the nuclear matter properties at the saturation point. Using equation (2) and \(m_{B}^{*}/m_{B}\equiv x/x_{0}\equiv Y\), the above expression can be rewritten, in units of \(\lambda x_{0}^{4}\), as \[-\frac{1}{4}(1-Y^{2})^{2}+\frac{B}{6c_{\omega B}}(1-Y^{2})^{3}-\frac{C}{8c_{\omega B}^{2}}(1-Y^{2})^{4}+\frac{g_{\omega B}^{2}\omega_{0}^{2}}{2\lambda x_{0}^{2}}Y^{2}-\frac{g_{\sigma B}Y}{\lambda x_{0}^{3}}\bar{\psi}_{B}\psi_{B}\, \tag{5}\] Differentiating with respect to \(Y\), we have the equation of motion for the scalar field including all baryons as: \[\sum_{B}\left[(1-Y^{2})-\frac{B}{c_{\omega B}}(1-Y^{2})^{2}+\frac{C}{c_{\omega B}^{2}}(1-Y^{2})^{3}+\frac{2c_{\sigma B}c_{\omega B}\rho_{B}^{2}}{m_{B}^{2}Y^{4}}-\frac{2c_{\sigma B}\rho_{SB}}{m_{B}Y}\right]=0\, \tag{6}\] where the effective mass of the baryonic species is \(m_{B}^{\star}\equiv Ym_{B}\), and \(c_{\sigma B}\equiv g_{\sigma B}^{2}/m_{\sigma}^{2}\) and \(c_{\omega B}\equiv g_{\omega B}^{2}/m_{\omega}^{2}\) are the usual scalar and vector coupling constants, respectively. It should be noted that although \(\lambda\) does not appear explicitly in eqn. (6), its effect enters through the mass term, following equation (2), and through \(x_{0}\). For a baryon species, the scalar density (\(\rho_{SB}\)) and the baryon density (\(\rho_{B}\)) are \[\rho_{SB}=\frac{\gamma}{(2\pi)^{3}}\int_{0}^{k_{B}}\frac{m_{B}^{\star}d^{3}k}{\sqrt{k^{2}+m_{B}^{\star 2}}}, \tag{7}\] \[\rho_{B}=\frac{\gamma}{(2\pi)^{3}}\int_{0}^{k_{B}}d^{3}k, \tag{8}\] where \(k_{B}\) is the Fermi momentum of the baryon and \(\gamma=2\) is the spin degeneracy. The equation of motion for the \(\omega\) field is then calculated as \[\omega_{0}=\sum_{B}\frac{\rho_{B}}{g_{\omega B}x^{2}}\,. \tag{9}\]
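At a fixed baryon density, Eq. (6) together with Eqs. (7)-(9) reduces to a single nonlinear equation for \(Y=m_{B}^{\star}/m_{B}\). A minimal sketch of this self-consistency step, restricted to pure neutron matter, is given below; the couplings \(c_{\sigma}\), \(c_{\omega}\), \(B\) and \(C\) are illustrative placeholders only, not the parameter sets used in this work.

```python
import numpy as np
from scipy.optimize import brentq

HBARC = 197.327            # MeV fm
M_N = 939.0 / HBARC        # nucleon mass in fm^-1

# Illustrative couplings (natural units): c_sigma, c_omega and B in fm^2,
# C in fm^4 -- placeholders, not the sets of this paper.
C_SIG, C_OME, B_COEF, C_COEF = 7.0, 2.0, -10.0, 0.1

def scalar_density(kf, y, gamma=2.0):
    """rho_SB of Eq. (7) for one species with m* = y * M_N (result in fm^-3)."""
    k = np.linspace(0.0, kf, 400)
    mstar = y * M_N
    return gamma / (2.0 * np.pi ** 2) * np.trapz(mstar * k**2 / np.sqrt(k**2 + mstar**2), k)

def field_equation(y, rho_b, gamma=2.0):
    """Left-hand side of Eq. (6) restricted to pure neutron matter."""
    kf = (6.0 * np.pi ** 2 * rho_b / gamma) ** (1.0 / 3.0)
    rho_s = scalar_density(kf, y, gamma)
    return ((1.0 - y**2)
            - (B_COEF / C_OME) * (1.0 - y**2) ** 2
            + (C_COEF / C_OME**2) * (1.0 - y**2) ** 3
            + 2.0 * C_SIG * C_OME * rho_b**2 / (M_N**2 * y**4)
            - 2.0 * C_SIG * rho_s / (M_N * y))

for rho_b in (0.153, 0.306, 0.459):       # ~1, 2, 3 times saturation density
    y = brentq(field_equation, 0.05, 0.999, args=(rho_b,))
    print(f"rho_B = {rho_b:.3f} fm^-3  ->  m*/m = {y:.3f}")
```

In the full calculation the bracket is summed over all occupied baryon species and solved together with Eqs. (9)-(12) for beta-equilibrated, charge-neutral matter.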
Similarly, the equation of motion for the \\(\\rho-\\)meson is obtained as: \\[\\rho_{03}=\\sum_{B}\\frac{g_{\\rho B}}{m_{\\rho}^{2}}I_{3B}\\rho_{B}\\, \\tag{10}\\] where \\(I_{3B}\\) is the 3rd-component of the isospin of each baryon species (given in the Table 2). Traditionally, neutron stars were believed to be composed mostly of neutrons, some of which eventually \\(\\beta\\)-decay until an equilibrium between neutron, proton and electron is reached. The respective chemical potentials them. Along with charge neutrality condition, \\(n_{p}=n_{e}\\), the various particle composition is then determined and the neutron star is believed to be composed of neutrons, protons and electrons. Muons come into picture when \\(\\mu_{e}=\\mu_{\\mu}\\), which happens roughly around nuclear matter density, and the charge neutrality condition is altered to \\(\\rho_{p}=\\rho_{e}+\\rho_{\\mu}\\). Hyperons can form in neutron star cores when the nucleon chemical potential is large enough to compensate the mass differences between nucleon and hyperons, which happens roughly around two times normal nuclear matter density, when the first species of the hyperon family starts appearing. The neutron and electron chemical potentials are constrained by the requirements of conservation of total baryon number and the charge neutrality condition given by, \\[\\sum_{B}Q_{B}\\rho_{B}+\\sum_{l}Q_{l}\\rho_{l}=0, \\tag{11}\\] with \\(\\rho_{B}\\) and \\(\\rho_{l}\\) are the baryon and lepton densities respectively. These two conditions combine to determine the appearance and concentration of these particles in the dense core of compact objects. A general expression may be written down for each baryonic chemical potentials (\\(\\mu_{B}\\)) in terms of these two independent chemical potentials, i.e., \\(\\mu_{n}\\) and \\(\\mu_{e}\\) as, \\[\\mu_{B}=\\mu_{n}-Q_{B}\\mu_{e} \\tag{12}\\] where \\(\\mu_{B}\\) and \\(Q_{B}\\) are the chemical potentials and electric charge of the concerned baryon species. After achieving the solution to these conditions, one obtains the total energy density \\(\\varepsilon\\) and pressure P for a \\[\\varepsilon = \\frac{2}{\\pi^{2}}\\int_{0}^{k_{B}}k^{2}dk\\sqrt{k^{2}+m_{B}^{*2}}+\\frac {m_{B}^{2}(1-Y^{2})^{2}}{8c_{\\sigma B}}-\\frac{m_{B}^{2}B}{12c_{\\omega B}c_{\\sigma B }}(1-Y^{2})^{3} \\tag{13}\\] \\[+ \\frac{m_{B}^{2}C}{16c_{\\omega B}^{2}c_{\\sigma B}}(1-Y^{2})^{4}+ \\frac{1}{2Y^{2}}c_{\\omega B}\\rho_{B}^{2}+\\frac{1}{2}m_{\\rho}^{2}\\rho_{03}^{2} +\\frac{1}{\\pi^{2}}\\sum_{\\lambda=e,\\mu^{-}}\\int_{0}^{k_{\\lambda}}k^{2}dk\\sqrt{k ^{2}+m_{\\lambda}^{2}}\\;,\\] \\[P = \\frac{2}{3\\pi^{2}}\\int_{0}^{k_{B}}\\frac{k^{4}dk}{\\sqrt{k^{2}+m_{B} ^{*2}}}-\\frac{m_{B}^{2}(1-Y^{2})^{2}}{8c_{\\sigma B}}+\\frac{m_{B}^{2}B}{12c_{ \\omega B}c_{\\sigma B}}(1-Y^{2})^{3} \\tag{14}\\] \\[- \\frac{m_{B}^{2}C}{16c_{\\omega B}^{2}c_{\\sigma B}}(1-Y^{2})^{4}+ \\frac{1}{2Y^{2}}c_{\\omega B}\\rho_{B}^{2}+\\frac{1}{2}m_{\\rho}^{2}\\rho_{03}^{2} \\ +\\frac{1}{3\\pi^{2}}\\sum_{\\lambda=e,\\mu^{-}}\\int_{0}^{k_{\\lambda}}\\frac{k^{4}dk}{ \\sqrt{k^{2}+m_{\\lambda}^{2}}}\\] As explained earlier, the terms in eqns. (13) and (14) with the subscript \\({}^{\\prime}B^{\\prime}\\) should be interpreted as sum over the states of all baryonic octets. 
The meson field equations ((6), (9) and (10)) are then solved self-consistently at a fixed baryon density to obtain the respective fields along with the requirements of conservation of total baryon number and charge neutrality condition given in equation (12) and the energy and pressure is computed for the neutron star matter. Using the computed EOS for the neutron star sequences, we calculate the properties of neutron stars. The equations for the structure of a relativistic spherical and static star composed of a perfect fluid were derived from Einstein's equations by Oppenheimer and Volkoff [15]. They are \\[\\frac{dp}{dr}=-\\frac{G}{r}\\frac{[\\varepsilon+p]\\left[M+4\\pi r^{3}p\\right]}{(r- 2GM)}, \\tag{15}\\] \\[\\frac{dM}{dr}=4\\pi r^{2}\\varepsilon, \\tag{16}\\] with \\(G\\) as the gravitational constant and \\(M(r)\\) as the enclosed gravitational mass. We have used \\(c=1\\). Given an EOS, these equations can be integrated from the origin as an initial value problem for a given choice of central energy density, (\\(\\varepsilon_{c}\\)). The value of \\(r\\) (\\(=R\\)), where the pressure vanishes defines the surface of the star. We solve the above equations to study the structural properties of the neutron star, using the EOS derived for the electrically charge neutral hyperon rich dense matter. In order to include hyperons, one needs to specify the hyperon coupling strength, which is more or less unknown [3; 17]. The EOS at high density is very sensitive to the underlying hyperon couplings, since hyperons are the majority population at high densities and is in turn reflected in the structural properties of the compact stars. The ratio of hyperon to nucleon couplings to the meson fields are not defined by the ground state of nuclear matter, but are chosen on other grounds such as, (1) Universal coupling scheme (UC): \\(x_{\\sigma}\\)=\\(x_{\\omega}\\)=1, where the hyperons and nucleons couple to the meson fields with equal strength (2) Moszkowski coupling (MC) : \\(x_{\\sigma}\\)=\\(x_{\\omega}\\)=\\(\\sqrt{(2/3)}\\)[18], which is based on the quark sum rule approach and (3) In our present work, we take \\(x_{\\sigma}\\)=\\(g_{\\sigma H}/g_{\\sigma N}\\)=0.7, \\(x_{\\omega}\\)=\\(g_{\\omega H}/g_{\\omega N}\\)=0.783 and \\(x_{\\omega}\\) =\\(x_{\\rho}\\), to calculate the EOS for the neutron star matter and gross properties of neutron stars. Here, binding of \\(\\Lambda^{0}\\) in nuclear matter: \\((B/A)_{\\Lambda}\\)=\\(x_{\\omega}\\)\\(g_{\\omega}\\)\\(\\omega_{0}+m_{\\Lambda}^{\\star}-m_{\\Lambda}\\approx-30\\) MeV. However prescription (3) restricts the equation of state of neutron star matter following the constraint of \\(\\Lambda^{0}\\) binding in nuclear matter. The choice of \\(x_{\\sigma}<0.72\\) has been emphasized [19] and also from studies based on hypernuclear levels [20], the choice (\\(x_{\\sigma}<0.9\\)) is bounded from above. The nucleon effective mass and incompressibility strongly influence the EOS of neutron-rich and neutron star matter. Figure 1 displays the equation of state for the five parameter sets. From the figure, it is to be noted that parameter set I, II, and III (with same nucleon effective mass but different incompressibility) follows similar trend upto ten times normal nuclear matter density, and neutron Ward-Wernernststhoff The coupling strength employed here for the present calculation (PC) is closer to that of MC and is the softest among the three prescriptions. 
Thus it is conclusive that weaker hyperon coupling leads to softer equation of state because of the underlying weak repulsion in the matter. Similar feature has been noticed in works by Glendening [22] and Ellis et. al [23]. For the sake of completeness we plot in figure 4 the respective particle population of \\(n\\), \\(p\\), \\(e\\) and \\(\\mu^{-}\\) matter in beta equilibrium upto \\(10\\rho_{0}\\). Muons appear when the chemical potential of the electrons exceeds the rest mass of the muons (106 MeV), which happens roughly at around normal nuclear matter density and becomes one of the particle species in the composition. Consequently the proton fraction increases with appearance of muons in the medium to maintain charge neutrality of the matter. The proton fraction and the electron chemical potential have been found to be important in assessing the cooling rates of neutron stars [24], and the possibility of Kaon condensation in neutron star interiors [25; 26]. We refrain ourselves from further details in this direction. Figure 5, 6 and 7 displays the relative particle composition of neutron star matter for parameter sets I, II and III with all baryon octets in equilibrium rendering a charge neutral hyperon rich matter. From the plots, it is noteworthy that the difference in the incompressibilities doesn't manifest in the particle composition of the matter very much. In all three cases, the hyperons start appearing at around \\(2\\rho_{0}\\), where \\(\\Sigma^{-}\\) appears first, closely followed by \\(\\Lambda^{0}\\). However the former gets saturated because of isospin isospin isospin isospin isospin is not found for \\(\\Lambda^{0}\\), and \\(\\Sigma^{-}\\) are calculated by fitting the \\(\\Lambda^{0}\\) and \\(\\Lambda^{0}\\) values. The \\(\\Lambda^{0}\\) values are \\(\\pm 0. stant. For all the sets at \\(\\approx\\) 2 \\(\\rho_{0}\\), we see a sharp turn in the electron potential, where the first charged hyperon species, \\(\\Sigma^{-}\\) appears. At that point \\(\\mu_{e}\\) compensates the mass difference between \\(\\Sigma^{-}\\) and \\(\\Lambda^{0}\\) thereby triggering the appearance of the former. Leptons primarily maintain the same stability of the neutrino mechanism. In fact, the \\(\\Lambda^{0}\\) is not predict any charged hyperon species after \\(\\approx\\) 6 \\(\\rho_{0}\\), the electron potential remains constant thereafter. As stated, the properties of neutron star is unique to the EOS considered. Using these EOS, we now calculate some of the global properties of neutron star by solving the TOV equation. Figure 11 shows the maximum baryonic mass \\(M_{b}\\) (\\(M_{\\odot}\\)) obtained as a function of star mass for the five parameter sets. The curves for set I, II and III coincides with each other, whereas set IV and V are distinctly apart, the reason can be attributed to their different effective mass values. However the baryonic mass always exceeds the gravitational mass, which is typical of compact objects. The difference between the two is defined as the gravitational binding of the star. The baryonic masses obtained for set I, II and III are 1.83\\(M_{\\odot}\\), 1.81\\(M_{\\odot}\\) and 1.79\\(M_{\\odot}\\) respectively. Whereas sets IV (stiff) and V (soft) EOS represents the two extremes among all the parameter sets. The corresponding baryon masses obtained are 2.18\\(M_{\\odot}\\) and 1.31\\(M_{\\odot}\\). Gravitational mass of the neutron star as a function of central density of the star is plotted in Figure 12. 
Stable neutron star configurations are the regions where \\(\\frac{dM}{dec_{e}}>\\) 0. Beyond the maximum mass, gravity overcomes and results in the collapse of the star. Set I, II and III which vary in incompressibilities predicts almost same central density \\(\\approx\\) 7.9 \\(\\times\\) 10\\({}^{14}gcm^{-3}\\) for the star at maximum mass denoted by filled circles in the plot. The maximum mass obtained are 1.66\\(M_{\\odot}\\), 1.65\\(M_{\\odot}\\) and 1.63\\(M_{\\odot}\\) for set I, II and III respectively. Set IV and V predicts the maximum mass to be 1.96\\(M_{\\odot}\\) and 1.21\\(M_{\\odot}\\) respectively with corresponding central densities 7.7\\(\\times\\) 10\\({}^{14}gcm^{-3}\\) and 0.26\\(\\times\\) 10\\({}^{14}gcm^{-3}\\). Figure 11: Baryonic mass (\\(M_{\\odot}\\)) of the star as a function of Maximum mass (\\(M_{\\odot}\\)) for the five sets. Figure 10: Electron chemical potential as a function of baryon density upto 10 \\(\\rho_{0}\\). Figure 9: Relative particle population for the neutron star matter with hyperons for Set V (K=300 MeV, \\(m_{N}^{\\star}=\\)0.90 \\(m_{N}\\)). neutron star masses like \\(M_{J0751\\pm 1807}\\) =2.1\\(\\pm\\)0.2 \\(M_{\\odot}\\)[27], \\(M_{U1636\\pm 536}\\)=2.0\\(\\pm\\)0.1 \\(M_{\\odot}\\)[28], \\(M_{VelaX-1}\\)=1.86\\(\\pm\\)0.16 \\(M_{\\odot}\\)[29] and \\(M_{VelaX-2}\\)=1.78\\(\\pm\\)0.23 \\(M_{\\odot}\\)[30; 31] predicts massive stars. Our results agrees remarkably with these observed masses except for set V. Figure 13 displays the maximum mass of the neutron star as a function of the star radius. In order to calculate the radius, we included the results of Baym, Pethick and Sutherland [32] EOS at low baryonic densities. The radius predicted for the sets I, II, and III are \\(\\approx\\) 16.7 km, whereas for set IV and V, it comes out to be 17.43 and 15 km respectively. It is to be noted that in the relativistic regime, the maximum masses obtained by the non-linear walecka model (NLWM) and the quark-meson coupling model [33] are 1.90 \\(M_{\\odot}\\) and 1.98 \\(M_{\\odot}\\) respectively. The masses obtained in our calculations are in fair agreement with these calculations. In the relativistic mean field approach the properties of neutron star was studied [9] where it was pointed out that a bigger effective nucleon mass results in a low mass star but with larger radius. Our results lead to the same interpretation. However because wide range of masses and radius of neutron star being placed by different models, it is therefore important to impose constraints that can put stringent condition in the M-R plane. Constraints on the mass-radius plane can be obtained from accurate measurements of the gravitational redshift of spectral lines produced in neutron star photospheres. Measuring M/R is particularly important to constrain the EOS of dense matter. Recently a constraint to M-R plane was reported [34] based on the observations of star formation rates in the source spectrum of the IE 1207.4-5209 neutron star, which limits M/R=(0.069 - 0.115) \\(M_{\\odot}\\)/km. The region enclosed in figure 13 by two solid lines denotes the area enclosed in accordance with the observed range. All the parameter set of the present model satisfies the criterian very well. 
Another important aspect of compact objects is the observed gravitational redshift, which is given by \\[Z=\\frac{1}{\\sqrt{1-2GM/Rc^{2}}} \\tag{17}\\] The gravitational redshift interpreted by the \\(M/R\\) ratio comes out to be in the range \\(Z\\)=0.12-0.23, which is plotted in figure 14. For Set I, II and III, the redshift is nearly same because redshift primarily depends on the mass to radius ratio of the star, which in case of first three sets is nearly same. For all the parameter sets, the redshift obtained at various redshifts is Figure 12: Maximum mass of the neutron star sequences as a function of central density of the star (in \\(10^{14}gcm^{-3}\\)). Figure 13: Maximum mass of the neutron star (in solar mass) as a function of radius (in Km) for the five Sets. The two solid curves corresponds to \\(M/R=0.069\\) and \\(M/R=0.115\\). (The solid circles represent the values at maximum mass.) \\begin{table} \\begin{tabular}{c c c c c c} \\hline \\hline SET & \\(M(M_{\\odot})\\) & \\(E_{c}(10^{14}gcm^{-3})\\) & \\(R\\) (\\(Km\\)) & \\(M_{b}(M_{\\odot})\\) & \\(Z\\) \\\\ \\hline \\hline I & 1.66 & 7.90 & 16.78 & 1.83 & 0.19 \\\\ II & 1.65 & 7.99 & 16.70 & 1.81 & 0.19 \\\\ III & 1.63 & 7.99 & 16.62 & 1.79 & 0.19 \\\\ IV & 1.96 & 7.72 & 17.44 & 2.18 & 0.22 \\\\ V & 1.21 & 9.34 & 15.03 & 1.31 & 0.15 \\\\ \\hline \\end{tabular} \\end{table} Table 3: Properties of Neutron star as predicted by the model responds to \\(R/M=(8.8-14.2)\\) km/\\(M_{\\odot}\\). Our calculations predicts \\(R/M\\) in the range \\((8.90-12.40)\\) km/\\(M_{\\odot}\\), which is consistent with the observed value. The predictive power of the model is evident from figure 14, where we compare the gravitational redshift as a function of the star mass for the five parameter sets. The overall results of our calculation are presented in Table 3. ## IV Summary and Outlook We studied the equation of state of high density matter in an effective model and calculated the gross properties for neutron stars like mass, radius, central density and redshift. We analysed five set of parameters with incompressibility values \\(K\\)=210, 300 and 380 MeV and effective masses \\(m^{\\star}=\\)0.80, 0.85 and 0.90 \\(m_{n}\\), that satisfies the nuclear matter saturation properties. The results are then compared with some recent observations and also a few field theoretical models. It was found that the difference in nuclear incompressibility is not much reflected in either equation of state or neutron star properties, but nucleon effective masses were quite decisive. At maximum mass, the central density of the star for sets I, II, III and IV was found to be \\(\\approx 3\\)\\(\\rho_{nm}\\) (nuclear matter density) but for set V, it was found to be \\(\\approx 3.5\\)\\(\\rho_{nm}\\), which has the highest effective mass value. Similarly the maximum mass obtained for the the five EOS lies in the range 1.21-1.96\\(M_{\\odot}\\). Set V, which is softest among all parameter sets, predicts lowest maximum mass \\(1.21M_{\\odot}\\), whereas set IV (stiff) predicts the maximum mass to be \\(1.96M_{\\odot}\\) and also is the star with the largest radius. The difference in maximum mass and radius of the star in case of set I, II and III is negligible, and so the predicted redshift comes out nearly same, whereas set IV and V presents the two extremes in overall properties, which is the reflection of their different effective mass values. 
Overall, mass predicted by all the parameter sets agree well with most of the theoretical work and observational limits. The results were also found to be in good agreement with recently imposed constrains on neutron star properties in the M-R plane, and the redshift interpreted therein. Further, the precise measurements of mass of both neutron stars in case of PSR B1913+16 [35], PSR B1534+12 [36] and PSR B2127+11C [37] are available which can put constrains on the nuclear equation of state. Masses of neutron stars in X-ray pulsars are also consistent with these values, although are measured less accurately. In case of radii, the values are still unknown, however some estimates are expected in a few years, which would further constrain the EOS of neutron star in the M-R plane. In future, we intend to study the effect of rotation to neutron star structure and also the phase transition aspects in the model. It is worth mentioning that the density-dependent meson-nucleon couplings is very much successful in non-linear Walecka model[38] and similar work in this direction would be interesting. ###### Acknowledgements. One of us TKJ would like to thank facilities and hospitality provided by Institute of Physics, Bhubaneswar where a major part of the work was done. This work was supported by R/P, under DAE-BRNS, grant no 2003/37/14/BRNS/669. ## References * (1) S.L. Shapiro and S.A. Teukolski, _Black holes, white dwarfs, and Neutron stars_ (Wiley, New York, 1983). * (2) N.K. Glendening, Phys. Lett. **B114**, 392 (1982); N. K. Glendening, Astrophys. J. **293**, 470 (1985); N. K. Glendening, Z. Phys. **A 326**, 57 (1987). * (3) M. Prakash, I. Bombaci, M. Prakash, P.J. Ellis, J.M. Lattimer and R. Knorren, Phys. Rep. **280**, 1 (1997). * (4) J. Schaffner-Beilich and I.N. Mishustin, Phys. Rev. **C 53**, 1416 (1996). * (5) A. Akmal, V.R. Pandharipande and D.G. Ravenhall, Phys. Rev. **C 58**, 1804 (1998). * (6) R.B. Wiringa, V. Fiks and A. Fabrocini, Phys. Rev. **C 38**, 1010 (1988). * (7) J.D. Walecka, Ann. Phys. **83**, 491 (1974). Figure 14: Gravitational Redshift (Z) as a function of Maximum mass of the neutron star for the five parameter sets. (The solid circles represent the values at maximum mass). The area between solid horizontal lines represents the redshift values \\(Z=(0.12-0.23)\\)[34]. * (8) A. Lang, B. Blattel, W. Cassing, V. Koch, U. Mosel and K. Weber, Z. Phys. **A 340**, 207 (1991). * (9) N. K. Glendening, F. Weber and S.A. Moszkowski, Phys. Rev. **C 45**, 844 (1992). * (10) H. Heiselberg and M. Hjorth-Jensen, Astro. J. Lett. **525**, L45 (1999). * (11) T. Sil, S.K. Patra, B.K. Sharma, M. Centelles and X. Vinas, Phys. Rev. **C 69**, 044315 (2004); M. Del Estal, M. Centelles, X. Vinas, and S.K. Patra, Phys. Rev. **C 63**, 044321 (2001); S. K. Patra, M. Del Estal, M. Centelles and X. Vinas, Phys. Rev. **C 63**, 024311 (2001). * (12) P. Arumugam, B.K. Sharma, P.K. Sahu, S. K. Patra, T. Sil, M. Centelles and X. Vinas, Phys. Lett. **B 601**, 51 (2004); P.K. Panda, A. Mishra, J.M. Eisenberg and W. Greiner, Phys. Rev. **C56**, 3134 (1997). * (13) J. Boguta and A.R. Bodmer, Nucl. Phys. **A 292**, 413 (1977). * (14) P. Danielewicz, R. Lacey, W.G. Lynch, Science **298**, 1592 (2002). * (15) J.R. Oppenheimer and G.M. Volkoff, Phys. Rev **55**, 374 (1939); R.C. Tolman, Phys. Rev **55**, 364 (1939). * (16) P. Moller, W.D. Myers, W.J. Swiatecki and J. Treiner, At. Data Nucl. Data Tables **39**, 225 (1988). * (17) N.K. Glendening Phys. Rev. **C 64**, 025801 (2001). * (18) S.A. Moszkowski, Phys. Rev. 
**D 9**, 1613 (1974). * (19) N.K. Glendening and S.A. Moszkowski, Phys. Rev. Lett. **67**, 2414 (1991). * (20) M. Rufa, J. Schaffner, J. Maruhn, H. Stocker, W. Greiner and P. G. Reinhard, Phys. Rev. **C 42**, 2469 (1990). * (21) A. Mishra, P.K. Panda and W. Greiner, J. Phys. **G 28**, 67 (2002). * (22) N.K. Glendening, Nucl. Phys. **A 493**, 521 (1989). * (23) J. Ellis, J.I. Kapusta and K.A. Olive, Nucl. Phys. **B 348**, 345 (1991). * (24) C.J. Pethick, Rev. Mod. Phys. **64**, 1133 (1992). * (25) G.E. Brown, C.H. Lee, M. Rho and V. Thorsson, Nucl Phys. **A 567**, 937 (1994). * (26) V.R. Pandharipande, C.J. Pethick and V. Thorsson, Rev. Lett. **75**, 4567 (1995). * (27) D.J. Nice, E.M. Spalver, I.H. Stairs, O. Loehmer, A. Jessner, M. Kramer and J.M. Cordes, Astrophys. J. **634** 1242 (2005). * (28) D. Barret, J.F. Olive and M.C. Miller, _astro-ph/0605486_. * (29) O. Barziv, L. Kaper, M.H. van Kerkwijk, J.H. Telting and J. van Paradijs, Astron. & Astrophys. **377** 925 (2001). * (30) J. Casares, P.A. Charles and E. Kuulkers, Astro. J. **493** L39 (1998). * (31) J.A. Orosz and E. Kuulkers, Mon. Not. R. Astron. Soc. **305** 132 (1999). * (32) G. Baym, C. Pethick and P. Sutherland, Astrophys. J. **170**, 299 (1971). * (33) P.K. Panda, D.P. Menezes, C. Providencia, Phys.Rev. **C69**, 025207 (2004); P.K. Panda, D.P. Menezes, C. Providencia, Phys.Rev. **C69**, 058801 (2004); D.P. Menezes, P.K. Panda and C. Providencia, Phys. Rev. **C 72**, 035802 (2005). * (34) D. Sanyal, G.G. Pavlov, V.E. Zavlin and M.A. Teter, Astrophys. J. **574** L61 (2002). * (35) J.H. Taylor and J.M. Weisberg, Astrophys. J. **345**, 434 (1989). * (36) A. Wolsczan, Nature **350**, 688 (1991). * (37) W.T.S. Deich, S.R. Kulkarni in _Compact Stars in Binaries_, J. van Paradijs, E.P.J. van den Heuvel and E. Kuulkers eds., Dordrecht, Kluwer (1996). * (38) T. Niksic, D. Vretenar, P. Finelli and P. Ring, Phys. Rev. **C66**, 024306 (2002).
We study the equation of state (EOS) for dense matter in the core of compact stars, including hyperons, and calculate the star structure in an effective model in the mean-field approach. With varying incompressibility and effective nucleon mass, we analyse the resulting EOS with hyperons in beta equilibrium and its underlying effect on the gross properties of the compact star sequences. The results obtained in our analysis are compared with predictions of other theoretical models and with observations. The maximum mass of the compact star lies in the range \(1.21-1.96\ M_{\odot}\) for the different EOS obtained in the model.

pacs: 21.65.+f, 13.75.Cs, 97.60.Jd, 21.30.Fe, 25.75.-q, 26.60.+c
arxiv-format/0608091v1.md
# On-line topological simplification of weighted graphs Floris Geerts \\({}^{1,3}\\) Peter Revesz\\({}^{2}\\) Work done while on a sabbatical leave from the University of Nebraska-Lincoln. Work supported in part by USA NSF grants IRI-9625055 and IRI-9632871. Jan Van den Bussche\\({}^{1}\\) \\({}^{1}\\) Hasselt University/Transnational University Limburg \\({}^{2}\\) University of Nebraska-Lincoln \\({}^{3}\\) University of Edinburgh ## 1 Introduction Many GIS applications involve data in the form of a network, such as road, railway, or river networks. It is common to represent network data in the form of so-called _polylines_. A polyline consists of a sequence of consecutive straight-line segments of variable length. Polylines allow for the modeling of both straight lines and curved lines. A point on a polyline in which exactly two straight-line segments meet, is called a _regular_ point. Regular points are important for the modeling of curved lines. Indeed, to represent accurately a curved line by a polyline, one needs to use many regular points. Curved lines often occur in river networks, or in road networks over hilly terrain. We illustrate this in Figure 1 in which we show a part of the road network in the Ardennes (Belgium). In this hilly region, many bended roads occur. As can be seen in the Figure, there is an abundance of regular points -- which is often the case in real network maps [14]. Although regular points are necessary to model the reality accurately, for many applications they can be disregarded. More specifically, for topological queries such as path queries, one can \"topologically simplify\" the network by eliminating all regular points; and answer the query (more efficiently) on the much simplified network. Even when the network contains distance information, one still can topologically simplify the network, but maintain the distance information, as we will show in the present paper. More generally, we work with arbitrary weight information. Thus, the simplification of a network contains in a compact manner the same topological and distance information as the original network. Such \"lossless topological representations\" have been studied by a number of researchers [7, 10, 11]. For example, initial experiments reported by Segoufin and Vianu have shown drastic compression of the size of the data by topological simplification. (The inclusion of distance information is new to the present paper.) Of course, if we want to answer queries using the simplified network instead of the original one, we are faced with the problem of on-line maintenance of the simplified network under updates to the original one. This problem is important due to the dynamic character of certain network data. For example, suppose that there is a huge snowstorm which makes all roads unusable. As a result, many snow clearing crews are sent to all parts of the city. They continuously report back to a central station the road segments that they have cleared. The central station also continuously updates its map of the usable network of roads. Moreover, big arteries are cleared first, and therefore, the usable network will have a high percentage of regular vertices in the initial stages. While the snow is being cleared, thousands of people may query the database of the central station to find out what is the shortest path they can take using the already cleared roads. Analogous applications requiring on-line monitoring involve traffic jams in road networks, or downlinks in computer networks. 
Two of us have reported on an initial investigation of this problem [5]. The result was a maintenance algorithm that was _fully-dynamic_, i.e., insertions and deletions of edges and vertices are allowed. This algorithm, however, is (in certain "worst cases") not any better than redoing the simplification from scratch after every update, resulting in an \\(O(n^{2})\\) time algorithm, where \\(n\\) is the number of updates. This is clearly not very practical. The present paper proposes two very different algorithms for on-line topological simplification: 1. **Renumbering Algorithm**, which relies on the numbering and renumbering of the regular vertices, takes, on average, only _logarithmic_ time per edge insertion to keep the simplified network up-to-date; and 2. **Topology Tree Algorithm**, which is based on the topology tree data structure of Frederickson [3] and has the same time complexity \\(O(n\\log(n))\\). Neither algorithm makes any assumptions on the graph, such as planarity and the like. Real-life network data is often _not_ planar (e.g., in a road or railway network, bridges occur). The presented algorithms are only _semi-dynamic_, in that they can react efficiently to insertions (of vertices and edges), but not to deletions. Insertions are sufficient for many applications (such as the snow clearing mentioned above, where simply more and more road segments become available again), but for applications also requiring deletion, the Topology Tree Algorithm can easily be extended to react correctly to edge deletions as well. We have performed an empirical comparison of the Renumbering Algorithm and the Topology Tree Algorithm using random inputs, non-random inputs, and two real data sets. This paper is further organized as follows. Basic definitions are given in Section 2. A general description of on-line simplification is given in Section 3. In Section 4 we describe the Renumbering Algorithm, and in Section 5 the Topology Tree Algorithm. The empirical comparison of both algorithms is presented in Section 6. ## 2 Basic Definitions Consider an undirected graph without self-loops \\(G=(V,E,\\lambda)\\) with weighted edges; the weights of the edges are given by a mapping \\(\\lambda:E\\rightarrow\\mathbf{R}^{+}\\). We will use the following definitions: 1. A vertex \\(v\\) is _regular_ if and only if it is adjacent to precisely two edges. 2. A vertex that is not regular is called _singular_. 3. A path between two singular vertices that passes only through regular vertices is called a _regular path_. We assume that the graph \\(G\\) does not contain regular cycles: cycles consisting of regular vertices only. The _simplification_ \\(G_{s}=(V_{s},E_{s},\\lambda_{s})\\) of \\(G\\) is a multigraph with self-loops and weighted edges, which is obtained as follows (see Figure 1): 1. \\(V_{s}\\), the set of nodes of \\(G_{s}\\), consists of all singular vertices of \\(G\\). 2. \\(E_{s}\\), the set of edges of \\(G_{s}\\), formally consists of all regular paths of \\(G\\). Every regular path between two singular vertices \\(v\\) and \\(w\\) represents a _topological edge_ in \\(G_{s}\\) between \\(v\\) and \\(w\\). There might be multiple regular paths between two singular vertices, so in general \\(G_{s}\\) is a multigraph. 3. The weight \\(\\lambda_{s}(e)\\) of a topological edge \\(e\\) is equal to the sum of all weights of edges on the regular path corresponding to \\(e\\).
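Purely as an illustration of the definitions above (the authors' own implementation is in C++ on top of LEDA, see Section 6), the following Python sketch computes the simplification \\(G_{s}\\) from scratch by walking every regular path between singular vertices and summing the edge weights along it; all function and variable names are ours, not the paper's.

```python
from collections import defaultdict

def simplify(vertices, edges, weight):
    """Compute the simplification G_s of an undirected weighted graph G.

    `edges` is an iterable of frozenset({u, v}) pairs and `weight` maps each
    such edge to a positive weight.  G is assumed to have no self-loops and
    no regular cycles, as in Section 2.  Returns (V_s, E_s), where V_s is the
    list of singular vertices and E_s a list of topological edges
    (v, w, summed_weight); E_s may contain parallel edges and self-loops.
    """
    adj = defaultdict(set)
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)

    def is_regular(v):
        return len(adj[v]) == 2                 # adjacent to exactly two edges

    singular = [v for v in vertices if not is_regular(v)]
    topo_edges, seen = [], set()                # `seen` = edges already absorbed
    for s in singular:
        for first in adj[s]:
            start = frozenset((s, first))
            if start in seen:                   # path already walked from its other end
                continue
            seen.add(start)
            total, prev, cur = weight[start], s, first
            while is_regular(cur):              # follow the regular path
                nxt = next(n for n in adj[cur] if n != prev)
                step = frozenset((cur, nxt))
                seen.add(step)
                total += weight[step]
                prev, cur = cur, nxt
            topo_edges.append((s, cur, total))  # cur is again singular
    return singular, topo_edges
```

Such a from-scratch pass costs time linear in the size of \\(G\\); the point of the two on-line algorithms of this paper is precisely to avoid paying that cost after every update.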
In the following, when a particular regular path \\(e\\) between two singular vertices \\(v\\) and \\(w\\) is clear from the context, we will abuse notation and conveniently denote the topological edge \\(e\\) by \\(\\{v,w\\}\\). ## 3 Online Simplification: General Description We consider only insertions of a new isolated vertex and insertions of edges between existing vertices in the graph \\(G\\) (other more complex insertion operations can be translated into a sequence of these basic insertion operations). The insertion of an isolated vertex is handled trivially, i.e., we insert it into \\(V_{s}\\). For the insertion of an edge we distinguish between six cases that are explained below. The left side of each figure shows the situation before the insertion of the edge \\(\\{x,y\\}\\), drawn as the dotted line, and the right side shows the situation after the insertion. The topological edges are drawn in thick lines. **Case 1**: Vertices \\(x\\) and \\(y\\) are both singular and \\(\\deg(x)\ eq 1\\) and \\(\\deg(y)\ eq 1\\). Then the edge \\(\\{x,y\\}\\) is also _inserted_ in \\(G_{s}\\). **Case 2**: Vertices \\(x\\) and \\(y\\) are both singular and one of them, say \\(x\\), has degree one. Let \\(\\{z,x\\}\\) be the edge in \\(G_{s}\\) adjacent to \\(x\\). _Extend_ this edge the new edge \\(\\{z,y\\}\\) in \\(G_{s}\\), putting \\(\\lambda_{s}(\\{z,y\\}):=\\lambda_{s}(\\{z,x\\})+\\lambda(\\{x,y\\})\\). Note that \\(x\\) becomes a regular vertex after the insertion. **Case 3**: Vertices \\(x\\) and \\(y\\) are both singular and \\(\\deg(x)=\\deg(y)=1\\). Let \\(\\{z_{1},x\\}\\) (\\(\\{z_{2},y\\}\\)) be the edge in \\(G_{s}\\) adjacent with \\(x\\) (\\(y\\)). (Since we disallow regular cycles in \\(G\\), we have \\(z_{1}\ eq y\\) and \\(z_{2}\ eq x\\).) Then _merge_ the edges \\(\\{z_{1},x\\}\\) and \\(\\{y,z_{2}\\}\\) in \\(G_{s}\\) into a single, new edge \\(\\{z_{1},z_{2}\\}\\) in \\(G_{s}\\), putting \\(\\lambda_{s}(\\{z_{1},z_{2}\\}):=\\lambda_{s}(\\{z_{1},x\\})+\\lambda_{s}(\\{y,z_{2} \\})+\\lambda(\\{x,y\\})\\). **Case 4**: One of the vertices \\(x\\) and \\(y\\) is regular, say \\(x\\), and the other vertex, \\(y\\), is singular and has degree one. First, the edge \\(\\{z_{1},z_{2}\\}\\) of \\(G_{s}\\) which corresponds to the regular path between \\(z_{1}\\) and \\(z_{2}\\) on which \\(x\\) lies, must be _split_ into two new edges \\(\\{z_{1},x\\}\\) and \\(\\{x,z_{2}\\}\\) of \\(G_{s}\\). Here, we put \\(\\lambda_{s}(\\{z_{1},x\\}):=\\sum\\lambda(\\{u,v\\})\\), where the summation is over all edges in \\(G\\) on the regular path from \\(z_{1}\\) to \\(x\\). We similarly define \\(\\lambda_{s}(\\{x,z_{2}\\})\\). Secondly, let \\(\\{z_{3},y\\}\\) be the edge in \\(G_{s}\\) adjacent to \\(y\\). Then we _extend_ this edge to a new edge \\(\\{z_{3},x\\}\\) in \\(G_{s}\\), putting, \\(\\lambda_{s}(\\{x,z_{3}\\}):=\\lambda_{s}(\\{y,z_{3}\\})+\\lambda(\\{x,y\\})\\). A special subcase occurs when \\(z_{1}=z_{2}\\). In that case, the two paths from \\(x\\) to \\(z_{1}\\) give rise to two different edges from \\(x\\) to \\(z_{1}\\) in \\(G_{s}\\) (recall that \\(G_{s}\\) was defined as a multigraph). **Case 5**: One of the vertices, say \\(x\\), is regular and the other one, \\(y\\), is singular with degree not equal to one. Then we split exactly as in case 4, and now we also insert \\(\\{x,y\\}\\) as a new edge in \\(G_{s}\\). **Case 6**: Both \\(x\\) and \\(y\\) are regular. Now, two _splits_ must be performed. 
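To make the case analysis above concrete, here is a small, hypothetical Python helper that only classifies an insertion into one of the six cases from the degrees of the endpoints before the insertion; the actual updates to \\(G_{s}\\) (insert, extend, merge, split) are not implemented here.

```python
def classify_insertion(deg, x, y):
    """Return the case number (1-6) from Section 3 for inserting edge {x, y}.

    `deg[v]` is the degree of v in G *before* the insertion; v is regular
    iff deg[v] == 2.  Only the dispatch is sketched, not the updates that
    each case performs on G_s.
    """
    rx, ry = deg[x] == 2, deg[y] == 2
    if not rx and not ry:                          # both endpoints singular
        if deg[x] != 1 and deg[y] != 1:
            return 1                               # insert edge into G_s
        if deg[x] == 1 and deg[y] == 1:
            return 3                               # merge two topological edges
        return 2                                   # extend one topological edge
    if rx and ry:
        return 6                                   # both regular: two splits
    singular_end = y if rx else x                  # exactly one endpoint regular
    return 4 if deg[singular_end] == 1 else 5      # split, then extend or insert
```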
As can be seen in the above description, if no regular vertices are involved, then the update on the graph \\(G\\) translates in a straightforward way to an update on the simplification \\(G_{s}\\). It is only in cases 4, 5, and 6, that the update on the graph \\(G\\) involves vertices which _have no counterpart_ in the simplification \\(G_{s}\\). In these cases, we need to find the edge to split and the weights of the topological edges created by the split. Consequently, the problem of maintaining the simplification \\(G_{s}\\) of a graph \\(G\\) amounts to two tasks: * Maintain a function _find topological edge_, which takes a regular vertex as input, and outputs the topological edge whose corresponding regular path in \\(G\\) contains the input vertex. * Maintain a function _find weights_ which outputs the weights of the edges created when a topological edge is split at the input vertex. In an earlier, naive approach [5], we only discussed the function _find topological edge_. It worked by storing for each regular vertex a direct pointer to its topological edge. This made the topological edge accessible in constant time, but the maintenance of the pointers under updates can be very inefficient in the worst case. We next describe two algorithms which are more efficient. Both algorithms keep the simplification of a graph up-to-date when the graph is subject to edge insertions only. ## 4 Online Simplification: Renumbering Algorithm In this section we introduce an algorithm for keeping the simplification of a graph up-to-date when this graph is subject to edge insertions. We first show how the topological edges can be found efficiently. ### Assigning numbers to the regular vertices We number the regular vertices, that lie on a regular path, consecutively. The numbers of the regular vertices on any regular path will always form an interval of the natural numbers. The Renumbering Algorithm will maintain two properties: **Interval property:**: the assignment of _consecutive_ numbers to _consecutive_ regular points; **Disjointness property:**: _different_ regular paths have _disjoint_ intervals. We then have a unique interval associated with each regular path, and hence with each topological edge of size \\(>0\\). Moreover, we choose the minimum of such an interval as a unique number associated with a topological edge. Specifically, the minimal number serves as a _key_ in a _dictionary_. Recall that in general, a dictionary consists of pairs \\(\\langle\\operatorname{key},\\operatorname{item}\\rangle\\), where the item is unique for each key. Given a number \\(k\\), the function which returns the item with the maximal key smaller than \\(k\\) can be implemented in \\(O(\\log N)\\) time, where \\(N\\) is the number of items in the dictionary [1]. The items we use contain the following information. 1. An identifier of the topological edge associated with the key. 2. The number of regular vertices on the regular path corresponding to this topological edge. 3. An identifier of the regular vertex that has the key as number on this path. In Figure 2 we give an example of a dictionary containing three keys, corresponding to the three topological edges in the simplification \\(G_{s}\\) of the graph \\(G\\). ### Maintaining the numbers of the regular vertices We must now show how to maintain this numbering under updates, such that the interval and disjointness properties mentioned above remain satisfied. Actually, only in case 3 in Section 3 we need to do some maintenance work on the numbering. 
Indeed, by merging two topological edges, the numbering of the regular vertices is no longer necessarily consecutive. We resolve this by _renumbering_ the vertices on the shorter of the two regular paths. Note that the size of a regular path is stored in the dictionary item for that path. In order to keep the intervals disjoint, we must assume that the maximal number of edge insertions to which we need to respond is known in advance. Concretely, let us assume that we have to react to at most \\(\\ell\\) update operations. This assumption is rather harmless. Indeed, one can set this maximum limit to a large number. If it is eventually reached, we restart from scratch. Figure 2: Dictionary example. A regular path is "born" with at most two regular vertices on it. Every time a new regular path is created, say the \\(k\\)th time, we assign the number \\(2k\\ell\\) to one of the two regular vertices on it. Hence, newly created topological edges correspond to numbers which are \\(2\\ell\\) apart from each other. Since a newly created topological edge can become at most \\(\\ell-1\\) vertices longer, no interference is possible. ### Finding the topological edge Consider that we are in one of the cases 4-6 described in Section 3, where we have to split the topological edge at vertex \\(x\\). We look at the number of \\(x\\), say \\(k\\), and find in the dictionary the item associated with the maximal key smaller than \\(k\\). This key corresponds to the interval to which \\(k\\) belongs, or equivalently, to the regular path to which \\(x\\) belongs. In this way we find the topological edge which has to be split, since this edge is identified in the returned item. The numbering thus enables us to find an edge in \\(O(\\log m^{\\prime})\\) time, where \\(m^{\\prime}\\) is the number of edges in \\(G_{s}\\) which correspond to a regular path passing through at least one regular vertex. Because \\(m^{\\prime}\\) is at most \\(m\\), the number of edges in \\(G\\), we obtain: **Proposition 4.1**.: _Given a regular vertex and its number, the dictionary returns in \\(O(\\log m)\\) time the topological edge corresponding to the regular path on which this regular vertex lies._ We next show how, when a topological edge is split, we can quickly find the weights of the two new edges created by the split. ### Assigning weights to the regular vertices The _weight_ of a regular vertex \\(v\\) will be denoted by \\(\\lambda^{*}(v)\\). Weights will be assigned to the regular vertices such that if \\(v\\) and \\(w\\) are two consecutive regular vertices with weights \\(\\lambda^{*}(v)\\) and \\(\\lambda^{*}(w)\\) respectively, then \\(\\lambda(\\{v,w\\})=|\\lambda^{*}(v)-\\lambda^{*}(w)|\\). ### Maintaining the weights of regular vertices The maintenance of the weights of regular vertices under edge insertions is easy. It requires only constant time when a topological edge is extended. Indeed, let \\(\\{x,y\\}\\) be a topological edge, and suppose that we extend this edge by inserting \\(\\{y,z\\}\\). Let \\(u\\) be the regular vertex adjacent to \\(y\\). Then, * if \\(\\lambda^{*}(u)<0\\), then \\(\\lambda^{*}(y):=\\lambda^{*}(u)-\\lambda(\\{u,y\\})\\). * if \\(\\lambda^{*}(u)\\geqslant 0\\), and no regular vertex with a positive weight is adjacent to \\(u\\), then \\(\\lambda^{*}(y):=\\lambda^{*}(u)+\\lambda(\\{u,y\\})\\). Otherwise, let \\(v\\) be the regular vertex adjacent to \\(u\\).
If \\(\\lambda^{*}(v)>\\lambda^{*}(u)\\), then let \\(\\lambda^{*}(y)=\\lambda^{*}(u)-\\lambda(\\{u,y\\})\\), else let \\(\\lambda^{*}(y)=\\lambda^{*}(u)+\\lambda(\\{u,y\\})\\). When a topological edge is split, no adjustment to the weights of the remaining regular vertices is needed at all. However, when two topological edges are merged we need to adjust the weights of the regular vertices on the shorter of the two regular paths, as shown in Figure 3. Figure 3: Assigning new numbers and weights of regular vertices simultaneously when two topological edges are merged. The numbers of regular vertices are in bold, the weights are inside the vertices. This adjustment of the weights can clearly be done simultaneously with the renumbering of the vertices. ### Finding the weights The weights of regular vertices now enable us to find the weights of the two edges created by a split of a topological edge in logarithmic time. Indeed, given the number of the regular vertex where the split occurs, we search in the dictionary which topological edge needs to be split; call it \\(\\{z_{1},z_{2}\\}\\). In the returned item we find the vertex which has the minimal number of the vertices on the regular path corresponding to \\(\\{z_{1},z_{2}\\}\\). Denote this vertex by \\(u\\); it is adjacent to either \\(z_{1}\\) or \\(z_{2}\\). We assume that \\(u\\) is adjacent to \\(z_{1}\\), the other case being analogous. The weights of the two new topological edges \\(\\{z_{1},x\\}\\) and \\(\\{x,z_{2}\\}\\) can be computed easily: * \\(\\lambda(\\{z_{1},x\\}):=\\lambda(\\{z_{1},u\\})+|\\lambda^{*}(u)-\\lambda^{*}(x)|\\); and * \\(\\lambda(\\{x,z_{2}\\}):=\\lambda(\\{z_{1},z_{2}\\})-\\lambda(\\{z_{1},x\\})\\). If only one regular vertex remains on a regular path after a split, or a regular vertex becomes singular, then the weight of this vertex is set to \\(0\\). This can all be done in constant time, after the topological edge which needs to be split has been looked up in the dictionary. ### Complexity analysis By the _amortized complexity_ of an on-line algorithm [13, 8], we mean the total computational complexity of supporting \\(\\ell\\) updates (starting from the empty graph), as a function of \\(\\ell\\), divided by \\(\\ell\\) to get the average time spent on supporting one single update. We will prove here that the Renumbering Algorithm has \\(O(\\log\\ell)\\) amortized time complexity. We only count edge insertions because the insertion of an isolated vertex has zero cost. **Theorem 4.1**.: _The total time spent on \\(\\ell\\) updates by the Renumbering Algorithm is \\(O(\\ell\\log\\ell)\\)._ Proof.: If we look at the general description of the Renumbering Algorithm, we see that in each case only a constant number of steps are performed. These are either elementary operations on the graph, or dictionary lookups. There is however one important exception to this. In cases where we need to merge two topological edges, the renumbering of regular vertices (and simultaneous adjustment of their weights) is needed. Since every elementary operation on the graph and every dictionary lookup takes at most \\(O(\\log\\ell)\\) time, all that remains to prove is that the total number of vertex renumberings is \\(O(\\ell\\log\\ell)\\). A key concept in our proof is the notion of a _super edge_ (see Figure 4). Figure 4: An example of some super edges (dotted lines). Super edges are sets of topological edges which can be defined inductively: initially each topological edge (with one or two regular vertices on it) is a member of a separate super edge.
If a member \\(\\mathbf{a}\\) of a super edge \\(\\mathbf{A}\\) is merged with a member \\(\\mathbf{b}\\) of another super edge \\(\\mathbf{B}\\), then the two super edges are unioned together in a new super edge \\(\\mathbf{C}\\) and \\(\\mathbf{a}\\) and \\(\\mathbf{b}\\) are merged into a new member \\(\\mathbf{c}\\) of the new super edge \\(\\mathbf{C}\\). If a member \\(\\mathbf{d}\\) of a super edge is split into \\(\\mathbf{e}\\) and \\(\\mathbf{f}\\), then both \\(\\mathbf{e}\\) and \\(\\mathbf{f}\\) will belong to the same super edge as \\(\\mathbf{d}\\) did. The important property of super edges is that the total number of vertices in a super edge can only grow. We call this number the _size_ of a super edge. A split operation does not affect the size of super edges, while merge operations can only increase it. It now suffices to show that the total number of vertex renumberings in a super edge of size \\(\\ell\\) is at most \\(\\ell\\log\\ell\\). We will do this by induction. The statement is trivial for \\(\\ell=0\\), so we take \\(\\ell>0\\). We may assume that the \\(\\ell\\)th update involves a merge of two topological edges, since this is the only update for which we have to do renumbering. Suppose that the sizes of the two super edges being unioned are \\(\\ell_{1}\\) and \\(\\ell_{2}\\). Without loss of generality assume that \\(\\ell_{1}\\leq\\ell_{2}\\). Hence, according to the Renumbering Algorithm, which renumbers the shorter of the two, we have to do \\(2\\ell_{1}\\) renumbering steps: \\(\\ell_{1}\\) to assign new numbers, and \\(\\ell_{1}\\) to assign new weights. The size of the new super edge will be \\(\\ell=\\ell_{1}+\\ell_{2}\\). By the induction hypothesis, the total numbers of renumberings already done while building the two given super edges are at most \\(\\ell_{1}\\log\\ell_{1}\\) and \\(\\ell_{2}\\log\\ell_{2}\\), respectively. It is known ([12]) that \\[2\\min\\{x,1-x\\}\\leqslant\\ x\\log\\frac{1}{x}+(1-x)\\log\\frac{1}{1-x}, \\tag{1}\\] for \\(x\\in[0,1]\\). Define \\(x=\\ell_{1}/\\ell\\). By (1), we then obtain the inequality \\[\\ell_{1}\\log\\ell_{1}+\\ell_{2}\\log\\ell_{2}+2\\ell_{1} \\leqslant \\ell\\log\\ell,\\] as had to be shown. To conclude this section, we recall from Section 4.2 that the maximal number assigned to a regular vertex is \\(2\\ell^{2}\\). So, all numbers involved in the Renumbering Algorithm take only \\(O(\\log\\ell)\\) bits in memory. Theorem 4.1 assumes the standard RAM computation model with unit costs. If logarithmic costs are desired, the total time is \\(O(\\ell\\log^{2}\\ell)\\). ## 5 Online Simplification: Topology Tree Algorithm In this section we introduce another algorithm for keeping the simplification of a graph up-to-date when this graph is subject to edge insertions. We only describe the case of edge insertion, but it is straightforward to extend the Topology Tree Algorithm to a fully dynamic algorithm, which can also react to deletions. The algorithm uses a direct adaptation of the topology-tree data structure introduced by Frederickson [3, 4]. This data structure has been used extensively in other partially and fully dynamic algorithms [6]. We first show how the topological edge can be found efficiently. ### Regular multilevel partition We define a _cluster_ as a set of vertices. The _size_ of a cluster is the number of vertices it contains. A _regular cluster_ is a cluster of size at most two, containing adjacent regular vertices.
A _regular partition_ of a graph \\(G\\) is a partition of the set \\(V_{r}\\) of regular vertices, such that for any two adjacent regular vertices \\(v\\) and \\(w\\), the following holds: * either \\(v\\) and \\(w\\) are in the same regular cluster \\(\\mathcal{C}\\); or * \\(v\\) and \\(w\\) are in different regular clusters \\(\\mathcal{C}_{v}\\) and \\(\\mathcal{C}_{w}\\), and at least one of these regular clusters has size two. A _regular multilevel partition_ of a graph \\(G\\) is a set of partitions of \\(V_{r}\\) that satisfy the following (see Figure 5): 1. For each level \\(i=0,1,\\ldots,k\\), the clusters at level \\(i\\) form a partition of \\(V_{r}\\). 2. The clusters at level \\(0\\) form a regular partition of \\(V_{r}\\). 3. The clusters at level \\(i\\) form a regular partition when viewing each cluster at level \\(i-1\\) as a regular vertex. A _regular forest_ of a graph \\(G\\) is a forest based on a regular multilevel partition of \\(G\\). We focus on the construction of a single tree in the forest corresponding to a single regular path. A single tree is constructed as follows (see Figure 6). 1. A vertex at level \\(i\\) in the tree represents a cluster at level \\(i\\) in the regular multilevel partition. 2. A vertex at level \\(i>0\\) has children that represent the clusters at level \\(i-1\\) whose union is the cluster it represents. The height of a topology tree is logarithmic in the number of regular vertices in the leafs [3]. We also store adjacency information for the clusters. Two regular clusters \\(\\mathcal{C}\\) and \\(\\mathcal{C}^{\\prime}\\) at level \\(0\\) are _adjacent_, if there exists a vertex \\(v\\in\\mathcal{C}\\) and a vertex \\(w\\in\\mathcal{C}^{\\prime}\\) such that \\(v\\) and \\(w\\) are adjacent in \\(G\\). We call two clusters \\(\\mathcal{C}\\) and \\(\\mathcal{C}^{\\prime}\\) at level \\(i\\) adjacent, if they have adjacent children. A regular cluster \\(\\mathcal{C}\\) at level \\(0\\) is adjacent to a singular vertex \\(s\\) if there exists a regular vertex \\(v\\in\\mathcal{C}\\) adjacent to \\(s\\). A cluster at level \\(i>0\\) is adjacent to a singular vertex \\(s\\) if it has a child adjacent to \\(s\\). Figure 5: Example of a regular multilevel partition of a graph. Figure 6: The regular forest corresponding to the regular multilevel partition shown in Figure 5. ### Maintaining a regular multilevel partition The following procedure, for maintaining a regular multilevel partition under edge insertions, closely follows the procedure described by Frederickson [3], as our data structure is a direct adaptation of Frederickson's. level \\(0\\)It is very easy to adjust the regular partition, i.e., the regular clusters at level \\(0\\) of the regular multilevel partition. When an edge \\(e=\\{x,y\\}\\) is inserted, we distinguish between the following cases: 1. the edge \\(e\\) destroys a regular vertex \\(u\\); 2. the edge \\(e\\) destroys two regular vertices \\(u\\) and \\(v\\); 3. the edge \\(e\\) creates a regular vertex \\(u\\); 4. the edges \\(e\\) creates two regular vertices \\(u\\) and \\(v\\); 5. the edge \\(e\\) does not change the number of regular vertices. We denote with \\(C_{u}\\) (\\(C_{v}\\)) the regular cluster containing the vertex \\(u\\) (\\(v\\)). We treat these cases as follows. 1. If the size of \\(C_{u}\\) is \\(1\\), then this cluster is deleted. Otherwise if \\(C_{u}\\) is adjacent to a cluster \\(C\\) of size one, remove \\(u\\) from \\(C_{u}\\) and union \\(C_{u}\\) with \\(C\\). 2. 
Apply case 1 to both \\(C_{u}\\) and \\(C_{v}\\). 3. Create a new cluster \\(C_{u}\\) containing only \\(u\\). If \\(C_{u}\\) is adjacent to a cluster \\(C\\) of size one, union \\(C_{u}\\) with \\(C\\). 4. Apply case 3 to both \\(u\\) and \\(v\\), but if neither \\(C_{u}\\) nor \\(C_{v}\\) is adjacent to a cluster of size one, then they are unioned together. 5. Nothing has to be done. As an example consider the graph depicted in Figure 7. The insertion of edge \\(\\{x,y\\}\\) destroys the regular vertex \\(x\\), so we are in case 1. Because \\(\\mathcal{C}^{\\prime}\\) is adjacent to \\(\\mathcal{C}^{\\prime\\prime}\\) and the size of \\(\\mathcal{C}^{\\prime\\prime}\\) is one, we must union \\(\\mathcal{C}^{\\prime}\\) and \\(\\mathcal{C}^{\\prime\\prime}\\) together into a new regular cluster \\(\\mathcal{C}\\). The maintenance of the regular partition is completed after adjusting the adjacency information of both \\(\\mathcal{C}\\) and \\(\\mathcal{D}\\), as shown in Figure 7. Figure 7: Adjusting the regular partition after inserting edge \\(\\{x,y\\}\\). level \\(>0\\): We assume that the regular partition at level \\(0\\) reflects the insertion of an edge, as discussed above. The number of clusters which have been changed, inserted or deleted is at most some constant. We put these clusters in lists \\(L_{C}\\), \\(L_{I}\\), and \\(L_{D}\\) according to whether they are changed, inserted or deleted. More specifically, these lists are initialized as follows. Each regular cluster that has been split or combined to form a new regular cluster is inserted in \\(L_{D}\\), while each new regular cluster is inserted in list \\(L_{I}\\). The adjacency information is stored with the clusters in \\(L_{I}\\). For clusters in \\(L_{D}\\), all adjacency information is set to null, except the parent information. For each regular cluster whose set of vertices has not changed but whose adjacency information has changed, update the adjacency information and insert it into \\(L_{C}\\). We create lists \\(L^{\\prime}_{D}\\), \\(L^{\\prime}_{I}\\), and \\(L^{\\prime}_{C}\\) to hold the clusters at the next higher level of the regular multilevel partition. These lists are initially empty. We first adjust the clusters in the list \\(L_{D}\\). Every cluster \\(\\mathcal{C}\\) in \\(L_{D}\\) is removed from \\(L_{D}\\), and \\(\\mathcal{C}\\) is removed as a child from its parent \\(\\mathcal{P}\\) (if existing). * If \\(\\mathcal{P}\\) has no more children, then insert \\(\\mathcal{P}\\) in \\(L^{\\prime}_{D}\\). * If \\(\\mathcal{P}\\) still has a child \\(\\mathcal{C}^{\\prime}\\), then, if \\(\\mathcal{C}^{\\prime}\\) is not already in \\(L_{C}\\) or \\(L_{D}\\), insert \\(\\mathcal{C}^{\\prime}\\) into \\(L_{C}\\). Next, we search the list \\(L_{C}\\) for clusters that have siblings. Suppose that \\(\\mathcal{C}\\in L_{C}\\) has a sibling \\(\\mathcal{C}^{\\prime}\\) and parent \\(\\mathcal{P}\\). * If \\(\\mathcal{C}\\) and \\(\\mathcal{C}^{\\prime}\\) are adjacent, then remove \\(\\mathcal{C}\\) from the list \\(L_{C}\\), and remove \\(\\mathcal{C}^{\\prime}\\) from \\(L_{C}\\) if it is in this list. Insert \\(\\mathcal{P}\\) into \\(L^{\\prime}_{C}\\). * If \\(\\mathcal{C}\\) and \\(\\mathcal{C}^{\\prime}\\) are not adjacent, then remove \\(\\mathcal{C}\\) and \\(\\mathcal{C}^{\\prime}\\) as children from \\(\\mathcal{P}\\). Remove \\(\\mathcal{C}\\) from the list \\(L_{C}\\), and also remove \\(\\mathcal{C}^{\\prime}\\) from \\(L_{C}\\) if it is in this list.
Insert both \\(\\mathcal{C}\\) and \\(\\mathcal{C}^{\\prime}\\) into \\(L_{I}\\), and insert \\(\\mathcal{P}\\) in \\(L^{\\prime}_{D}\\). Finally, we treat the remaining clusters in \\(L_{C}\\) and in \\(L_{I}\\). Let \\(\\mathcal{C}\\) be such a cluster. Remove \\(\\mathcal{C}\\) from the appropriate list. In what follows, the degree of \\(\\mathcal{C}\\) is the number of adjacent clusters. * If \\(\\mathcal{C}\\) has degree zero, then it is the root of a tree in the regular forest. Insert its parent \\(\\mathcal{P}\\), if it exists, in \\(L^{\\prime}_{D}\\). * If \\(\\mathcal{C}\\) has degree one or two, then we have the following possibilities: * If every cluster adjacent to \\(\\mathcal{C}\\) has a sibling, then insert the parent \\(\\mathcal{P}\\) of \\(\\mathcal{C}\\), if it exists, into \\(L^{\\prime}_{C}\\). In case \\(\\mathcal{C}\\) does not have a parent, create a new parent cluster \\(\\mathcal{P}\\) and insert it into \\(L^{\\prime}_{I}\\). * Let \\(\\mathcal{C}^{\\prime}\\) be a cluster adjacent to \\(\\mathcal{C}\\) which has no sibling. Remove \\(\\mathcal{C}^{\\prime}\\) from the appropriate list, if it is in a list. If both \\(\\mathcal{C}\\) and \\(\\mathcal{C}^{\\prime}\\) have a parent, denoted by \\(\\mathcal{P}\\) and \\(\\mathcal{P}^{\\prime}\\) respectively, then remove \\(\\mathcal{C}\\) as a child of \\(\\mathcal{P}\\) and make it a child of \\(\\mathcal{P}^{\\prime}\\). Insert \\(\\mathcal{P}\\) into \\(L^{\\prime}_{D}\\), and insert \\(\\mathcal{P}^{\\prime}\\) into \\(L^{\\prime}_{C}\\). If neither \\(\\mathcal{C}\\) nor \\(\\mathcal{C}^{\\prime}\\) has a parent, then create a new parent \\(\\mathcal{P}\\) of \\(\\mathcal{C}\\) and \\(\\mathcal{C}^{\\prime}\\), and insert \\(\\mathcal{P}\\) into \\(L^{\\prime}_{I}\\). If \\(\\mathcal{C}\\) has a parent \\(\\mathcal{P}\\), and \\(\\mathcal{C}^{\\prime}\\) has no parent, then make \\(\\mathcal{C}^{\\prime}\\) a child of \\(\\mathcal{P}\\) and insert \\(\\mathcal{P}\\) into \\(L^{\\prime}_{C}\\). The case that \\(\\mathcal{C}^{\\prime}\\) has a parent \\(\\mathcal{P}^{\\prime}\\), and \\(\\mathcal{C}\\) has no parent, is analogous. When all clusters have been removed from \\(L_{D}\\), \\(L_{C}\\), and \\(L_{I}\\), determine and adjust the adjacency information for all clusters in \\(L^{\\prime}_{D}\\), \\(L^{\\prime}_{C}\\), and \\(L^{\\prime}_{I}\\), and reset \\(L_{D}\\) to be \\(L^{\\prime}_{D}\\), \\(L_{C}\\) to be \\(L^{\\prime}_{C}\\), and \\(L_{I}\\) to be \\(L^{\\prime}_{I}\\). If no clusters are present in \\(L^{\\prime}_{D}\\), \\(L^{\\prime}_{C}\\) or \\(L^{\\prime}_{I}\\), nothing needs to be done and the iteration stops. This completes the description of how to handle the lists \\(L_{D}\\), \\(L_{C}\\), and \\(L_{I}\\). ### Finding a topological edge Consider that we are in one of the cases 4-6 described in Section 3, where we have to split a topological edge. Let \\(x\\) be the regular vertex at which we have to split the topological edge. We store a pointer from \\(x\\) to the regular cluster \\(\\mathcal{C}_{x}\\) in which it is contained. We also store a pointer from each root of a tree \\(T\\) in the regular forest to the topological edge corresponding to the regular path formed by all vertices in the leaves of \\(T\\). We find the topological edge which needs to be split by going from \\(\\mathcal{C}_{x}\\) to the root of the tree containing \\(\\mathcal{C}_{x}\\).
Since the height of the tree is at most \\(O(\\log\\ell)\\), where \\(\\ell\\) is the current number of edge insertions, we obtain the following. **Proposition 5.1**.: _Given a regular vertex \\(x\\), the regular forest returns the topological edge corresponding to the regular path on which this regular vertex lies in \\(O(\\log\\ell)\\) time._ ### Storing weight information We store weight information in two different places. We define the weight of a regular cluster at level \\(0\\) of size one as zero. Let \\(\\mathcal{C}\\) be a cluster at level \\(0\\) of size two, and let \\(v\\) and \\(w\\) be the two regular vertices in \\(\\mathcal{C}\\). Then we define the weight of \\(\\mathcal{C}\\) as the weight of the edge \\(\\{v,w\\}\\). If a cluster at level \\(0\\) is adjacent to a singular vertex \\(s\\), then we store the weight of \\(\\{v,s\\}\\) together with the adjacency information (here, \\(v\\) is the vertex in \\(\\mathcal{C}\\) adjacent to \\(s\\)). If two clusters \\(\\mathcal{C}\\) and \\(\\mathcal{C}^{\\prime}\\) at level \\(0\\) are adjacent, then we store the weight of \\(\\{v,w\\}\\) together with their adjacency information (here \\(v\\in\\mathcal{C}\\) and \\(w\\in\\mathcal{C}^{\\prime}\\) and \\(v\\) is adjacent to \\(w\\)). The weight of a cluster of size one at level \\(i>0\\), is defined as the weight of its child at the next lower level. The weight of a cluster of size two at level \\(i>0\\) equals the sum of the weights of its two children and the weight stored with their adjacency information. If two clusters at level \\(i>0\\) are adjacent, we store the weight of the adjacency information of their adjacent children. If a cluster at level \\(i>0\\) is adjacent to a singular node, we store the weight of the adjacency information of its child and the singular node. ### Maintaining weight information The weight of clusters and the weights stored together with the adjacency information, is updated after each run of the update procedure for the regular multilevel partition, with an extra constant cost. Indeed, both the weights of clusters at level \\(0\\) and the weights stored with the adjacency information, are trivially updated. When we assume that all levels lower than \\(i\\) represent the weight information correctly, the weight information of clusters in \\(L_{C}\\) and \\(L_{I}\\) is trivially updated using the weight information at level \\(i-1\\). ### Finding the weights As mentioned above, each root of a regular tree in the regular forest, has a pointer to a unique topological edge. This root has its own weight, as defined above, and is adjacent to two singular vertices. The weight of the topological edge is obtained by summing the weight of the root together with the weights of the adjacency information of the two singular vertices. This is illustrated in Figure 8. ### Complexity Analysis The complexity of the Topology Tree Algorithm is governed by two things: the maximal height of a single tree in the regular forest, and the amount of work that needs to be done at each level in the maintenance of the regular multilevel partition. We already saw that the height of a single tree is logarithmic in the number of regular vertices on the regular path on which the tree is built. Moreover, Frederickson has proven that in the lists \\(L_{C}\\), \\(L_{D}\\), and \\(L_{I}\\) only a constant number of clusters are stored [3]. 
These lists are updated at most \\(O(\\log\\ell)\\) times, where \\(\\ell\\) is the number of edge insertions, so that the total update time is \\(O(\\log\\ell)\\) per edge insertion. Hence, we may conclude the following: **Theorem 5.1**.: _The total time spent on \\(\\ell\\) updates by the Topology Tree Algorithm is \\(O(\\ell\\log\\ell)\\)._ ## 6 Experimental Comparison The Renumbering Algorithm and the Topology Tree Algorithm are very different, but have the same theoretical complexity. Hence, the question arises how they compare experimentally. In this section we try to obtain some insight into this question. Both algorithms were implemented in C++ using LEDA [9]. We used the GNU g++ compiler version 2.95.2 without any optimization option. Our experiments were performed on a SUN Ultra 10 running at 440 MHz with 512 MB internal memory. Implementing the Renumbering Algorithm was considerably easier than implementing the Topology Tree Algorithm. We conducted our experiments on three types of inputs. First of all, we extensively studied random inputs, which are random sequences of updates on random graphs. Next, we used two kinds of non-random graph inputs which focus specifically on the merging and the splitting of topological edges. To that end, we constructed an input sequence which repeatedly merges topological edges, and an input sequence which first creates a very large number of small topological edges and then splits these edges randomly. Finally, we ran both algorithms on two inputs originating from real data sets. Figure 8: Example of a regular tree together with its weight information. **Methodology**: Since the experiments have an element of randomness, we show the results in the form of 95% confidence intervals. For each test, we perform a large number of runs. For each run, we compute the ratio between the total time taken by Topology Tree and that taken by Renumbering. We took the average of these ratios and computed the 95% confidence interval. So, for example, the interval \\([1.10,1.15]\\) means that Topology Tree was 10 to 15% slower than Renumbering in 95% of the runs in the test. **Random Inputs**: The random inputs consist of random graphs that are generated, given the number of vertices and edges. Each run builds a random graph incrementally, with the insertions uniformly distributed over the set of edges. We conducted a series of tests for different numbers of nodes \\(n\\) and edges \\(m\\). For every pair of values for \\(n\\) and \\(m\\) we did 1000 runs. The results of these experiments are shown in Table 1. For small numbers of edge insertions, i.e., when the probability of having many regular vertices is large, we see that the Renumbering Algorithm is faster. However, when the number of edge insertions increases, the Topology Tree Algorithm becomes slightly faster. This is probably due to the fact that the dictionary in the Renumbering Algorithm becomes very large, i.e., there are many short topological edges, and hence it takes longer to search for topological edges. **Non-Random Inputs**: The non-random inputs consisted of two types. For the first type, we created a large number of topological edges and then started to merge these edges pairwise. The end result was a very long topological edge. For the second type, we first created a very large number of short regular paths consisting of a single regular vertex, and then started to split these randomly. Each result shown in Table 2 is obtained from 100 runs.
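The paper does not state which interval estimator was used; assuming a standard normal approximation for the mean of the per-run ratios, the 95% confidence intervals reported below could be computed as in this sketch (function and argument names are ours).

```python
import math

def ratio_confidence_interval(topology_tree_times, renumbering_times, z=1.96):
    """95% confidence interval for the mean per-run time ratio.

    Each run contributes one ratio (Topology Tree time / Renumbering time),
    as in Section 6.  The interval uses a normal approximation with z = 1.96,
    which is an assumption on our part; requires at least two runs.
    """
    ratios = [t / r for t, r in zip(topology_tree_times, renumbering_times)]
    n = len(ratios)
    mean = sum(ratios) / n
    var = sum((x - mean) ** 2 for x in ratios) / (n - 1)   # sample variance
    half_width = z * math.sqrt(var / n)
    return mean - half_width, mean + half_width
```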
\\begin{table} \\begin{tabular}{|c|c|c|c|} \\hline vertices\\textbackslash{}edges & m=5 000 & m=10 000 & m=20 000 \\\\ \\hline n=1 000 & \\([1.10,1.15]\\) & \\([1.03,1.06]\\) & \\([0.97,0.99]\\) \\\\ \\hline \\hline vertices\\textbackslash{}edges & m=5 000 & m=25 000 & m=75 000 \\\\ \\hline n=5 000 & \\([1.25,1.29]\\) & \\([1.01,1.03]\\) & \\([0.96,0.98]\\) \\\\ \\hline \\hline vertices\\textbackslash{}edges & m=10 000 & m=50 000 & m=150 000 \\\\ \\hline n=50 000 & \\([1.30,1.35]\\) & \\([1.06,1.07]\\) & \\([0.91,0.92]\\) \\\\ \\hline \\hline vertices\\textbackslash{}edges & m=10 000 & m=100 000 & m=300 000 \\\\ \\hline n=100 000 & \\([1.21,1.23]\\) & \\([0.98,0.99]\\) & \\([0.85,0.86]\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: 95% confidence intervals on the ratio between Topology Tree and Renumbering, from 1000 runs on random inputs. \\begin{table} \\begin{tabular}{|l|c|} \\hline Merge (\\(n=20\\,099\\), \\(m=20\\,098\\)) & \\([3.60,3.74]\\) \\\\ \\hline Split (\\(n=280\\,000\\), \\(m=200\\,000\\)) & \\([1.15,1.17]\\) \\\\ \\hline \\end{tabular} \\end{table} Table 2: 95% confidence intervals on the ratio between Topology Tree and Renumbering, from 100 runs on non-random inputs. The first type of input was designed in order to reproduce the cases, observed in the random inputs, where Renumbering is much faster than Topology Tree. This is confirmed by the experimental results. Indeed, on this type of inputs, the Topology Tree Algorithm has to maintain large topology trees, which is probably the reason that it is slower. The second type of input was designed in an attempt to reproduce the cases where Topology Tree is faster than Renumbering. Our attempt failed, however, as the experimental results do not confirm this. Indeed, although the topology trees all have height one, while the dictionary is very large, the Renumbering Algorithm is nevertheless still faster. **Real Data Inputs**: We also tested the relative performance of both algorithms with respect to graphs representing real data. We present the results on two data sets: _Hydrography graph_: A data set representing the hydrography of Nebraska. This set contains \\(157\\,972\\) vertices, of which \\(96\\,636\\) are regular. _Railroad graph_: A data set representing all railway mainlines, railroad yards, and major sidings in the continental U.S., compiled at a scale of \\(1:100\\,000\\). It contains \\(133\\,752\\) vertices, of which only \\(14\\,261\\) are regular. It is available at the U.S. Bureau of Transportation Statistics (www.bts.gov/gis). The results shown in Table 3 are obtained after performing \\(100\\) experiments. In each experiment, we ran both algorithms in a random way on these data sets. We computed the ratio between the total time the Topology Tree Algorithm needed to perform the test and the total time the Renumbering Algorithm needed to accomplish the same task. We took the average of this ratio and computed the \\(95\\%\\) confidence interval. Again, we see that when there are only a few, but long, topological edges, the Renumbering Algorithm is faster than the Topology Tree Algorithm. When there are many, short, topological edges, as in the railroad graph, the Topology Tree Algorithm is slightly faster than the Renumbering Algorithm. In summary, our experimental study shows that when the percentage of regular vertices in a graph is high, the Renumbering Algorithm is clearly better than the Topology Tree Algorithm, and when that percentage is low, the reverse often holds.
However, our experimental study did not compare any specific problem solving with and without using topological simplification. Intuitively, the value of topological simplification should increase with the percentage of regular vertices in the graph. Therefore, when the percentage of the regular vertices is high, the Renumbering Algorithm should be not only better than the Topological Tree Algorithm but also yield a significant time saving over problem solving without topological simplification. We expect this to be the most important practical implication of our study for the case when there are only insertions of edges and vertices into the graph. However, when a fully dynamic structure is needed, then the Topological Tree Algorithm should be also advantageous in practice. \\begin{table} \\begin{tabular}{|l|l|} \\hline Hydrography & \\([1.62,1.66]\\) \\\\ \\hline Railroad & \\([0.95,0.96]\\) \\\\ \\hline \\end{tabular} \\end{table} Table 3: \\(95\\%\\) confidence intervals on ratio between Topology Tree and Renumbering, from \\(100\\) runs on real datasets. ## Acknowledgement We would like to thank Bill Waltman for providing us with the hydrography data set. ## References * [1] T.H. Cormen, C.E. Leierson, and R.L. Rivest. Introduction to Algorithms. MIT Press, 1990. * [2] M.J. Egenhofer and J.R. Herring, editors. Advances in Spatial Databases, Volume 951 of Lecture Notes in Computer Science, Springer-Verlag, 1995. * [3] G.N. Frederickson. \"Data structures for on-line updating of minimal spanning trees\", SIAM J. Comput., Vol 14:781-798, 1985. * [4] G.N. Frederickson. \"Ambivalent data structures for dynamic 2-edge-connectivity and \\(k\\) smallest spanning trees\", SIAM J. Comput., Vol 26(2):484-538, 1997. * [5] F. Geerts, B. Kuijpers, and J. Van den Bussche. \"Topological canonization of planar spatial data and its incremental maintenance\", In T. Polle, T. Ripke, and K.-D. Schewe, editors, Fundamentals of Information Systems, Kluwer Academic Publishers, 1998, pp 55-68. * [6] G. Italiano. \"Dynamic graph algorithms\", In Mikhail J. Atallah, editor, Handbook on Algorithms and Theory of Computation, CRC Press, 1998. * [7] B. Kuijpers, J. Paredaens, and J. Van den Bussche. \"Lossless representation of topological spatial data\", In Egenhofer and Herring [2], pages 1-13. * [8] K. Mehlhorn. Data Structures and Algorithms 1: Sorting and Searching, EACTS Monographs on Theoretical Computer Science. Springer-Verlag, 1984. * [9] K. Mehlhorn and S. Naher. \"LEDA: A platform for combinatorial and geometric computing\", Comm. of the ACM, Vol 38(1):96-102, 1995. * [10] C.H. Papadimitriou, D. Suciu, and V. Vianu. \"Topological queries in spatial databases\", Journal of Computer and System Sciences, Vol 58(1):29-53,1999. * [11] L. Segoufin and V. Vianu. \"Querying spatial databases via topological invariants\", Journal of Computer and System Sciences, Vol 61(2):270-301, 2000. * [12] R. Tamassia \"On-line planar graph embedding\", Journal of Algorithms, Vol 21:201-239, 1996. * [13] R.E. Tarjan. \"Data structures and network algorithms\", In CBMS-NSF Regional Conference Series in Applied Mathematics, Vol 44. SIAM, 1983. * [14] M.F. Worboys. GIS: A Computing Perspective, Taylor&Francis, 1995.
We describe two efficient on-line algorithms to simplify weighted graphs by eliminating degree-two vertices. Our algorithms are on-line in that they react to updates on the data, keeping the simplification up-to-date. The supported updates are insertions of vertices and edges; hence, our algorithms are partially dynamic. We provide both analytical and empirical evaluations of the efficiency of our approaches. Specifically, we prove an \\(O(\\log n)\\) upper bound on the amortized time complexity of our maintenance algorithms, with \\(n\\) the number of insertions.
arxiv-format/0608105v2.md
# Lagrangian transport through an ocean front in the North-Western Mediterranean Sea Ana M. Mancho Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Cientificas (CSIC), Serrano 121, 28006 Madrid, Spain Emilio Hernandez-Garcia Instituto Mediterraneo de Estudios Avanzados, IMEDEA (CSIC - Universitat de les Illes Balears) E-07122 Palma de Mallorca, Spain Des Small and Stephen Wiggins School of Mathematics, University of Bristol, Bristol BS8 1TW, United Kingdom Vicente Fernandez Istituto Nazionale di Geofisica e Vulcanologia INGV, Via Donato Creti 12, 40128 Bologna, Italy ## 1 Introduction Ocean water masses of different origins have distinct contents of salt, heat, nutrients and chemicals. Currents transport them, and energetic mesoscale features are responsible for most of their mixing with surrounding waters. Vortices are the best studied of such structures. Frequently they are long lived, and their cores remain relatively protected from the neighboring areas, so that water trapped inside can maintain its biogeochemical properties for a long time, being transported with the vortex. In steady horizontal velocity fields, the centers of vortices are readily identified as elliptic points in the streamfunction level sets. The presence of closed streamlines around them is the mathematical reason for the isolation of the vortex core from the exterior fluid. When the velocity field changes in time, closed streamlines are replaced by more complex structures, some of which can be related in idealized cases to the Kolmogorov-Arnold-Moser tori of dynamical systems theory. For slowly varying velocity fields vortex cores remain coherent during some time, but there is vigorous stirring of the surrounding fluid that finally leads to water mixing. In order to understand this mixing process, one should focus on features of the velocity field different from the elliptic points characterizing the vortex cores. For some time now (Ju et al. 2003; Ide et al. 2002; Mancho et al. 2004), hyperbolic trajectories (trajectories with saddle-like stability properties which are solutions to a dynamical system) have been recognized as the structures responsible for most of the stretching and generation of intertwined small scales that finally lead to mixing. In particular there are Distinguished Hyperbolic Trajectories (DHTs), characterized by their special persistence as compared with the other hyperbolic structures, that act as the organizing centers of the fluid stirring processes. Despite the great amount of attention devoted to the identification and characterization of vortices and their dynamics in oceanographic contexts (Olson 1991; Puillat et al. 2002; Ruiz et al. 2002; Isern-Fontanet et al. 2006), there are few studies focussed on hyperbolic objects, and most of them in idealized settings. A possible reason for this is that the intrinsic instability of trajectories close to hyperbolic points makes their identification more difficult, in contrast with the recurrent character of trajectories in vortices, which allows their tracking for long times from in situ and satellite measurements. In addition, whereas many aspects of vortex dynamics can be analyzed in a Eulerian framework, hyperbolic trajectories are defined by their Lagrangian characteristics and only in this frame can they be fully characterized (Ide et al., 2002; Mancho et al., 2004).
In this Paper we identify relevant hyperbolic trajectories from the surface velocity field of the Western Mediterranean Sea, obtained from a three dimensional model simulation under climatological atmospheric forcing. Our aim is to show that transport mechanisms, in particular the so called 'turnstile' mechanism, previously identified in abstract dynamical systems (Channon and Lebowitz, 1980; Bartlett, 1982; MacKay et al., 1984; Wiggins, 1992), and discussed in the context of rather simple model flows (Rom-Kedar et al., 1990; Beigie et al., 1994; Samelson, 1992; Duan and Wiggins, 1996), are also at work in this complex and rather realistic ocean flow. More broadly, nonlinear dynamics techniques are shown to be powerful enough to identify the key geometric structures in this part of the Mediterranean. Western Mediterranean surface layers (up to a depth of about 150 m) contain Modified Atlantic Waters of rather different characteristics (Millot, 1999). Fresher waters (salinity about 36.5 psu) recently entered from the Atlantic occupy the Algerian Basin at the South and older and saltier waters (salinity above 38 psu) occupy the Northern part of the area. The dynamics of the contact zone between these two surface water masses, and eventually their mixing, is important to understand the physical and biogeochemical properties of the Western Mediterranean. We focus on one of the oceanographic structures known to be of importance in these processes, the so-called North Balearic Front (Lopez-Garcia et al., 1994; Pinot et al., 1995; Millot, 1999). It extends roughly along the Southwest-Northeast direction at the North of the Balearic Islands, with significant displacements and deformations. It is characterized by a strong salinity jump of about 0.6 psu (37.4 psu to the South and 38.0 to the North) down to 150 m depth. This identifies the front as the main transition zone between the two water masses. In Winter a weak but detectable temperature gradient of 0.5-1 K/5 km can be observed in satellite images. After selecting an interval of time in our simulation during which the front is well formed, we explore the transport properties of the surface velocity field in the region, and find the relevant Lagrangian structures, hyperbolic points and their manifolds, responsible for the establishment and permanence of the front. The location of the front is identified as a \"Lagrangian barrier\" across which transport is small (as quantified with the tools of lobe dynamics), and occurs via filaments that entrain water along transport routes that we identify. The presence of eddies strongly affect the Lagrangian structures in a process that can be interpreted as the break-down of the front. The Paper is organized as follows: In Section 2 we introduce some of the dynamical systems concepts that will be used in the following. Section 3 describes our numerical ocean model and Section 4 addresses the adaptation of the standard algorithms to the kind of oceanographic data provided by the model. Section 5 contain our main results and the conclusions are summarized in Section 6. ## 2 Distinguished Hyperbolic Trajectories and their Manifolds In recent years there have been many applications of the dynamical systems approach to transport in oceanographic flows. Recent reviews are Jones and Winkler (2002); Wiggins (2005); Samelson and Wiggins (2006). In this section we describe the basic ideas that we will use in our analysis. 
Although the same concepts are of use in three dimensional flows, we describe here just the two dimensional situation, since as discussed below the structures we identify can be considered two dimensional to a good approximation during the time scales relevant here. Stagnation points are well known features of steady flows that generally play an important role in \"organizing\" the qualitatively distinct streamlines in the flow. For example, saddle-type stagnation points can occur on boundaries where streams of flow tan gential to the boundary coming from opposite directions meet and then separate from that boundary. Saddle-type stagnation points can occur in the interior of a flow at a point where fluid seems to both converge to the point along two opposite directions and diverge from it along two different directions. In steady flows the stagnation point is a trivial example of a fluid particle trajectory (i.e., it is a \"solution,\" in this case a fixed point solution, of the equations for fluid particle motions defined by the velocity field) and the saddle point nature is manifested by the fact that there are directions for which nearby trajectories approach the stagnation point at an exponential rate and move away from the stagnation point at an exponential rate. These directions are sometimes referred to as \"stagnation streamlines\". They define material curves that \"cross\" at the stagnation point and typically form the boundaries between qualitatively distinct regions of flow. A related \"time-dependent\" picture as that described above exists for unsteady flows, with both similar, as well as much more complex, implications for transport. In unsteady flows a stagnation point at a given time -or rather, an Instantaneous Stagnation Point (ISP)- is a location at which velocity vanishes at that time. The sequence of locations is generally not a fluid particle trajectory (Ide et al. 2002). The true analog of the saddle-type stagnation point of steady flows is a _Distinguished Hyperbolic Trajectory_ (DHT) (Ide et al. 2002). \"Hyperbolic\" is the dynamical systems terminology for \"saddle-type\". These are fluid particle trajectories that have (time-dependent) directions for which nearby trajectories approach and move away from the DHT at exponential rates. \"Distinguished\" is a notion that is discussed in detail in Ide et al. (2002), but the idea is that these are the key, isolated hyperbolic trajectories that serve to organize the transport behavior in a flow because they remain substantially more localized (in a well defined sense, see Ide et al. (2002)) than neighboring hyperbolic trajectories. Ide et al. (2002), Ju et al. (2003), and Mancho et al. (2004) develop the algorithms that allow us to compute DHTs in a given flow. They are iterative methods that start with a first guess for the DHT positions in an interval of time (i.e. an initial curve in space and time) and then refine the space-time curve by imposing the criteria of hyperbolicity and localization. In the flows considered here, a good first guess is the location of ISPs, since it turns out that a DHT is often found in the neighborhood of an Eulerian ISP. Just as in the steady case, there are analogs to the stagnation streamlines: in the dynamical systems terminology these are referred to as the _stable and unstable manifolds of the DHT_, and they are time-dependent material curves. In dynamical systems terminology the fact that they are material curves means that they are _invariant_, i.e. 
a fluid particle trajectory starting on one of these curves must remain on that material curve during the course of its time evolution. \"Stable manifold\" means that trajectories starting on this material curve approach the DHT at an exponential rate as time goes to infinity, and \"unstable manifold\" means that trajectories starting on this material curve approach the DHT at an exponential rate as time goes to minus infinity. Mancho et al. (2003, 2004) develop the algorithms that enable us to compute the stable and unstable manifolds of hyperbolic trajectories. In unsteady flows stable and unstable manifolds of DHTs can intersect in isolated points different from the DHTs. This is a fundamental difference with respect to steady flows, and give rise to moving regions of fluid bounded by pieces of stable and unstable manifolds, the so called \"lobes\". Since the manifolds are material lines, fluid can not cross them by purely advective processes and thus they are perfect Lagrangian barriers (diffusion, or motion along the third dimension can however induce cross-manifold transport). Motion of the \"lobes\" is thus the mechanism responsible for mediating Lagrangian transport between different regions. References describing \"lobe dynamics\" in general are Rom-Kedar and Wiggins (1990); Beigie et al. (1994); Wiggins (1992); Malhotra and Wiggins (1998); Samelson and Wiggins (2006). Examples of applications of lobe dynamics to oceanographic flows are Ngan and Shepherd (1997); Rogerson et al. (1999); Yuan et al. (2001, 2004); Miller et al. (2002); Deese et al. (2002). We will describe these ideas more fully in the context of transport associated with the Balearic front in the Mediterranean. The Ocean Circulation Model In this work we analyze velocity fields which are obtained from an ocean model, DieCAST (Dietrich, 1997), adapted to the Mediterranean Sea (Dietrich et al., 2004; Fernandez et al., 2005). The 3D primitive equations are discretized with a fourth-order collocated control volume method. In zones adjacent to boundaries a conventional second order method is used. A fundamental feature of control volume based models is that the predicted quantities are control volume averages, while face-averaged quantities are used to evaluate fluxes across control volume faces (Sanderson and Brassington, 1998). These quantities are computed using fourth-order approximations and numerical dispersion errors are further reduced in the modified incompressibility algorithm by Dietrich (1997). Horizontal resolution is the same in both the longitudinal (\\(\\phi\\)) and latitudinal (\\(\\lambda\\)) directions, with \\(\\Delta\\phi\\)= (1/8) of degree and \\(\\Delta\\lambda=\\cos\\lambda\\Delta\\phi\\) thus making square horizontal control volume boundaries. Vertical resolution is variable, with 30 control volume layers. The thickness of control volumes in the top layer is 10.3 m and they are smoothly increased up to the deepest bottom control volume face at 2750 m. Thus ETOP05 bathymetry is truncated at 2750 m depth and it is not filtered or smoothed. Horizontal viscosity and diffusivity values are constant and equal to 10 m\\({}^{2}\\) s\\({}^{-1}\\). For the vertical viscosity and diffusivity, a formulation based on the Richardson number developed in Pacanowski and Philander (1981) is used, with background values set at near-molecular values (10\\({}^{-6}\\) and 2 \\(\\times\\) 10\\({}^{-7}\\) m\\({}^{2}\\)s\\({}^{-1}\\) respectively). 
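For illustration only: the Richardson-number-dependent vertical mixing mentioned above has, in its commonly quoted Pacanowski and Philander (1981) form, a simple algebraic expression in the local Richardson number. The sketch below uses typical textbook constants (nu0, alpha, n) that are not given in the paper; only the near-molecular background values quoted in the text are taken from the model description.

```python
def vertical_mixing(Ri, nu0=0.01, alpha=5.0, n=2,
                    nu_background=1.0e-6, kappa_background=2.0e-7):
    """Richardson-number-dependent vertical viscosity and diffusivity (m^2/s).

    Commonly quoted Pacanowski & Philander (1981) closure:
        nu    = nu0 / (1 + alpha * Ri)**n + nu_background
        kappa = nu  / (1 + alpha * Ri)    + kappa_background
    nu0, alpha and n are typical values and are NOT stated in the paper;
    only the background values 1e-6 and 2e-7 m^2/s come from the text.
    Ri is clipped at 0 so that unstable stratification gives maximal mixing.
    """
    Ri = max(Ri, 0.0)
    nu = nu0 / (1.0 + alpha * Ri) ** n + nu_background
    kappa = nu / (1.0 + alpha * Ri) + kappa_background
    return nu, kappa
```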
We use monthly mean wind stress reanalyzed from 10 m wind output from ECMWF, as chosen for the Mediterranean Sea Models Evaluation Experiment (Beckers et al., 2002). The heat and freshwater fluxes used to force the model are model-determined from monthly climatological SST and SSS as described in Dietrich et al. (2004). The only open boundary is the Strait of Gibraltar, where inflow conditions are set to be similar to observations and the outflow is model-determined using an upwind scheme. Everywhere else, free-slip lateral boundary conditions are used. All bottom dissipation is represented by a conventional nonlinear bottom drag with a coefficient of 0.002. Lateral and bottom boundaries are thermally insulating. The model is initialized at a state of rest with the annual mean temperature and salinity fields taken from the climatological data. The spin-up phase of integration is carried out for 16 years. Each year is taken to consist of 12 months of 30 days each (i.e. 360 days). The climatological forcings we use are adequate to identify the mechanisms and processes occurring under typical or average circumstances. Under this approach, high frequency motions are weak in our model and a daily sampling is adequate. The impact on transport of disturbances containing high frequencies, such as storms or wind bursts, is not the focus of the present paper and would need specific modelling beyond climatological forcing.

We focus on velocity fields obtained at the second layer, which has its center at a depth of 15.93 m. This is representative of the surface circulation and is not as directly driven by wind as the top layer. We have recorded velocities, temperatures and salinities in this model layer for five years. Dynamical systems approaches have already been applied to this data set, in particular Lyapunov techniques to quantify mixing strength (d'Ovidio et al., 2004), and the "leaking" approach (Schneider et al., 2005) to quantify escape and residence times in several areas. Here we concentrate on the Northwestern region and apply the methods of lobe dynamics to characterize transport processes in the North Balearic Front area. Figure 1 shows an example of the output of the model for the velocity field in the selected layer of the Western Mediterranean Sea at day 649 (the 19th day of the tenth month, October, of the second year). Two well known currents, the Northern Current flowing southwards close to the Spanish coast and the Balearic Current associated with the North Balearic Front and flowing northeastwards North of the Balearics, are observed, although significantly deformed by the presence of eddies. The figure also shows the ISPs of the velocity field: circles for the elliptic ones and crosses for the saddle-type ones.

Figure 1: Velocity field at 15.93 m depth in the Northwestern Mediterranean at simulation day 649 (October). The Westernmost coast is the Spanish one, the islands are the Balearics, and the Easternmost coasts are portions of Corsica and Sardinia. Circles indicate elliptic ISPs and crosses indicate saddle-type ISPs.
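The ISPs shown in Fig. 1 are Eulerian objects that can be extracted directly from a single velocity snapshot. The following sketch is illustrative only (it is not the algorithm of Ide et al. 2002 or Mancho et al. 2004, and the array names `lon`, `lat`, `u`, `v` are hypothetical): it locates zeros of an interpolated velocity field and classifies them as saddle-type or elliptic from the eigenvalues of the local velocity gradient, which is one way of producing the crosses and circles of Fig. 1 and of generating first guesses for the DHT iteration.

```python
# Illustrative sketch (not the cited algorithms): find and classify the
# instantaneous stagnation points (ISPs) of one velocity snapshot.
# lon, lat: 1-D increasing arrays; u, v: arrays of shape (len(lon), len(lat)).
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import fsolve

def find_isps(lon, lat, u, v):
    fu = RectBivariateSpline(lon, lat, u)    # smooth bicubic interpolants
    fv = RectBivariateSpline(lon, lat, v)

    def vel(x):                               # interpolated velocity at x = (lon, lat)
        return [fu.ev(x[0], x[1]), fv.ev(x[0], x[1])]

    isps = []
    # Seed a root search in every grid cell; keep roots that converge and stay
    # inside their seeding cell (this also avoids most duplicates).
    for i in range(len(lon) - 1):
        for j in range(len(lat) - 1):
            guess = [0.5 * (lon[i] + lon[i + 1]), 0.5 * (lat[j] + lat[j + 1])]
            sol, _, ok, _ = fsolve(vel, guess, full_output=True)
            inside = (lon[i] <= sol[0] <= lon[i + 1]
                      and lat[j] <= sol[1] <= lat[j + 1])
            if ok == 1 and inside:
                # Velocity-gradient matrix at the ISP: real eigenvalues of
                # opposite sign indicate a saddle-type ISP; otherwise we label
                # the point elliptic (eddy-core like) for this sketch.
                J = np.array([[fu.ev(sol[0], sol[1], dx=1), fu.ev(sol[0], sol[1], dy=1)],
                              [fv.ev(sol[0], sol[1], dx=1), fv.ev(sol[0], sol[1], dy=1)]])
                eig = np.linalg.eigvals(J)
                saddle = np.all(np.abs(eig.imag) < 1e-12) and eig.real.prod() < 0
                isps.append((float(sol[0]), float(sol[1]),
                             "saddle" if saddle else "elliptic"))
    return isps
```

In the flows considered here, the saddle-type ISPs obtained in this way are used only as first guesses; the DHTs themselves come from the iterative refinement of Ide et al. (2002) and Mancho et al. (2004).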
The velocity field has small vertical components, so that this is not strictly a two dimensional flow. With vertical velocities of the order of \(10^{-5}\) m/s, particles in the second model layer require about 13 days to traverse the layer. But during that time the vertical velocity is not constant. Averaging over the relevant time scales we find effective velocities of 0.1-0.7 m/day, depending on location and season (d'Ovidio et al., 2004; Schneider et al., 2005), and thus residence times in the second layer are between two weeks and several months. As a rule of thumb we can consider that trajectories preserve two dimensionality during time intervals of about 20 days. Since most of our trajectory integrations will be restricted to time intervals below that duration, they can be considered two dimensional to a good approximation. By inspection of the temperature and salinity model outputs we identify a four-month interval starting in October of the second simulation year (i.e. Autumn and early Winter) as an example in which gradients are strong most of the time and thus the North Balearic Front is well defined (see Fig. 2). We select this interval of time as our study case, for which the dynamic structures will be calculated.

## 4 Computation of Trajectories and Manifolds in Mediterranean Data Sets

The equations of motion that describe the horizontal evolution of particle trajectories in our velocity field are
\[\frac{\mathrm{d}\phi}{\mathrm{d}t} = \frac{u(\phi,\lambda,t)}{R\cos(\lambda)} \tag{1}\]
\[\frac{\mathrm{d}\lambda}{\mathrm{d}t} = \frac{v(\phi,\lambda,t)}{R} \tag{2}\]
where \(u\) and \(v\) represent the eastwards and northwards components of the surface velocity field coming from the simulations described in the previous section, \(R\) is the radius of the Earth (6400 km in our computations), \(\phi\) is longitude and \(\lambda\) latitude. Particle trajectories are obtained by integrating equations (1)-(2), and since the velocity information is provided only on a discrete space-time grid, a first issue to deal with is the interpolation of discrete data sets. A recent paper by Mancho et al. (2006) compares different interpolation techniques for tracking particle trajectories. Bicubic spatial interpolation (Press et al., 1992) and third-order Lagrange polynomials in time are shown to provide a computationally efficient and accurate scheme. With these tools we compute the DHTs and the stable and unstable manifolds that we will use to describe and quantify transport associated with the North Balearic front.
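A minimal sketch of the trajectory integration defined by eqs. (1)-(2) is given below. It is not the code of the cited works; it assumes a callable `velocity(lon, lat, t)` (a hypothetical name) that returns (u, v) in m/s from the space-time interpolation just described, and it uses a fixed-step classical Runge-Kutta scheme.

```python
# Sketch of particle advection for eqs. (1)-(2): fixed-step RK4 on the sphere,
# with lon/lat in radians and time in seconds. `velocity(lon, lat, t)` is an
# assumed interface to the interpolated model output (illustrative only).
import numpy as np

R_EARTH = 6.4e6          # Earth radius used in the paper [m]
DAY = 86400.0            # seconds per day

def rhs(state, t, velocity):
    """Right-hand side of eqs. (1)-(2); state = (lon, lat) in radians."""
    lon, lat = state
    u, v = velocity(lon, lat, t)
    return np.array([u / (R_EARTH * np.cos(lat)),   # d(lon)/dt
                     v / R_EARTH])                  # d(lat)/dt

def advect(state0, t0, t1, velocity, dt=0.1 * DAY):
    """Integrate one trajectory from t0 to t1 (t1 > t0) with classical RK4."""
    state, t = np.asarray(state0, float), t0
    while t < t1:
        h = min(dt, t1 - t)
        k1 = rhs(state, t, velocity)
        k2 = rhs(state + 0.5 * h * k1, t + 0.5 * h, velocity)
        k3 = rhs(state + 0.5 * h * k2, t + 0.5 * h, velocity)
        k4 = rhs(state + h * k3, t + h, velocity)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return state
```

Backward-time integrations, needed for the stable manifolds, use the same right-hand side with the time direction reversed.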
## 5 Lagrangian structures and transport in the Balearic Sea

We focus on the region North of the Balearic Islands, the Balearic Sea. The main oceanic structures known to be present there are the Balearic current and the associated North Balearic Front (Lopez-Garcia et al. 1994; Millot 1999). This last feature is known to represent a transition zone between saltier and fresher waters in the Western Mediterranean. The salinity fields obtained from our computer simulation display significant salinity gradients (and, in Winter, also temperature gradients; in Summer the surface layer is heated in a rather homogeneous way) in the area (see Fig. 2). Our aim here is to interpret the presence of the gradients and the front in terms of a semipermanent "Lagrangian barrier" across which little transport occurs. This construction would also reveal the routes along which this transport happens. Topological changes in that picture, associated with the crossing of eddies and which may be interpreted as the breakdown of the front, are also observed during the simulation. Gradients are rather well defined during Autumn of the second simulation year and early Winter of the third one. During this period we find a long interval (from day 649 to day 731) in which a Lagrangian structure constructed using stable and unstable manifolds of DHTs remains persistent and acts as a partial barrier to transport. Its location is well correlated with the salinity gradients, so that it can be interpreted as a Lagrangian identification of the North Balearic Front. The weak transport across the structure can be described in terms of lobe dynamics.

The situation resembles the one in Coulliette and Wiggins (2001), in which quasigeostrophic dynamics was used to model a double-gyre situation; the central jet between the two gyres in that problem plays a role similar to that of the Balearic current in our problem. Lobe dynamics was successfully applied there to quantify transport across the jet, occurring by the so-called turnstile mechanism. However, in the more realistic data set analyzed here we need to generalize some of the ideas used there. For example, in Coulliette and Wiggins (2001) the DHTs stay on one dimensional boundaries for all time (as boundaries are invariant), whereas in our scenario the relevant current neither starts nor ends on clear one dimensional boundaries. This introduces some ambiguity in the identification of the relevant DHTs (and of the saddle-type ISPs used as starting positions in the iterative algorithm of Mancho et al. (2004) that we use to determine the DHTs) from which to compute the manifolds that will define the Lagrangian barrier. Most of the time, however, pieces of manifolds computed from different DHTs in the same area rapidly converge towards each other, thus indicating that the location of the dominant hyperbolic curve is not a property of the particular choice of DHTs, but a property of the flow. Changes in the topology of the flow cause this convergence property to be lost. This happens at the end of the interval of time chosen in our study case and will be commented on below.

### Cross frontal transport: the turnstile mechanism

The geometric objects used to characterize transport by lobe dynamics methods are constructed by the rules discussed in Malhotra and Wiggins (1998), Coulliette and Wiggins (2001), and Samelson and Wiggins (2006). Here we describe these procedures in some detail, in a way that is particular to our flow situation. Figs. 2-4 illustrate the construction at days 649 and 657. Fig. 2 contains all the dynamic structures superimposed on a salinity field, whereas for clarity only the 'boundary' and the 'lobes' are displayed in Figs. 3 and 4, respectively.

1. Two DHTs should be identified, one in the western part of the front to be characterized and another in the eastern part, that persist close to their initial positions during the whole time interval of interest (denoted by \([t_{0},t_{N}]\)). Since our algorithm to locate DHTs uses saddle ISP positions as first guesses, figures displaying ISPs such as Fig. 1 are used to estimate these positions and the temporal persistence of the ISPs. The positions calculated for the selected DHTs are plotted in Figs. 2-6 as black dots and labelled \(H_{W}\) (the western one) and \(H_{E}\) (the eastern one).
2. As the mean current flows eastwards, we proceed as in Coulliette and Wiggins (2001) and compute the unstable manifold of the western DHT and the stable manifold of the eastern one. They are the red and blue lines in Fig. 2, respectively. For clarity in the presentation, of the two branches of each manifold (one at each side of the DHT from which they emanate), we display in Fig. 2 only the one pointing in the direction of the other DHT. Along both pieces of manifold, in the region between the two DHTs, the dominant direction of motion is from west to east.
As is characteristic of unsteady flows, both manifolds intersect repeatedly. Some of the intersection points are marked with cyan dots. It is also typical that the unstable manifold (red) of a DHT (\(H_{W}\) in this case) emerges from it relatively straight, whereas it fluctuates widely when approaching the vicinity of the opposite DHT. In the same way, the stable manifold (blue) of \(H_{E}\) joins it smoothly, but displays characteristic oscillations when close to the western DHT. The stable and unstable manifolds displayed in Fig. 2 a) have been computed by backwards and forwards time integration of a small segment aligned with the stable and unstable linear subspaces in the neighborhood of the DHT, for time periods of 14 and 19 days, respectively. Integrations for longer time periods provide longer manifolds. However, due to the restrictions of the two dimensional approximation, a time period beyond 20 days is not completely trustworthy. In practice this means that the pieces of the manifold that are far from the DHT may deviate from the true manifold, as those pieces are the ones obtained with the longer time integrations. For instance, the unstable manifold in Fig. 2 b) has been computed for a time integration of 27 days. This means that the unstable structure in the far East is not completely reliable. However, the piece of manifold governing the turnstile mechanism, which is closer to the DHT, is obtained with numerical integrations below the validity limit of 20 days, and the predictions obtained from it remain reliable within the two dimensional approximation.

3. Between the beginning and the end of the chosen time interval, we choose a sequence of times at which to analyze the manifold positions and compute objects relevant for Lagrangian transport. The sequence of "observation times" is denoted by \(t_{0}<t_{1}<t_{2}<\ldots<t_{N-1}<t_{N}\). We note that the \(t_{i}\) do _not_ need to be equally spaced. In order to illustrate the construction of boundaries and the turnstile mechanism for crossing the boundaries we need only two times, and for this purpose we will choose to show days 649 and 657.
4. At each of the selected times \(t_{i}\), a "boundary" is constructed by choosing a finite piece of the unstable manifold of the western DHT, \([H_{W},b_{t_{i}}]\), and a finite piece of the stable manifold of the eastern DHT, \([b_{t_{i}},H_{E}]\), so that they intersect in precisely one point, \(b_{t_{i}}\), which is called a _boundary intersection point_. The points \(\{b_{t_{i}}\}\), in addition to satisfying an ordering constraint specified below, should be selected in such a way that the resulting boundary is relatively straight, i.e. free of the violent oscillations displayed by each of the manifolds when approaching the opposite DHT. Since the boundary is pinned at the points \(H_{W}\) and \(H_{E}\), we obtain a sequence of boundaries that fluctuate in time but remain approximately in the same place. Figure 2 shows the selection of the boundary intersection points \(b_{649}\) and \(b_{657}\) at times 649 and 657, respectively. For clarity, Fig. 3 displays just the resulting boundaries. Since the boundary is made of material lines, no fluid can cross it by horizontal advection processes, except at the observation times \(t_{1},t_{2},\ldots,t_{N}\) at which the boundary is redefined. At these times, the only way fluid at one side of the boundary can be transferred to the other side is by the turnstile mechanism described and quantified below.
If this transport amount is small (as it will be shown to be the case), the boundary can be characterized as a "barrier", and gradients will be maintained across it. As seen in Fig. 2, the position of the boundary is well correlated with the position of the salinity front, thus confirming that the dynamical systems techniques developed here are useful to identify the North Balearic front in terms of Lagrangian objects.

5. Construct turnstiles at \(t_{i}\). At time \(t_{i}\) we consider the point, denoted by \(b_{t_{i+1}}^{-}\), that will evolve into the boundary intersection point \(b_{t_{i+1}}\) (clearly, we cannot do this at \(t_{N}\)). Since the stable and unstable manifolds are invariant, this point is also on both the stable and unstable manifolds. In the same way, \(b_{t_{i-1}}^{+}\) is the location at time \(t_{i}\) of the boundary intersection point that was located at \(b_{t_{i-1}}\) at the previous time \(t_{i-1}\). The additional constraint that needs to be imposed when choosing the sequence \(\{b_{t_{i}}\}\) is that \(b_{t_{i+1}}^{-}\) lies "upstream" (i.e. closer to \(H_{W}\) along its unstable manifold, or further from \(H_{E}\) along its stable manifold) of \(b_{t_{i}}\). This introduces a restriction on the choice of \(b_{t_{i+1}}\) once \(b_{t_{i}}\) is chosen. Since the ordering of points along manifolds is preserved by time evolution, it follows that \(b_{t_{i-1}}^{+}\) then lies "downstream" of \(b_{t_{i}}\). Figures 2, 3 and 4 display some of these intersection points, showing that our selection satisfies the ordering constraints. The segments of stable and unstable manifolds between \(b_{t_{i}}\) and \(b_{t_{i+1}}^{-}\) (and between \(b_{t_{i}}\) and \(b_{t_{i-1}}^{+}\)) trap regions of fluid, and the regions of fluid defined in this way are referred to as _lobes_. Some of them can be seen in Fig. 2, and more clearly in Fig. 4.
6. Consider the evolution of the turnstile lobes from \(t_{i}\) to \(t_{i+1}\). In this case \(b_{t_{i+1}}^{-}\) evolves to the boundary intersection point, \(b_{t_{i+1}}\), and the boundary intersection point at \(t_{i}\) evolves to a point \(b_{t_{i}}^{+}\), which is also on both the stable and unstable manifolds. The lobes between \(b_{t_{i+1}}\) and \(b_{t_{i}}^{+}\) then represent the time evolution of the turnstile lobes from \(t_{i}\) to \(t_{i+1}\). Note that, because of the way in which the boundary is redefined at each observation time, and because of the different shape of the manifolds when close to or far from the DHT from which they emanate, each lobe lies on opposite sides of the boundary at the two times considered. These turnstile lobes contain all the fluid that has crossed the boundary between \(t_{i}\) and \(t_{i+1}\). This transport process is illustrated more clearly in Fig. 4, where the lobes experiencing the turnstile mechanism are plotted before and after crossing the boundary.

The geometric construction performed at every observation time \(t_{i}\) as explained above allows us to calculate the amount of transport occurring across the boundary during each time interval. In computing the area \(A(\Gamma)\) of a lobe \(\Gamma\) we use the formula \[A(\Gamma)=-R^{2}\int_{\partial\Gamma}\sin\lambda d\phi \tag{7}\] where the integration is around the closed curve which forms the boundary of the lobe.
That this formula gives the area can be seen by considering the differential form \(\omega\equiv-R^{2}\sin\lambda d\phi\), calculating its differential (Spivak, 1965), \(d\omega=R^{2}\cos\lambda d\phi d\lambda\), which is identical to the area element on the sphere in spherical coordinates, and recalling Stokes' theorem (Spivak, 1965): \[\int_{\partial\Gamma}\omega=\int_{\Gamma}d\omega. \tag{8}\]

For day 649, as shown in Fig. 4, the two turnstile lobes have areas of \(493.2\,\mathrm{km}^{2}\) (the lobe below the boundary, to the east) and \(716.9\,\mathrm{km}^{2}\) (the lobe to the west, above the boundary). At day 657, the eastern lobe is above the new boundary, and the western lobe is below it. Assuming that the divergence of the surface flow can be neglected, so that the areas are unchanged (in numerical experiments we have never observed more than a 3% change), we can calculate the net area transported across the barrier to be \((716.9-493.2)\,\mathrm{km}^{2}=223.7\,\mathrm{km}^{2}\), in the southerly direction. Modified Atlantic Waters occupy the surface layers of the area down to an average depth of about \(150\,\mathrm{m}\). Using the values at the second model layer considered here as an approximation for the mean horizontal motion of this water mass, multiplying the 150 m depth by the net lobe area gives \(33.56\times 10^{9}\,\mathrm{m}^{3}\) in 8 days, or an average flux over this interval of 0.049 Sv (1 Sv = \(10^{6}\,\mathrm{m}^{3}/\mathrm{s}\)). The average flux obtained from area calculations of further turnstile lobes until the middle of November remains below that value (the average is 0.025 Sv, always southwards). This should be compared with the 0.5-0.75 Sv transported by the Balearic current. We see that cross-boundary transport is small during this time interval, and thus the Lagrangian boundary acts as a "barrier to transport", permitting only small amounts of mixing between Northern and Southern waters. It will maintain a salinity (and thus density) front that we identify with the observed North Balearic front. There is some indeterminacy in the definition of the boundary that we identify as the front, arising from some freedom in the selection of the intersections \(\{\mathbf{b}_{\mathbf{t}_{i}}\}\). But other choices can only displace the boundary by a distance of the order of the size of the lobes, which we see is small when not too close to the DHTs, and in fact also of the order of the width of the transition region in salinity distributions such as the one in Fig. 2, i.e. the width of the front.
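For concreteness, the sketch below shows how eq. (7) can be evaluated for a lobe boundary given as a closed polygon of (longitude, latitude) points, and how an area difference is converted into the average transport quoted above. This is illustrative code, not the authors' implementation; with the two lobe areas of Fig. 4 it reproduces the 0.049 Sv figure.

```python
# Sketch implementation of eq. (7): lobe area as the line integral of
# -R^2 sin(lat) d(lon) around the closed lobe boundary (trapezoidal rule),
# plus the conversion of an area difference into an average flux in Sverdrup
# using the 150 m depth and 8-day interval quoted in the text.
import numpy as np

R_EARTH = 6.4e6  # m

def lobe_area(lon, lat):
    """Area enclosed by the closed curve (lon[i], lat[i]) in radians, eq. (7)."""
    lon = np.append(lon, lon[0])          # close the polygon
    lat = np.append(lat, lat[0])
    integrand = 0.5 * (np.sin(lat[1:]) + np.sin(lat[:-1])) * np.diff(lon)
    return abs(-R_EARTH**2 * np.sum(integrand))   # orientation-independent

def mean_flux_sv(area_south_m2, area_north_m2, depth_m=150.0, interval_days=8.0):
    """Net cross-boundary transport, in Sv, from the two turnstile lobe areas."""
    net_volume = (area_south_m2 - area_north_m2) * depth_m       # m^3
    return net_volume / (interval_days * 86400.0) / 1.0e6        # 1 Sv = 1e6 m^3/s

# With the lobe areas of Fig. 4 (716.9 and 493.2 km^2) this gives ~0.049 Sv:
print(mean_flux_sv(716.9e6, 493.2e6))
```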
### Spatio-temporal structure of cross frontal transport

Since the only way in which our constructed Lagrangian boundaries can be crossed is via the turnstile mechanism, the earlier and later locations of the turnstile lobes reveal the dominant routes along which the weak cross-front transport occurs. Figure 5 shows the time evolution of a pair of turnstile lobes, one initially above and the other initially below the boundary. The crossing of the boundary by the turnstile mechanism occurs between days 676 and 681, and the remaining panels in the figure show the position of these lobes at some earlier and later times. The sequence illustrates that lobes move essentially along the boundary, except when close to \(H_{E}\), where they are ejected as filaments transverse to the front (the one initially in the south is ejected towards the north, and vice versa for the one that started in the north), and when close to \(H_{W}\), where they also take the shape of transverse filaments and become entrained into the boundary region. Fig. 5 also illustrates how lobes transport water of different salinity (coded in colors) and how the above routes for lobe motion and shape correlate with the salinity distribution in the area. Note that the length of the manifolds and the whole process depicted in Fig. 5 span only 21 days, so that the plotted manifolds remain approximately horizontal, with only small corrections from the vertical flow.

### An Eddy-front interaction: disruption of the Lagrangian boundary

Not all flow configurations allow the geometric construction identifying the Lagrangian boundary and the associated turnstile lobes to be performed. It may happen that no pair of DHTs persist in a given area long enough to support the mechanism, or their manifolds can fail to intersect. It may also happen that manifolds started at relatively close DHTs do not converge to the same curve but remain significantly distinct, so that a unique well defined boundary cannot be properly identified. In such cases, the turnstile transport mechanism is not the most relevant one. At the end of the simulation interval analyzed here we observe a change in the topology that signals the end of the predominance of the turnstile mechanism, in a process that can be interpreted as the breakdown of the front by the interaction with an eddy. As a first symptom, calculations of turnstile lobe areas reveal an increase in cross-front transport starting at day 674 (middle of November). The average transport between days 674 and 700 is 0.303 Sv (southwards), still smaller than the Balearic current transport but significantly larger than the average cross-frontal transport during the previous month (0.025 Sv, also southwards). At day 711, stable manifolds emerging from \(H_{E}\) and from another rather close DHT cease to converge towards each other, signaling the end of a situation with an essentially unique well defined boundary. Later, at day 731, our algorithm is unable to find the location of \(H_{E}\) starting from ISPs in the area. This probably means that \(H_{E}\) has moved away from the area under study. The black dot in Fig. 6 is another DHT found in the region. But its stable manifold (i.e., the finite length of manifold that we are able to compute) does not intersect the unstable manifold from \(H_{W}\), thus revealing that it is in fact a DHT different from \(H_{E}\), and that it cannot support the turnstile mechanism. The time evolution of the unstable manifold from \(H_{W}\) suggests that the reason for the change in behavior is the breakdown of the Lagrangian barrier by the crossing of an eddy, identified by the rolling up of the manifold around an elliptic ISP (Fig. 6). Note that even in this situation the manifold position is well correlated with the salinity distribution, thus indicating that Lagrangian structures are still relevant. But the transport mechanism is clearly different from the turnstile described above, being more appropriately described as water transport inside an eddy.
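The lobe histories of Fig. 5, and more generally the manifold segments grown from the DHTs, are material curves that stretch strongly while they are advected, so numerically they must be re-sampled as they evolve. The sketch below is a strong simplification of the algorithms of Mancho et al. (2003, 2004); it reuses the hypothetical `advect` integrator sketched earlier and inserts extra seed points on the initial curve wherever neighbouring advected points separate beyond a tolerance.

```python
# Illustrative sketch (much simplified relative to Mancho et al. 2003, 2004):
# advect a material curve (a lobe boundary or a growing manifold segment) from
# t0 to t1, refining it where it stretches. Assumes `advect` from the earlier
# sketch is in scope; separations are measured crudely in lon/lat radians.
import numpy as np

def advect_curve(points, t0, t1, velocity, max_sep=0.005):
    """points: sequence of (lon, lat) in radians defining a material curve at
    time t0; returns the advected, refined curve at time t1."""
    points = [np.asarray(p, float) for p in points]
    advected = [advect(p, t0, t1, velocity) for p in points]
    out = [advected[0]]
    for (p0, p1), (q0, q1) in zip(zip(points[:-1], points[1:]),
                                  zip(advected[:-1], advected[1:])):
        gap = np.hypot(*(q1 - q0))
        if gap > max_sep:
            # Reseed on the *initial* curve and advect the new seeds, so that
            # the refined curve is still made of true fluid trajectories.
            n_extra = int(gap / max_sep)
            for k in range(1, n_extra + 1):
                seed = p0 + (p1 - p0) * k / (n_extra + 1)
                out.append(advect(seed, t0, t1, velocity))
        out.append(q1)
    return np.array(out)
```

In practice the cited algorithms use more careful, curvature-based insertion criteria; the point of the sketch is only to show why manifold lengths are limited by the 20-day two dimensional validity window discussed above.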
## 6 Conclusions

In this work we have applied in a systematic way some tools developed in the context of dynamical systems theory and known generically as 'lobe dynamics'. The computer-generated surface velocity field studied here is more complex and less regular than other velocity fields previously considered in this context, but we have found that one of the main mechanisms of transport by lobe motion, the turnstile, is still at work. The methodology includes the construction of a 'barrier' across which to compute transport, and in our application to the Northwestern Mediterranean dynamics it has been identified with one of the main oceanographic structures present here, the North Balearic Front. Transport across it proceeds in the form of filaments that are entrained into the front close to a DHT 'upstream', and released, also in the form of filaments, close to another DHT located 'downstream'. The ejection of these filaments at that location can explain recent observations of waters saltier than expected just east of the island of Menorca (Emelianov, 2006). The identification of the DHTs and the calculation of their locations is by itself an important subject, since they organize the flow in the area and, because of this and of the sensitivity of the trajectories in their neighborhood, they are candidates for launch locations in efficiently designed drifter release experiments (Poje et al., 2002; Molcard et al., 2006).

Despite the success of the approach described here, much work remains to be done in order to develop dynamical systems techniques into a collection of systematic tools for analyzing general oceanographic data. A classification and understanding of the different topological regimes leading to qualitatively different modes of transport, and of the transitions among them, similar to the existing ones for steady and time-periodic flows, would be desirable for the case of turbulent aperiodic flows. A characteristic of the dynamical systems approach is that it provides an unusually detailed description of the spatio-temporal structure of Lagrangian transport. Therefore it may well turn out to be the optimal tool for analyzing data from ocean models with much higher spatio-temporal resolution (e.g., higher frequency atmospheric forcing and more resolved spatial scales) which capture more physical processes. This would enable us to better define, for example, the process of the destruction of a barrier. In addition, consideration of the impact of non-Lagrangian processes such as diffusion, of vertical motions, and of strong localized perturbations beyond climatological forcing, such as storms, would be needed to have a more complete vision of transport phenomena and mechanisms.

## Acknowledgements

A.M.M. acknowledges the MCyT (Spanish Government) for a Ramon y Cajal Research Fellowship and financial support from MEC (Spanish Government) reference MTM2004-00797 and the Royal Society-CSIC cooperation agreement reference B2003GB03. E.H.-G. acknowledges financial support from MEC and FEDER through Project CONOCE2 (FIS2004-00953). We also acknowledge M. Emelianov for communicating to us results from a recent cruise before publication. D.S. and S. W. acknowledge financial support from ONR Grant No. N00014-01-1-0769.

## References

* Bartlett (1982) Bartlett, J. H., 1982: Limits of stability for an area-preserving polynomial mapping. _Cel. Mech._, **28**, 295-317.
* Beckers et al. (2002) Beckers, J.-M., M. Rixen, P. Brasseur, J.-M. Brankart, A. Elmoussaoui, M. Crepon, Herbaut, F. Martel, F. V.
den Berghe, L. Mortier, A. Lascaratos, P. Drakopoulos, G. Korres, K. Nittis, N. Pinardi, E. Masetti, S. Castellari, P. Carini, J. Tintore, A. Alvarez, S. Monserrat, D. Parrilla, R. Vautard, and S. Speich, 2002: Model intercomparison in the Mediterranean: MEDMEX simulations of the seasonal cycle. _J. Mar. Sys._, **33**, 215-251. * Beigie et al. (1994) Beigie, D., A. Leonard, and S. Wiggins, 1994: Invariant manifold templates for chaotic advection. _Chaos, Solitons, and Fractals_, **4(6)**, 749-868. * Beigie et al. (2003)Channon, S. R. and J. L. Lebowitz, 1980: Numerical experiments in stochasticity and homoclinic oscillations. _Ann. New York Acad. Sci._, **357**, 108-118. * Coulliette and Wiggins (2001) Coulliette, C. and S. Wiggins, 2001: Intergyre transport in a wind-driven, quasi-geostrophic double gyre: An application of lobe dynamics. _Nonlinear Processes in Geophysics_, **8**, 69-94. * Deese et al. (2002) Deese, H. E., L. J. Pratt, and K. R. Helfrich, 2002: A laboratory model of exchange and mixing between western boundary layers and subbasin recirculation gyres. _J. Phys. Oceanogr._, **32(6)**, 1870-1889. * Dietrich (1997) Dietrich, D., 1997: Application of a modified \"a\" grid ocean model having reduced numerical dispersion to the Gulf of Mexico circulation. _Dyn. Atmos. Oceans_, **27**, 201-217. * Dietrich et al. (2004) Dietrich, D., R. Haney, V. Fernandez, S. Josey, and J. Tintore, 2004: Air-sea fluxes based on observed annual cycle surface climatology and ocean model internal dynamics: a non-damping zero-phase-lag approach applied to the Mediterranean sea. _J. Mar. Sys._, **52**, 145-165. * Ovidio et al. (2004) Ovidio, F., V. Fernandez, E. Hernandez-Garcia, and C. Lopez, 2004: Mixing structures in the Mediterranean sea from finite-size Lyapunov exponents. _Geophys. Res. Lett._, **31**, L12203 (1-4), doi:10.1029/2004GL020328. * Duan and Wiggins (1996) Duan, J. Q. and S. Wiggins, 1996: Fluid exchange across a meandering jet with quasi-periodic time variability. _J. Phys. Oceanogr._, **26**, 1176-1188. * Emelianov (2006) Emelianov, M., 2006: Personal communication. * Fernandez et al. (2005) Fernandez, V., D. Dietrich, R. Haney, and J. Tintore, 2005: Mesoscale, seasonal and interannual variability in the Mediterranean sea using a numerical ocean model. _Progress in Oceanography_, **66**, 321-340. * Ide et al. (2002) Ide, K., D. Small, and S. Wiggins, 2002: Distinguished hyperbolic trajectories in time dependent fluid flows: analytical and computational approach for velocity fields defined as data sets. _Nonlinear Processes in Geophysics_, **9**, 237-263. * Isern-Fontanet et al. (2006) Isern-Fontanet, J., Garcia-Ladona, and J. Font, 2006: The vortices of the Mediterranean sea: an altimetric perspective. _J. Phys. Ocean._, **36**, 87103. * Jones and Winkler (2002) Jones, C. K. R. T. and S. Winkler, 2002: Invariant manifolds and Lagrangian dynamics in the ocean and atmosphere. _Handbook of dynamical systems_, North-Holland, Amsterdam, 55-92. * Ju et al. (2003) Ju, N., D. Small, and S. Wiggins, 2003: Existence and computation of hyperbolic trajectories of aperiodically time-dependent vector fields and their approximations. _Int. J. Bif. Chaos_, **13**, 1449-1457. * Lopez-Garcia et al. (1994) Lopez-Garcia, M., C. Millot, J. Font, and E. Garcia-Ladona, 1994: Surface circulation variability in the Balearic Basin. _J. Geophys. Res._, **99 (C2)**, 3285-3296. * MacKay et al. (1984) MacKay, R. S., J. D. Meiss, and I. C. Percival, 1984: Transport in Hamiltonian systems. 
_Physica D_, **13**, 55-81. * Malhotra and Wiggins (1998) Malhotra, N. and S. Wiggins, 1998: Geometric structures, lobe dynamics, and Lagrangian transport in flows with aperiodic time-dependence, with applications to Rossby wave flow. _J. Nonlinear Science_, **8**, 401-456. * Mancho et al. (2004) Mancho, A. M., D. Small, and S. Wiggins, 2004: Computation of hyperbolic and their stable and unstable manifolds for oceanographic flows represented as data sets. _Nonlinear Processes in Geophysics_, **11**, 17-33. * Mancho et al. (2006) -- 2006: A comparison of methods for interpolating chaotic flows from discrete velocity data. _Computers & Fluids_, **35**, 416-428. * Mancho et al. (2007)Mancho, A. M., D. Small, S. Wiggins, and K. Ide, 2003: Computation of stable and unstable manifolds of hyperbolic trajectories in two-dimensional, aperiodically time-dependent vector fields. _Physica D_, **182**, 188-222. * Miller et al. (2002) Miller, P. D., L. J. Pratt, K. R. Helfrich, and C. K. R. T. Jones, 2002: Chaotic transport of mass and potential vorticity for an island recirculation. _J. Phys. Oceanogr._, **32(1)**, 80-102. * Millot (1999) Millot, C., 1999: Circulation in the western Mediterranean sea. _J. Mar. Sys._, **20**, 423-442. * Molcard et al. (2006) Molcard, A., A. Poje, and T. Ozgokmen, 2006: Directed drifter launch strategies for Lagrangian data assimilation using hyperbolic trajectories. _Ocean Modell._, **12**, 268-289. * Ngan and Shepherd (1997) Ngan, K. and T. G. Shepherd, 1997: Chaotic mixing and transport in Rossby wave critical layers. _J. Fluid. Mech._, **334**, 315-351. * Olson (1991) Olson, D., 1991: Rings in the ocean. _Annu. Rev. Earth Planet. Sci._, **19**, 283-311. * Pacanowski and Philander (1981) Pacanowski, R. C. and S. G. H. Philander, 1981: Parametrization of vertical mixing in numerical models of tropical oceans. _J. Phys. Oceanog._, **11(11)**, 1443-1451. * Pinot et al. (1995) Pinot, J. M., J. Tintore, and D. Gomis, 1995: Multivariate analysis of the surface circulation in the Balearic sea. _Prog. Oceanog._, **36**, 343-376. * Poje et al. (2002) Poje, A. C., M. Toner, A. D. Kirwan, and C. K. R. T. Jones, 2002: Drifter launch strategies based on Lagrangian templates. _J. Phys. Oceanogr._, **32(6)**, 1855-1869. * Press et al. (1992) Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, 1992: _Numerical Recipes in C_. Cambridge University Press. * Puillat et al. (2002) Puillat, I., I. Taupier-Letage, and C. Millot, 2002: Algerian eddies lifetime can be near 3 years. _J. Mar. Sys._, **31**, 245-259. * Rogerson et al. (1999) Rogerson, A. M., P. D. Miller, L. J. Pratt, and C. K. R. T. Jones, 1999: Lagrangian motion and fluid exchange in a barotropic meandering jet. _J. Phys. Oceanogr._, **29**, 2635-2655. * Rogerson et al. (2003)Rom-Kedar, V., A. Leonard, and S. Wiggins, 1990: An analytical study of transport, mixing, and chaos in an unsteady vortical flow. _J. Fluid Mech._, **214**, 347-394. * Rom-Kedar and Wiggins (1990) Rom-Kedar, V. and S. Wiggins, 1990: Transport in two-dimensional maps. _Arch. Rat. Mech. Anal._, **109**, 239-298. * Ruiz et al. (2002) Ruiz, S., J. Font, M. Emelianov, J. Isern-Fontanet, C. Millot, J. Salas, and I. Taupier-Letage, 2002: Deep structure of an open sea eddy in the Algerian Basin. _J. Mar. Sys._, **33-34**, 179-195. * Samelson and Wiggins (2006) Samelson, R. and S. Wiggins, 2006: _Lagrangian Transport in Geophysical Jets and Waves_. Springer-Verlag, New York. * Samelson (1992) Samelson, R. 
M., 1992: Fluid exchange across a meandering jet. _J. Phys. Oceanogr._, **22**, 431-440.
* Sanderson and Brassington (1998) Sanderson, B. and G. Brassington, 1998: Accuracy in the context of a control-volume model. _Atmosphere-Ocean_, **36**, 355-384.
* Schneider et al. (2005) Schneider, J., V. Fernandez, and E. Hernandez-Garcia, 2005: Leaking method approach to surface transport in the Mediterranean sea from a numerical ocean model. _J. Mar. Sys._, **57**, 111-126.
* Spivak (1965) Spivak, M., 1965: _Calculus on Manifolds_. Perseus Books, New York.
* Wiggins (1992) Wiggins, S., 1992: _Chaotic Transport in Dynamical Systems_. Springer-Verlag, New York.
* Wiggins (2005) -- 2005: The dynamical systems approach to Lagrangian transport in oceanic flows. _Annu. Rev. Fluid Mech._, **37**, 295-328.
* Yuan et al. (2001) Yuan, G.-C., L. J. Pratt, and C. K. R. T. Jones, 2001: Barrier destruction and Lagrangian predictability at depth in a meandering jet. _Dyn. Atmos. Oceans_, **35**, 41-61.
* Yuan et al. (2004) -- 2004: Cross-jet Lagrangian transport and mixing in a \(2\frac{1}{2}\) layer model. _J. Phys. Oceanogr._, **34**, 1991-2005.

Figure 2: Salinity front and manifolds at days 649 and 657. Salinity (in psu) is color coded as indicated by the color bar. The black circles denote the DHT in the West (H\({}_{W}\)) and the DHT in the East (H\({}_{E}\)). These are the DHTs whose unstable (red) and stable (blue) manifolds, respectively, are used to construct the Lagrangian boundary of the front. The blue dots are boundary intersection points, as described in the text.

Figure 3: Boundaries at days 649 and 657 constructed from a (finite length) segment of the unstable manifold of \(H_{W}\) and a (finite length) segment of the stable manifold of \(H_{E}\). The boundary intersection points are denoted by \(b_{649}\) and \(b_{657}\), respectively.

Figure 4: Turnstile lobes from day 649. There is precisely one intersection point between b\({}_{649}\) and b\({}_{657}^{-}\), which implies that there are two lobes in the turnstile. The bottom figure shows the evolved position of these lobes at day 657. Comparing the two figures, one can see that the turnstile lobe to the North of the boundary at day 649 has moved to the South of the boundary at day 657, and the turnstile lobe to the South of the boundary at day 649 has moved to the North of the boundary at day 657 (the location of the boundary itself can be seen in Fig. 3).

Figure 5: The spatio-temporal structure of cross frontal transport described by lobe dynamics. Salinity (in psu) is color coded as indicated by the color bar.

Figure 6: The mechanism of the disruption of the Lagrangian barrier by an eddy. Salinity (in psu) is coded in colors as indicated by the color bar. Red dots are ISPs of elliptic character. Green dots are saddle ISPs. Also shown as black dots are the DHTs \(H_{W}\) and \(H_{E}^{\prime}\).
We analyze with the tools of lobe dynamics the velocity field from a numerical simulation of the surface circulation in the Northwestern Mediterranean Sea. We identify relevant hyperbolic trajectories and their manifolds, and show that the transport mechanism known as the 'turnstile', previously identified in abstract dynamical systems and simplified model flows, is also at work in this complex and rather realistic ocean flow. In addition nonlinear dynamics techniques are shown to be powerful enough to identify the key geometric structures in this part of the Mediterranean. In particular the North Balearic Front, the westernmost part of the transition zone between saltier and fresher waters in the Western Mediterranean is interpreted in terms of the presence of a semipermanent \"Lagrangian barrier\" across which little transport occurs. Our construction also reveals the routes along which this transport happens. Topological changes in that picture, associated with the crossing by eddies and that may be interpreted as the breakdown of the front, are also observed during the simulation.
**Species-area relationship for power-law species abundance distribution**

**Haruyuki Irie and Kei Tokita\({}^{*}\)**

Information Media Center & Grad. Scl. Sci., Hiroshima University

\({}^{*}\) Cybermedia Center, Grad. Scl. Sci. & Grad. Scl. Frontier Biosci., Osaka University

## _Keywords:_ Species-area relationship; species abundance distribution; power-law SAD

## 1. Introduction

A fundamental question in ecology is how various species coexist in nature (Chave et al. 2002; Hutchinson 1959; Levins 1970; May 1972; Pacala & Tilman 1993; Rosenzweig 1995; Tokeshi 1999; Gaston & Blackburn 2000; etc.). The answers to this question are expected to provide great insights into both theories of biodiversity and effective nature conservation practices. Among the various explorations into the mechanisms of species coexistence, two community-level properties have been theoretically and quantitatively examined: species abundance distributions (SADs) (Motomura 1932; Fisher et al. 1943; Preston 1948; MacArthur 1957; May 1975; Sugihara 1980; Harte et al. 1999; Hubbell 2001) and species-area relationships (SARs) (Arrhenius 1921; Preston 1962a,1962b; MacArthur & Wilson 1967; May 1975; Pueyo 2006). A mechanism to reproduce such macroscopic ecological patterns using a microscopic model has been one of the central issues in recent community ecology (Durrett & Levin, 1996; Ney-Nifle & Mangel, 1999; Tokita & Yasutomi, 1999; Bastolla et al., 2001; Tokita & Yasutomi, 2003; Tokita, 2004; Lawson & Jensen, 2006; Tokita, 2006). Studies on these macroscopic patterns, therefore, not only give theoretical insight into large-scale ecosystems but also clarify the impacts of habitat fragmentation. Thus, they aid in efforts to devise long-term estimations and strategies for nature conservation.

SADs and SARs are mutually connected. Preston (1962) derived the power-law SAR from the lognormal SAD under an assumption called the canonical hypothesis, which states that the peak of the individuals curve coincides with the number of individuals in the most abundant species. May (1975) comprehensively studied various types of SAD, such as Preston's lognormal distribution, the uniform distribution, MacArthur's Broken Stick (exponential) distribution, Motomura's geometric series distribution and Fisher's logseries distribution, and demonstrated that the first three SADs lead to a power-law SAR and the latter two correspond to a logarithmic SAR. In addition to those pioneering works on SADs, power-law SADs have been reported (Margalef 1994; Pueyo 2006). Beyond SADs, power laws have also been observed in relationships between abundance and body size (Siemann et al. 1996). Power laws are, therefore, ubiquitous in biology. It is moreover known that, in general, a power-law distribution is obtained in the limit of large variance of the lognormal distribution. Here, we demonstrate that a power-law SAR can be mathematically derived from a power-law SAD without any additional assumption such as the canonical hypothesis. We also discuss an inverse problem: namely, what type of SAD can be obtained when we start from the power-law SAR?

## 2. Species abundance distribution and rank

Let \(\sigma(x){\rm d}x\) be the species abundance distribution (SAD) between the number of individuals \(x\sim x+{\rm d}x\). The total number of species \(S\) is obtained by integrating \(\sigma(x)\) from the minimum value of \(x=m\) to the maximum value of \(x=X\) as \[S=\int_{m}^{X}\sigma(x){\rm d}x.
\\tag{1}\\]The species rank of the number of individuals \\(x\\) is defined by \\[R(x)=\\int_{x}^{\\infty}\\sigma(x^{\\prime})\\mathrm{d}x^{\\prime}. \\tag{2}\\] The inverse function of eqn (2), \\(x_{R}\\), is the rank-abundance distribution; that is, the number of individuals of the \\(R\\)-th rank is \\(x_{R}\\). The first rank of species, i.e. the most abundant species, has the maximum number of individuals, \\(X\\); then we obtain \\[R(X)=1. \\tag{3}\\] This equation (3) is equivalent to the estimation used by Preston (1962a,1962b), and May (1975). Using the SAD \\(\\sigma(x)\\), the total population or the total number of individuals becomes \\[N=\\int_{m}^{X}x\\sigma(x)\\mathrm{d}x. \\tag{4}\\] Writing the number of individuals normalized by the minimum value \\(m\\) as \\(\\hat{x}=x/m,\\hat{N}=N/m\\), and \\(\\hat{\\sigma}(\\hat{x})\\mathrm{d}\\hat{x}=\\sigma(x)\\mathrm{d}x\\), we obtain \\[S=\\int_{1}^{\\hat{X}}\\hat{\\sigma}(\\hat{x})\\mathrm{d}\\hat{x},\\quad R=\\int_{\\hat {x}}^{\\infty}\\hat{\\sigma}(\\hat{x}^{\\prime})\\mathrm{d}\\hat{x}^{\\prime},\\quad \\hat{N}=\\int_{1}^{\\hat{X}}\\hat{x}\\hat{\\sigma}(\\hat{x})\\mathrm{d}\\hat{x}.\\] Hereafter, we omit the hat \\(\\hat{,}\\) and we consider \\(x,X,\\sigma\\), and \\(N\\) to be the normalized quantities. ## 3 Power-law SAD ### Power-law SAR We consider here an SAD which decays with power-law from the minimum number of individuals, \\(x=1\\) (the number normalized by the minimum value), to the maximum value, \\(x=X\\) as \\[\\sigma(x)=\\tilde{S}\\alpha x^{-(1+\\alpha)}, \\tag{5}\\] where \\(\\tilde{S}\\) is a constant. The rank function \\(R(x)\\) is calculated by integrating \\(\\sigma(x)\\) from \\(x\\) to infinity \\[R(x)=\\int_{x}^{\\infty}\\sigma(x^{\\prime})\\mathrm{d}x^{\\prime}=\\tilde{S}\\alpha \\left.\\frac{x^{\\prime-\\alpha}}{-\\alpha}\\right|_{x}^{\\infty}=\\tilde{S}x^{- \\alpha}. \\tag{6}\\] Using this, we find \\[\\tilde{S}\\equiv\\int_{1}^{\\infty}\\sigma(x)\\mathrm{d}x=\\int_{1}^{X}\\sigma(x) \\mathrm{d}x+\\int_{X}^{\\infty}\\sigma(x)\\mathrm{d}x=S+1. \\tag{7}\\] where eqns (1) and (3) are used for the last equality. The relation of the total number of species \\(S\\) and the maximum number of individuals \\(X\\) is obtained by eqns (3) and (6), \\(1=R(X)=\\tilde{S}X^{-\\alpha}\\), so we obtain \\[X^{\\alpha}=\\tilde{S}=S+1. \\tag{8}\\]By inserting eqn (5) into eqn (4), the total number of individuals becomes \\[N=\\int_{1}^{X}x\\sigma(x)\\mathrm{d}x=\\alpha\\frac{X-X^{\\alpha}}{1-\\alpha}. \\tag{9}\\] For very large \\(X\\), from eqn (9), \\(N\\simeq\\alpha X/(1-\\alpha)\\) and \\(\\tilde{S}\\propto N^{\\alpha}\\) for \\(\\alpha<1\\), and \\(N\\simeq\\alpha X^{\\alpha}/(\\alpha-1)\\) and \\(\\tilde{S}\\propto N\\) for \\(\\alpha>1\\). When the size of area \\(A\\) is proportional to the normalized total population \\(N\\), the SAR becomes \\(S=cA^{z}\\) with \\(z=\\alpha\\) for \\(\\alpha<1\\) and \\(z=1\\) for \\(\\alpha>1\\). For a finite value of \\(X\\), the SAR never becomes a simple power-law relation. In this case, let us start to define the SAR-exponent \\(\\zeta\\) as \\[\\zeta\\equiv\\frac{\\mathrm{d}\\ln S}{\\mathrm{d}\\ln A}. \\tag{10}\\] This can quantify the increasing rate of the species richness with increasing area size \\(A\\). For the power SAR, \\(S=cA^{z}\\), this exponent becomes \\(\\zeta=z\\), and for the logarithmic case, \\(S=K\\ln A+a\\), \\(\\zeta=1/(\\ln A+a/K)\\). This exponent is closely related to the persistence function \\(a(A)\\) introduced by Plotkin et al. 
(2000) as \\(\\zeta=-\\log a(A)\\). If the normalized total population \\(N\\) is proportional to the area size \\(A\\), this SAR exponent equals \\(\\zeta=\\mathrm{d}\\ln S/\\mathrm{d}\\ln N\\). In the case of the power-law SAD eqn (5), we obtain \\[\\zeta=\\frac{\\mathrm{d}y}{\\mathrm{d}\\mu}=\\frac{1-\\mathrm{e}^{-\\theta y}}{1+ \\theta-\\mathrm{e}^{-\\theta y}}=\\frac{1}{1+\\frac{\\theta}{1-\\mathrm{e}^{-\\theta y }}}, \\tag{11}\\] where \\(\\mu\\equiv\\ln N,\\ y\\equiv\\ln\\tilde{S}\\simeq\\ln S\\) using \\(S\\simeq\\tilde{S}\\gg 1\\) for a large community, and \\(\\theta\\equiv\\frac{1}{\\alpha}-1\\). For large \\(N\\) and \\(X\\) and in the limit of an infinite number of species (\\(S\\to\\infty\\)), \\(\\zeta\\) becomes constant: \\[\\zeta=\\begin{cases}\\alpha&(\\text{for }\\alpha<1)\\\\ 1&(\\text{for }\\alpha\\geq 1)\\end{cases}. \\tag{12}\\] In Fig. 1, we show the SAR exponent \\(\\zeta\\) v.s. \\(\\alpha\\). The \\(\\zeta\\) bends near \\(\\alpha=1\\). Figure 1: SAR exponent \\(\\zeta\\) v.s. the exponent \\(\\alpha\\) for the power-law SAD eqn (5) ### Logarithmic SAR For \\(\\alpha\\to 0\\), keeping \\(S_{0}\\equiv\\tilde{S}\\alpha\\) constant, the SAR becomes logarithmic, because for the SAD, \\(\\sigma(x)=S_{0}x^{-(1+\\alpha)}\\) and \\[S = \\int_{1}^{X}\\sigma(x)\\mathrm{d}x=\\left.-\\frac{S_{0}x^{-\\alpha}}{ \\alpha}\\right|_{1}^{X}=S_{0}\\frac{1-X^{-\\alpha}}{\\alpha}\\to S_{0}\\left.\\frac{ \\partial_{\\alpha}(1-\\exp(-\\alpha\\ln X))}{\\partial_{\\alpha}\\alpha}\\right|_{ \\alpha=0} \\tag{13}\\] \\[= S_{0}\\ln X,\\] \\[N = \\int_{1}^{X}x\\sigma(x)\\mathrm{d}x=S_{0}\\int_{1}^{X}x^{-\\alpha} \\mathrm{d}x=\\left.\\frac{S_{0}}{1-\\alpha}x^{1-\\alpha}\\right|_{1}^{X}=\\frac{S_{0 }}{1-\\alpha}(X^{1-\\alpha}-1)\\] (14) \\[\\rightarrow S_{0}X\\quad(\\mathrm{for}\\,\\alpha\\to 0),\\] where \\(\\partial_{\\alpha}\\) denotes the derivative in \\(\\alpha\\), and L'hopital's rule is used in taking the limit in eqn 13; then we derive \\[\\frac{S}{S_{0}}\\sim\\ln\\frac{N}{S_{0}}. \\tag{15}\\] This case \\(\\alpha\\to 0\\) corresponds to the continuous version of the geometric SAD (Motomura 1932, May 1975). In the geometric SAD, the rank-size distribution is \\(x_{i}=NC_{k}k(1-k)^{i-1}\\) with a constant \\(k\\), and the coefficient \\(C_{k}\\) being the normalization constant given by the condition \\(\\sum_{i=1}^{S}x_{i}=N\\), \\(C_{k}=1/[1-(1-k)^{S}]\\). The inverse function of this expression of \\(x_{i}\\) leads to the rank function by setting \\(i=R\\), \\[i=R(x)=1+K\\ln(NC_{k}k)-K\\ln x,\\quad\\left(K\\equiv\\frac{1}{\\ln\\frac{1}{1-k}} \\right). \\tag{16}\\] Differentiating \\(R(x)\\) by \\(x\\), we obtain \\[\\sigma(x)=-\\frac{\\mathrm{d}R(x)}{\\mathrm{d}x}=\\frac{K}{x}. \\tag{17}\\] If we express it by the normalized quantities, \\(X=x_{1}=NC_{k}k,1=x_{S}=NC_{k}k(1-k)^{S}\\). For large \\(N,S\\), and \\(X\\), the parameter \\(C_{k}\\) becomes \\(C_{k}\\to 1\\), and \\(X\\sim kN,S=K\\ln(kN)\\); finally, we obtain the logarithmic SAR. From eqn (17), we find that the SAD corresponds to the power-law SAD eqn (5) with \\(\\alpha\\to 0\\), and we can obtain the logarithmic SAR for the power-law SAD with \\(\\alpha\\to 0\\) in the continuous approximation as well. The relation between parameters \\(S_{0},\\ K\\) and \\(k\\) is \\(S_{0}=K\\sim 1/k\\). ## 4 From SAR to SAD In contrast to the previous section, we consider here the inverse problem of what kind of SAD can be derived from a given SAR. 
If the shape of the SAD is unchanged with increasing \\(S,X\\), and \\(N\\), we can write \\(\\sigma(x)=\\tilde{S}p(x)\\): only the coefficient \\(\\tilde{S}\\) varies with varying \\(S\\), but the function \\(p(x)\\) is unchanged. In this case, eqn (1) becomes \\[S=\\tilde{S}\\int_{1}^{X}p(x)\\mathrm{d}x, \\tag{18}\\] with the use of the normalized quantities (normalized by the minimum number of individuals). Defining the cumulative distribution function as \\[P(x)=\\int_{x}^{\\infty}p(x^{\\prime})\\mathrm{d}x^{\\prime}\\quad\\left(=\\frac{R(x )}{\\tilde{S}}\\right), \\tag{19}\\]we obtain \\[\\frac{1}{\\tilde{S}}=P(X)=\\int_{X}^{\\infty}p(x)\\mathrm{d}x, \\tag{20}\\] from eqns (2) and (3). From eqn (4), the normalized total population \\(N\\) divided by \\(\\tilde{S}\\) is \\[\\frac{N}{\\tilde{S}}=\\int_{1}^{X}xp(x)\\mathrm{d}x. \\tag{21}\\] If \\(\\int_{1}^{\\infty}p(x)\\mathrm{d}x=1\\), from eqns (18) and (20), we obtain \\[\\tilde{S}=\\tilde{S}\\int_{1}^{\\infty}p(x)\\mathrm{d}x=\\tilde{S}\\int_{1}^{X}p(x) \\mathrm{d}x+\\tilde{S}\\int_{X}^{\\infty}p(x)\\mathrm{d}x=\\tilde{S}+1\\simeq\\tilde {S}. \\tag{22}\\] Equations (20) is the relation between \\(X\\) and \\(S\\) through \\(P(X)\\) on the one hand, and eqn (21) is the relation between \\(N\\) and \\(S\\) on the other. The SAR is the relation between \\(N\\) and \\(S\\) if \\(N\\propto A\\), and if we put \\(N=SF(1/S)\\), from eqn (20), \\(N/S=F(P(X))\\). Therefore, if \\(X\\) is changed to \\(X+\\delta X\\), the variation of eqn (21) becomes \\[\\delta F(P(X))=-X\\delta P(X)+\\int_{1}^{X}x\\frac{\\partial p(x;\\beta)}{\\partial \\beta}\\mathrm{d}x\\delta\\beta, \\tag{23}\\] where \\(\\beta\\) expresses some parameters of the distribution, and we use \\(\\delta P(X)=-p(X)\\delta X\\). Because we consider the case in which the parameters are unchanged with varying \\(N,S\\), and \\(X\\), the last term vanishes. For example, if the SAR is power-law \\(S=cN^{z}\\), \\(N=c^{\\prime}S^{1/z}=c^{\\prime}P(X)^{-1/z}\\); that is, \\(F(P)=c^{\\prime}P^{1-1/z}\\). Substituting this expression into eqn (23), we obtain \\[c^{\\prime}\\left(1-\\frac{1}{z}\\right)P(X)^{-1/z}\\delta P(X)=-X\\delta P(X). \\tag{24}\\] Therefore, the exponent \\(z\\) must satisfy \\[z<1\\quad\\text{and}\\quad P(X)\\propto X^{-z}. \\tag{25}\\] This means that the tail of the SAD is a power-law of eqn (5), and the exponent becomes \\(\\alpha=z<1\\). ## 5. Discussion We obtained the power-law SAR for a power-law SAD using the classical method of R. May and Preston. We also considered the inverse problem of obtaining the power-law SAD from a given power-law SAR if the shape of the SAD is unchanged with varying total population, area size, and species richness. R. May (1975) obtained the power-law SAR for the lognormal SAD and the broken stick model SAD. In both cases, the parameters of SAD vary with increasing \\(N\\): in the lognormal SAD, the variance of the log of the number of individuals increases proportional to \\(\\ln X\\), and for the case of the broken stick model, in which the SAD becomes an exponential distribution, the SAR is obtained if the average of the number of individualsis proportional to the total number of species \\(S\\). The last term of eqn (23) does not vanish in either SAD. 
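As a numerical illustration of the forward relation derived in Section 3 (this sketch is not taken from the paper), the code below evaluates \(S\) and \(N\) from eqns (8)-(9) for a power-law SAD truncated at a finite maximum abundance \(X\), and estimates the local SAR exponent \(\zeta=\mathrm{d}\ln S/\mathrm{d}\ln N\); for large \(X\) it approaches \(\alpha\) when \(\alpha<1\) and 1 when \(\alpha>1\), in agreement with eqn (12).

```python
# Numerical check (illustrative) of the power-law SAD -> SAR relation,
# using eqns (8)-(9) for a finite cutoff X and alpha != 1.
import numpy as np

def S_and_N(alpha, X):
    """Species richness (from eqn 8, S_tilde = S + 1) and total abundance (eqn 9)."""
    S_tilde = X**alpha
    N = alpha * (X - X**alpha) / (1.0 - alpha)
    return S_tilde - 1.0, N

def sar_exponent(alpha, X, dlog=1e-3):
    """Local zeta = d ln S / d ln N, estimated from a small change of the cutoff X."""
    S1, N1 = S_and_N(alpha, X)
    S2, N2 = S_and_N(alpha, X * np.exp(dlog))
    return (np.log(S2) - np.log(S1)) / (np.log(N2) - np.log(N1))

for alpha in (0.3, 0.7, 1.5):
    print(alpha, sar_exponent(alpha, X=1e8))   # ~0.3, ~0.7, ~1.0
```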
In the case of a linear SAR, \\(S=cN\\), first we consider that this linearity holds in the limit of \\(S\\rightarrow\\infty\\) and assume that any deviation from the linearity becomes \\(N/S=1/c-bS^{-\\gamma}\\left(\\gamma>0\\right)\\), then from eqn (23) \\(b\\gamma P^{\\gamma-1}\\delta P=X\\delta P\\) and \\(P(X)\\propto X^{-1/(1-\\gamma)}\\). Hence, this case corresponds to the power-law SAD with \\(\\alpha=1/(1-\\gamma)>1\\). If the linearity holds completely for all \\(S\\) as \\(b=0\\) above, the variation of the maximum number of individuals becomes zero, \\(\\delta X=0\\), and this corresponds to the case in which \\(X\\) is constant, for example, a uniform SAD (May 1975), \\(\\sigma(x)\\propto\\delta(x)\\). We found in the previous section that the logarithmic SAR, \\(S=K\\ln N+a\\), is derived from the case of the SAD\\(\\propto 1/x\\), and the rank function defined by eqn (2) diverges, so we cannot use eqn (23) as a method for deriving SAD from SAR. Harte et al. (1999) obtained an SAD from a power-law SAR using the renormalization group technique for the case of existing self-similarity: the fraction of a species found in an area with a size \\(A\\), which is also found in \\(A/2\\), is independent of \\(A\\) and the abundance \\(x\\). But this SAD is far from the power-law SAD. Pueyo (2006) pointed out that there are possibilities for other shapes of SADs if the fraction depends on the abundance, and he also obtained power-law SAD from more straightforward discussion. ## Acknowledgements The authors thank T. Chawanya for fruitful discussions. The authors also thank the members of Department of Mathematical and Life Sciences, Graduate School of Science, Hiroshima University and the members of Large-scale Computational Science Division (Kikuchi lab), Cybermedia Center, Osaka University. The present study has been supported by Grants-in-Aid from MEXT, Japan (17540383 and Priority Areas \"Systems Genomics\") and by the research fund of Fukken Co., Ltd.. ## References * Arrhenius (1921) Arrhenius, O. (1921). Species and area. _J. Ecol._, 9, 95-99. * Bastolla et al. (2001) Bastolla, U., Lassig, M., Manrubia, S., and Valleriani, A. (2001). Diversity patterns from ecological models at dynamical equilibrium. _J. Theor. Biol._, 212, 11-34. * Chave et al. (2002) Chave, J., Muller-Landau, H., & Levin, S.A. (2002). Comparing classical community models, theoretical consequences for patterns of diversity. _Am. Nat._, 159, 1-23. * Durrett & Levin (1996) Durrett, R. and Levin, S. (1996). Spatial models for species-area curves. _J. Theor. Biol._, 179, 119-127. * Fisher et al. (1943) Fisher, R., Corbet, A., & Williams, C. (1943). The relation between the number of species and the number of individuals in a random sample of an animal population. _J. Anim. Ecol._, 12, 42-58. * Gaston & Blackburn (2000) Gaston, K. & Blackburn, T. (2000). _Patterns and processes in macroecology_. Blackwell Science. * Harte et al. (1999) Harte, J., Kinzig, A., & Green, J. (1999). Self-similarity in the distribution and abundance of species. _Science_, 284, 334-336. * Hubbell (2001) Hubbell, S. (2001). _The Unified Neutral Theory of Biodiversity and Biogeography_. Princeton Univ. Press. * Hubbell (2002)Hutchinson, G. (1959). Homage to santa rosalia, or why are there so many kinds of animals? _Am. Nat._, 93, 145-159. Lawson, D. and Jensen, H. (2006). The species-area relationship and evolution. _J. Theor. Biol._ Levins, R. (1970). Extinction. 
in _Some mathematical questions in biology_, Gerstenhaber, M., ed., vol.3, 75-108. American Mathematical Society. MacArthur, R. (1957). On the relative abundance of bird species. _Proc. Natl. Acad. Sci. U.S.A._, 43, 293-295. MacArthur, R. & Wilson, E. (1967). _The theory of island biogeography_. Princeton Univ. Press. Margalef, R. (1994). Through the looking glass, how marine phytoplankton appears through the microscope when graded by size and taxonomically sorted _Sci. Mar._, 58, 87-101. May, R.M. (1972). Will a large complex system be stable? _Nature_, 238, 413-414. May, R.M. (1975). Patterns of species abundance and diversity. In: _Ecology and evolution of communities_ (eds. Cody, M.L. & Diamond, J.M.). Belknap Press of Harvard University Press, pp. 81-120. Motomura, I. (1932). On the statistical treatment of communities. _Zool. Mag., Tokyo (in Japanese)_, 44, 379-383. Ney-Nifle, M. and Mangel, M. (1999). Species-area curves based on geographic range and occupancy. _J. Theor. Biol._, 196, 327-342. Pacala, S. & Tilman, D. (1993). Limiting similarity in mechanistic and spatial models of plant competition in heterogeneous environments. _Am. Nat._, 143, 222-257. Plotkin, J., Potts, M., Hubbell, S., & Nowak, M. (2000). Predicting species diversity in tropical forests. _Proc. Natl. Acad. Sci. U.S.A._, 97, 10850. Preston, F. (1948). The commonness and rarity of species. _Ecology_, 29, 254-283. Preston, F. (1962a). The canonical distribution of commonness and parity, Part i. _Ecology_, 43, 185-215. Preston, F. (1962b). The canonical distribution of commonness and parity, Part ii. _Ecology_, 43, 410-432. Pueyo, S. (2006). Self-similarity in species-area relationship and in species abundance distribution. _Oikos_, 112, 156-162. Rosenzweig, M. (1995). _Species Diversity in Space and Time_. Cambridge Univ. Press. Siemann, E., Tilman, D., & Haarstad, J. (1996). Insect species diversity, abundance and body size relationships. _Nature_, 380, 704-706. Sugihara, G. (1980). Minimal community structure, an explanation of species abundance patterns. _Am. Nat._, 116, 770-787. Tokesh, M. (1999). _Species coexistence - ecological and evolutionary perspectives_. Blackwell Science. Tokita, K. (2004). Species abundance patterns in complex evolutionary dynamics. _Phys. Rev. Lett._, 93, 178102. Tokita, K. (2006). Statistical mechanics of relative species abundance. _Ecol. Informatics_, 1 (Available online 7 July, 2006). Tokita, K. and Yasutomi, A. (1999). Mass extinction in a dynamical system of evolution with variable dimension. _Phys. Rev. E_, 60, 842-847. Tokita, K. and Yasutomi, A. (2003). Emergence of a complex and stable network in a model ecosystem with extinction and mutation. _Theor. Popul. Biol._, 63, 131-146.
We studied the mathematical relations between species abundance distributions (SADs) and species-area relationships (SARs) and found that a power-law SAR can generally be derived from a power-law SAD without a special assumption such as the \"canonical hypothesis\". In the present analysis, the SAR exponent is obtained as a function of the SAD exponent for a finite number of species. We also studied the inverse problem, from SARs to SADs, and found that a power-law SAD can be derived from a power-law SAR under the condition that the functional form of the corresponding SAD is invariant under changes in the number of species. We also discuss general relationships among lognormal SADs, the broken-stick model (exponential SADs), linear SARs and logarithmic SARs. These results suggest the existence of a common mechanism behind SADs and SARs, which could prove a useful tool for theoretical and experimental studies of biodiversity and species coexistence.
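The forward direction summarized above (a power-law SAD implying a power-law SAR) can also be illustrated numerically with a simple random-placement construction: draw abundances from a power-law SAD, assume individuals are scattered independently over the total area, and count the expected number of species present in a sub-area. The sketch below is only an illustration under those assumptions; the sampling scheme and all parameter values are choices of the example, not the analytical derivation of the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: S_total species with abundances drawn from a discrete
# power-law SAD, P(x) ~ x**(-alpha_sad), truncated at x_max.
S_total, alpha_sad, x_max = 2000, 1.5, 10**6
x = np.arange(1, x_max + 1)
p = x.astype(float) ** (-alpha_sad)
p /= p.sum()
abundances = rng.choice(x, size=S_total, p=p)

# Random placement: a species with abundance n occurs in an area fraction a with
# probability 1 - (1 - a)**n, so the expected count is S(a) = sum_i [1 - (1 - a)**n_i].
area = np.logspace(-4, 0, 30)
S = np.array([np.sum(1.0 - (1.0 - a) ** abundances) for a in area])

# Fit the SAR exponent z in S ~ A**z over intermediate areas, before S saturates at S_total.
sel = (area > 1e-3) & (area < 1e-1)
z, _ = np.polyfit(np.log(area[sel]), np.log(S[sel]), 1)
print(f"SAD exponent = {alpha_sad}, fitted SAR exponent z = {z:.3f}")
```

Because the species count saturates at S_total when the whole area is sampled, the fit is restricted to intermediate area fractions where the power-law regime holds.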
Mean-Field vs Monte-Carlo equation of state for the expansion of a Fermi superfluid in the BCS-BEC crossover L. Salasnich\\({}^{1,2}\\) and N. Manini\\({}^{2}\\) \\({}^{1}\\)CNISM and CNR-INFM, Unita di Padova, Dipartimento di Fisica \"Galileo Galilei\", Universita di Padova, Via Marzolo 8, 35122 Padova, Italy \\({}^{2}\\)Dipartimento di Fisica and CNISM, Universita di Milano, Via Celoria 16, 20133 Milano, Italy ## I Introduction Current experiments with a Fermi gas of \\({}^{6}\\)Li or \\({}^{40}\\)K atoms in two hyperfine spin states operate in the regime of deep Fermi degeneracy. The experiments are concentrated across a Feshbach resonance, where the s-wave scattering length \\(a_{F}\\) of the interatomic Fermi-Fermi potential varies from large negative to large positive values. In this way it has been observed a crossover from a Bardeen-Cooper-Schrieffer (BCS) superfluid to a Bose-Einstein condensate (BEC) of molecular pairs [1, 2, 3]. The bulk energy per particle of a two-spin attractive Fermi gas can be expressed [1, 2, 3, 4, 5] in the BCS-BEC crossover by the following equation \\[{\\cal E}(n)=\\frac{3}{5}\\;\\frac{\\hbar^{2}k_{F}^{2}}{2m}\\;f(y)\\;, \\tag{1}\\] where \\(k_{F}=(3\\pi^{2}n)^{1/3}\\) is the Fermi wave vector, \\(n\\) is the number density, and \\(f(y)\\) is a universal function of the inverse interaction parameter \\(y=(k_{F}a_{F})^{-1}\\), with \\(a_{F}\\) the Fermi-Fermi scattering length. The full behavior of the universal function \\(f(y)\\) is unknown but one expects that in the BCS regime (\\(y\\ll-1\\)) it has the following asymptotic behavior \\[f(y)=1+\\frac{10}{9\\pi\\,y}+O(\\frac{1}{y^{2}})\\;, \\tag{2}\\] as found by Yang _et al._[6, 7] in 1957. In this regime the system is a Fermi gas of weakly bound Cooper pairs where the superfluid gap energy \\(\\Delta\\) is exponentially small. Instead, in the unitarity limit (\\(y=0\\)) the energy per particle is proportional to that of a non-interacting Fermi gas and, from Monte-Carlo (MC) results [5], one finds \\[f(0)=0.42\\pm 0.02\\;. \\tag{3}\\] Finally, in the BEC regime (\\(y\\gg 1\\)) the system is a weakly repulsive Bose gas of molecules of mass \\(m_{M}=2m\\), density \\(n_{M}=n/2\\) and interacting with \\(a_{M}=0.6a_{F}\\) (from MC results [5] and 4-body theory [8]). In this BEC regime one expects the asymptotic expression \\[f(y)=\\frac{5a_{M}}{18\\pi a_{F}\\,y}+O(\\frac{1}{y^{5/2}})\\;, \\tag{4}\\] as found by Lee, Yang and Huang [9], again in year 1957. ## II Monte-Carlo vs Mean-Field We have recently shown [10] that the unknown universal function \\(f(y)\\) can be modelled by the analytical formula \\[f(y)=\\alpha_{1}-\\alpha_{2}\\arctan\\left(\\alpha_{3}\\;y\\;\\frac{\\beta_{1}+|y|}{\\beta _{2}+|y|}\\right)\\,. \\tag{5}\\] This formula has been obtained from Monte-Carlo (MC) simulations [5] and the asymptotic expressions. Table 1 of Ref. [10] reports the values of the interpolating value of \\(\\alpha_{1}\\), \\(\\alpha_{2}\\), \\(\\alpha_{3}\\), \\(\\beta_{1}\\) and \\(\\beta_{2}\\). The thermodynamical formula \\[\\mu(n)=\\frac{\\partial\\left(n{\\cal E}(n)\\right)}{\\partial n}=\\frac{\\hbar^{2}k_ {F}^{2}}{2m}\\left(f(y)-\\frac{y}{5}f^{\\prime}(y)\\right)\\;. \\tag{6}\\] relates the bulk chemical potential \\(\\mu\\) to the energy per particle \\({\\cal E}\\). We call Monte-Carlo equation of state (MC EOS) the equation of state \\(\\mu=\\mu(n,a_{F})\\) obtained from Eqs. (5) and (6). 
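Equations (5) and (6) translate directly into a short numerical routine for the MC EOS. The sketch below is hedged: \(\alpha_{1}=0.42\) follows from the quoted value of \(f(0)\), but the remaining fit coefficients live in Table 1 of Ref. [10] and are replaced here by illustrative placeholders, so the printed numbers are not the published fit.

```python
import numpy as np

# Eq. (5): f(y) = a1 - a2*arctan( a3*y*(b1+|y|)/(b2+|y|) ).
# a1 = f(0) = 0.42 is quoted in the text; a2, a3, b1, b2 below are PLACEHOLDERS,
# not the interpolating values of Table 1 in Ref. [10].
A1, A2, A3, B1, B2 = 0.42, 0.37, 1.0, 1.0, 1.0

def f(y):
    return A1 - A2 * np.arctan(A3 * y * (B1 + abs(y)) / (B2 + abs(y)))

def fprime(y, h=1e-5):
    # Central difference; adequate for this smooth parametrization.
    return (f(y + h) - f(y - h)) / (2.0 * h)

def mu_over_eF(y):
    # Eq. (6) in units of the local Fermi energy eF = hbar^2 kF^2/(2m):
    #   mu/eF = f(y) - (y/5) f'(y).
    return f(y) - 0.2 * y * fprime(y)

for y in (-5.0, 0.0, 5.0):
    print(f"y = {y:+.1f}:  f(y) = {f(y):.3f},  mu/eF = {mu_over_eF(y):.3f}")
```

Replacing the placeholder coefficients by the fitted values of Table 1 in Ref. [10] would turn this sketch into the actual MC EOS used in the text.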
Within the mean-field theory, the chemical potential \\(\\mu\\) and the gap energy \\(\\Delta\\) of the uniform Fermi gas are instead found by solving the following extended BCS (EBCS) equations [11, 12] \\[-\\frac{1}{a_{F}}=\\frac{2(2m)^{1/2}}{\\pi\\hbar^{3}}\\,\\Delta^{1/2}\\,\\int_{0}^{ \\infty}dy\\,y^{2}\\,\\left(\\frac{1}{y^{2}}-\\frac{1}{\\sqrt{(y^{2}-\\frac{\\mu}{ \\Delta})^{2}+1}}\\right)\\,, \\tag{7}\\] \\[n=\\frac{N}{V}=\\frac{(2m)^{3/2}}{2\\pi^{2}\\hbar^{3}}\\,\\Delta^{3/2}\\,\\int_{0}^{ \\infty}dy\\,y^{2}\\,\\left(1-\\frac{(y^{2}-\\frac{\\mu}{\\Delta})}{\\sqrt{(y^{2}- \\frac{\\mu}{\\Delta})^{2}+1}}\\right)\\;. \\tag{8}\\] By solving these two EBCS equations one obtains the chemical potential \\(\\mu\\) as a function of \\(n\\) and \\(a_{F}\\). Note that EBCS theory does not predict the correct BEC limit: the molecules have scattering length \\(a_{M}=2a_{F}\\) instead of \\(a_{M}=0.6a_{F}\\). We call EBCS equation of state (EBCS EOS) the mean-field equation of state \\(\\mu=\\mu(n,a_{F})\\) obtained from Eqs. (7) and (8). Obviously, our MC EOS is much closer than the EBCS EOS to the MC results obtained in Ref. [5] with a fixed node technique. For completeness, we observe that within the EBCS mean-field theory the condensate density \\(n_{0}\\) of the Fermi superfluid can be written in terms of a simple formula [11, 13], given by \\[n_{0}=\\frac{m^{3/2}}{8\\pi\\hbar^{3}}\\,\\Delta^{3/2}\\sqrt{\\frac{\\mu}{\\Delta}+\\sqrt{ 1+\\frac{\\mu^{2}}{\\Delta^{2}}}}\\;. \\tag{9}\\] In Ref. [11] we have found that the condensate fraction is exponentially small in the BCS regime (\\(y\\ll-1\\)) and goes to unity in the BEC regime (\\(y\\gg 1\\)). A very recent MC calculation [14] has confirmed this behavior but find at the unitarity limit a condensate fraction slightly smaller (\\(n_{0}/(n/2)=0.50\\)) than the mean-field expectation (\\(n_{0}/(n/2)=0.66\\)). ## III Time-dependent density functional for a Fermi superfluid We propose an action functional \\(A\\) which depends on the superfluid order parameter \\(\\psi({\\bf r},t)\\) as follows \\[A=\\int dt\\;d^{3}{\\bf r}\\;\\left\\{i\\hbar\\;\\psi^{*}\\partial_{t}\\psi+\\frac{c\\, \\hbar^{2}}{2m}\\psi^{*}\ abla^{2}\\psi-U|\\psi|^{2}-{\\cal E}(|\\psi|^{2})|\\psi|^{2 }\\right\\}. \\tag{10}\\] The term \\({\\cal E}\\) is the bulk energy per particle of the system, which is a function of the number density \\(n({\\bf r},t)=|\\psi({\\bf r},t)|^{2}\\). The Laplacian term \\(\\frac{c\\,\\hbar^{2}}{2m}\\psi^{*}\ abla^{2}\\psi\\) accounts for corrections to the kinetic energy due to spatial variations. In the BCS regime, where the Fermi gas is weakly interacting, the Laplacian term is phenomenological and it is called _von Weizsacker correction_[16]. In the BEC regime, where the gas of molecules is Bose condensed, the Laplacian term is due to the symmetry-breaking of the bosonic field operator and it is referred to as _quantum pressure_. Note that in the deep BEC regime our action functional reduces to the Gross-Pitaevskii action functional [15]. In our calculations we set the numerical coefficient \\(c\\) of the gradient term equal to unity (\\(c=1\\)), to obtain the correct quantum-pressure term in the BEC regime. In the BCS regime, a better phenomenological choice for the parameter \\(c\\) could be \\(c=1/3\\) as suggested by Tosi _et al._[17, 18], or \\(c=1/36\\) as suggested by Zaremba and Tso [19]. 
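Returning to the EBCS equations (7) and (8): they can be solved parametrically by fixing \(x_{0}=\mu/\Delta\), evaluating the two dimensionless integrals, and using the standard regularized-gap-equation reduction \(\Delta/\epsilon_{F}=(2/(3I_{2}))^{2/3}\) and \(y=1/(k_{F}a_{F})=(2/\pi)(\Delta/\epsilon_{F})^{1/2}I_{1}\). The sketch below assumes this conventional reduction of the prefactors (the sign convention is chosen so that the BCS side corresponds to \(y<0\)); it illustrates the procedure and is not the code used for the results quoted here.

```python
import numpy as np
from scipy.integrate import quad

def I1(x0):
    # Dimensionless gap-equation integral (integrand of Eq. (7), simplified).
    g = lambda t: 1.0 - t**2 / np.sqrt((t**2 - x0) ** 2 + 1.0)
    return quad(g, 0.0, np.inf, limit=200)[0]

def I2(x0):
    # Dimensionless number-equation integral (integrand of Eq. (8)).
    g = lambda t: t**2 * (1.0 - (t**2 - x0) / np.sqrt((t**2 - x0) ** 2 + 1.0))
    return quad(g, 0.0, np.inf, limit=200)[0]

def ebcs_point(x0):
    """Return (y, mu/eF, Delta/eF) for a given x0 = mu/Delta."""
    i1, i2 = I1(x0), I2(x0)
    delta_over_eF = (2.0 / (3.0 * i2)) ** (2.0 / 3.0)
    y = (2.0 / np.pi) * np.sqrt(delta_over_eF) * i1   # y = 1/(kF aF); y < 0 on the BCS side
    return y, x0 * delta_over_eF, delta_over_eF

# Large positive x0 lies on the BCS side (y < 0), negative x0 on the BEC side (y > 0).
for x0 in (5.0, 2.0, 0.86, 0.0, -2.0):
    y, mu, delta = ebcs_point(x0)
    print(f"x0 = {x0:+5.2f}:  y = {y:+.3f},  mu/eF = {mu:+.3f},  Delta/eF = {delta:.3f}")
```

The sample point \(x_{0}=0.86\) should land close to the unitarity limit \(y=0\), which makes it a convenient consistency check of the reduction.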
For the initial confining trap, we consider an axially symmmetric harmonic potential \\[U({\\bf r},t)=\\frac{m}{2}\\left[\\bar{\\omega}_{\\rho}(t)^{2}(x^{2}+y^{2})+\\bar{\\omega }_{z}(t)^{2}z^{2}\\right], \\tag{11}\\] where \\(\\bar{\\omega}_{j}(t)=\\omega_{j}\\Theta(-t)\\), with \\(j=1,2,3=\\rho,\\rho,z\\) and \\(\\Theta(t)\\) the step function, so that, after the external trap is switched off at \\(t>0\\), the Fermi cloud performs a free expansion. The Euler-Lagrange equation for the superfluid order parameter \\(\\psi({\\bf r},t)\\) is obtained by minimizing the action functional \\(A\\). This leads to a time-dependent nonlinear Schrodinger equation (TDNLSE): \\[i\\hbar\\;\\partial_{t}\\psi=\\left(-\\frac{\\hbar^{2}}{2m}\ abla^{2}+U+\\mu(|\\psi|^{2 })\\right)\\psi\\;. \\tag{12}\\] The nonlinear term \\(\\mu\\) is the bulk chemical potential of the system given by the MC EOS or the EBCS EOS. As noted previously, in the deep BEC regime this TDNLSE reduces to the familiar Gross-Pitaevskii equation. From the TDNLSE one deduces the Landau's hydrodynamics equations of superfluids at zero temperature by setting \\(\\psi({\\bf r},t)=\\sqrt{n({\\bf r},t)}e^{iS({\\bf r},t)}\\), \\({\\bf v}({\\bf r},t)=\\frac{\\hbar}{m}\ abla S({\\bf r},t)\\), and neglecting the term \\((-\\hbar^{2}\ abla^{2}\\sqrt{n})/(2m\\sqrt{n})\\), which would vanish in the uniform regime. These hydrodynamics equations are \\[\\partial_{t}n+\ abla\\cdot(n{\\bf v}) = 0\\;, \\tag{13}\\] \\[m\\;\\partial_{t}{\\bf v}+\ abla\\left(\\mu(n)+U({\\bf r},t)+\\frac{1}{ 2}mv^{2}\\right) = 0\\;. \\tag{14}\\] These superfluid equations differ from the hydrodynamic equations of a normal fluid in the superfluid velocity field being irrotational, i.e. \\(\ abla\\wedge{\\bf v}=0\\), so that the vorticity term \\({\\bf v}\\wedge(\ abla\\wedge{\\bf v})\\) does not appear in Eq. (14). By using the superfluid hydrodynamics equations, the stationary state in the trap is given by the Thomas-Fermi profile \\(n_{0}({\\bf r})=\\mu^{-1}\\left(\\bar{\\mu}-U({\\bf r},0)\\right)\\). Here \\(\\bar{\\mu}\\), the chemical potential of the inhomogeneous system, is fixed by the normalization condition \\(N=\\ and the velocity \\[{\\bf v}({\\bf r},t)=\\left(x\\frac{\\dot{b}_{1}(t)}{b_{1}(t)},y\\frac{\\dot{b}_{2}(t)}{b _{2}(t)},z\\frac{\\dot{b}_{3}(t)}{b_{3}(t)}\\right), \\tag{16}\\] we obtain three differential equations for the scaling variables \\(b_{j}(t)\\), with \\(j=1,2,3=\\rho,\\rho,z\\). The dynamics is well approximated by evaluating the scaling differential equations at the center (\\({\\bf r}={\\bf 0}\\)) of the cloud. In this case the variables \\(b_{j}(t)\\) satisfy the local scaling equations (LSE) \\[\\ddot{b}_{j}(t)+\\bar{\\omega}_{j}(t)^{2}\\;b_{j}(t)=\\frac{\\omega_{j}^{2}}{\\prod \\limits_{k=1}^{3}b_{k}(t)}\\;\\frac{\\frac{\\partial\\mu}{\\partial n}\\left(\\bar{n} (t)\\right)}{\\frac{\\partial\\mu}{\\partial n}\\left(n_{0}({\\bf 0})\\right)}\\;, \\tag{17}\\] where \\(\\bar{n}(t)=n_{0}({\\bf 0})/\\prod\\limits_{k=1}^{3}b_{k}(t)\\). Clearly, the LSE depend critically on the EOS \\(\\mu=\\mu(n,a_{F})\\). The TDNLSE is solved by using a finite-difference Crank-Nicolson predictor-corrector method, that we developed to solve the time-dependent Gross-Pitaevskii equation [20]. Observe that imaginary-time integration of Eq. (12) by the Crank-Nicolson method generates the ground-state of Bose condensates in a ring and in a double-well [21] much more accurately than the steepest descent method used in the past. 
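For a constant effective polytropic index \(\gamma\) (so that \(\mu\propto n^{\gamma}\) and the ratio of \(\partial\mu/\partial n\) factors in Eq. (17) reduces to \((b_{1}b_{2}b_{3})^{1-\gamma}\)), the LSE for a free expansion can be integrated with a few lines of code. The sketch below is an illustration under that constant-\(\gamma\) assumption, with a simple symplectic (velocity-Verlet) step; only the trap anisotropy \(\lambda=0.34\) is taken from the text, while the time step, integration time and the unit choice \(\omega_{\rho}=1\) are assumptions of the example.

```python
import numpy as np

def expand(omega, gamma, t_final, dt=1e-4):
    """Integrate Eq. (17) for a free expansion (trap switched off at t > 0),
    assuming mu ~ n**gamma so that the ratio of d(mu)/dn factors equals
    (b1*b2*b3)**(1 - gamma). omega = (om_x, om_y, om_z); returns b(t_final)."""
    b = np.ones(3)
    v = np.zeros(3)                     # db/dt, zero at release
    acc = lambda b: omega**2 / np.prod(b) ** gamma
    a = acc(b)
    for _ in range(int(round(t_final / dt))):   # velocity-Verlet step
        b = b + v * dt + 0.5 * a * dt**2
        a_new = acc(b)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return b

# Cigar-shaped trap with anisotropy lambda = om_z/om_rho = 0.34 (frequencies in units of om_rho).
lam = 0.34
omega = np.array([1.0, 1.0, lam])
for gamma, label in ((1.0, "BEC limit"), (2.0 / 3.0, "unitarity/ideal-gas-like")):
    b = expand(omega, gamma, t_final=5.0)
    aspect = lam * b[0] / b[2]          # radial/axial cloud-size ratio
    print(f"gamma = {gamma:.3f} ({label}): aspect ratio at t = 5/om_rho -> {aspect:.2f}")
```

The aspect ratio is formed as \(\lambda\,b_{\rho}/b_{z}\) because the initial Thomas-Fermi radii scale as \(1/\omega_{j}\); in the full calculation \(\gamma\) varies along the crossover instead of being held fixed.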
The simple LSE are instead solved by using a standardleap-frog symplectic algorithm, succesfully applied to investigate the order-to-chaos transition in spatially homogeneous field theories [22]. In Ref. [23] we have compared our time-dependent theory with the available experimental data. Moreover, we have compared the full TDNLSE with the LSE by using both MC EOS and EBCS EOS. We have found that, using the same EOS, the TDNLSE gives results always very close to the LSE ones. Instead, we have found some differences between MC EOS and EBCS EOS. Figure 1 reports the aspect ratio and the released energy of a \\({}^{6}\\)Li cloud after 1.4 ms expansion from the trap realized at ENS-Paris [2]. In the experiment of Ref. [2] the free expansion of \\(7\\cdot 10^{4}\\) cold \\({}^{6}\\)Li atoms has been studied for different values of \\(y=(k_{F}a_{F})^{-1}\\) around the Feshbach resonance (\\(y=0\\)). Unfortunately, in this experiment the thermal component is not negligible and thus the comparison with the zero-temperature theory is not fully satisfactory. Figure 1 compares the experimental data of Ref. [2] with the LSE based on both MC and EBCS equation of state. This figure shows that the aspect ratio predicted by the two zero-temperature theories exceeds the finite-temperature experimental results. This is not surprising because the thermal component tends to hide the hydrodynamic expansion of the superfluid. On the other hand, the released energy of the atomic gas is well described by the two zero-temperature theories, and the mean-field theory seems more accurate, also probably due to the thermal component. In Fig. 1 the released energy is defined as in Ref. [2], i.e. on the basis of the rms widths of the cloud. By energy conservation, the actual released energy is instead given by \\[E_{\\rm rel}=\\int d^{3}{\\bf r}\\,{\\cal E}[n_{0}({\\bf r})]\\,n_{0}({\\bf r})\\;. \\tag{18}\\] It is straightforward to obtain an analytical expression for the released energy assuming a power-law dependence \\(\\mu=C\\;n^{\\gamma}\\) for the chemical potential (polytropic equation of state) and writing \\({\\cal E}[n_{0}({\\bf r})]\\simeq\\frac{3}{5}\\mu[n_{0}({\\bf r})]=\\frac{3}{5}\\frac{ \\bar{\\mu}}{n_{0}(0)^{\\gamma}}n_{0}({\\bf r})^{\\gamma}\\) where \\(\\gamma\\) is the effective polytropic index, obtained as the logarithmic derivative of the chemical potential \\(\\mu\\)[10], namely \\[\\gamma(y)=\\frac{n}{\\mu}\\frac{\\partial\\mu}{\\partial n}=\\frac{\\frac{2}{3}f(y)- \\frac{2y}{5}f^{\\prime}(y)+\\frac{y^{2}}{15}f^{\\prime\\prime}(y)}{f(y)-\\frac{y}{5 }f^{\\prime}(y)}\\;. \\tag{19}\\] In this way one finds the simple approximate formula \\[E_{\\rm rel}=\\frac{3}{5}N\\epsilon_{F}\\,\\frac{2(1+\\gamma(y))}{2+5\\gamma(y)}\\,f( y)\\;, \\tag{20}\\] where \\(\\epsilon_{F}=\\hbar^{2}k_{F}({\\bf 0})^{2}/(2m)\\) is the Fermi chemical potential at the center of the trap, with \\(k_{F}({\\bf 0})=\\left(3\\pi^{2}n_{0}({\\bf 0})\\right)^{1/3}\\). Fig. 2 shows that this simple approximate formula which neglects all details of the initial aspect ratio produces fair semi-quantitative agreement, in particular in the BEC regime, with the actual released energy obtained by solving numerically Eq. (18). During the free expansion of the cloud the aspect ratio in the BCS regime (\\(y\\ll-1\\)) is measurably different from the one of the BEC regime (\\(y\\gg 1\\)). In Ref. 
[23] we have predicted an interesting effect: starting with the same aspect ratio of the cloud, at small times (\\(t\\omega_{H}\\lesssim 3\\)) the aspect ratio is larger in the BCS region; at intermediate times (\\(t\\omega_{H}\\simeq 4\\)) the aspect ratio is enhanced close to the unitarity limit (\\(y=0\\)); eventually at large times (\\(t\\omega_{H}\\gtrsim 5\\)) the aspect ratio becomes larger in the BEC region. Here \\(\\omega_{H}=(\\omega_{\\perp}^{2}\\omega_{z})^{1/3}\\) is the geometric average of the trapping frequencies. This prediction is based on the numerical simulation of the LSE shown in Fig. 2, where we plot the aspect ratio of the expanding cloud as a function of the inverse interaction parameter \\(y=1/(k_{F}a_{F})\\) at successive time intervals. At \\(t=0\\) the aspect ratio equals the trap anisotropy \\(\\lambda=0.34\\). Of course the detailed sequence of deformations depends on the experimental conditions and in particular on the initial anisotropy, but the qualitative trend of an initially faster reversal on the BCS side, later suppressed by the BEC gas, is predicted for the expansion of any initially cigar-shaped interacting fermionic cloud. ## IV Conclusions We have shown that the free expansion of a Fermi superfluid in the BCS-BEC crossover, that we simulate in a hydrodynamic scheme at zero temperature, reveals interesting features. We have found that the Monte-Carlo equation of state and the mean-field equation of state give similar results for the free expansion of a two-spin Fermi gas. The two theories are in reasonable agreement with the experimental data, which, however, are affected by the presence of a thermal component. Our Monte-Carlo equation of state and time-dependent density functonal can be used to study many other interesting properties; for instance, the collective oscillations of the Fermi cloud [24, 25, 10], the Fermi-Bose mixtures across a Feshbach resonance of the Fermi-Fermi scattering length, and nonlinear effects like Bose-Fermi solitons and shock waves. Finally, we observe that new experimental data on collective oscillations [26] suggest that the Monte-Carlo equation of state is more reliable than the mean-field equation of state. ## References * [1] K.M. O'Hara _et al._ Science, **298**, 2179 (2002). * [2] T. Bourdel _et al._, Phys. Rev. Lett. **93**, 050401 (2004). * [3] Zwierlein _et al._, Phys. Rev. Lett. **92**, 120403 (2004); Zwierlein _et al._, Phys. Rev. Lett. **94**, 180401 (2005). * [4] A. Perali, P. Pieri, G.C. Strinati, Phys. Rev. Lett. **93**, 100404 (2004). * [5] G.E. Astrakharchik _et al._, Phys. Rev. Lett. **93**, 200404 (2004). * [6] K. Huang and C.N. Yang, Phys. Rev. **105**, 767 (1957). * [7] D.T. Lee and C.N. Yang, Phys. Rev. **105**, 1119 (1957). * [8] D.S. Petrov, C. Salomon, and G.V. Shlyapnikov, Phys. Rev. Lett. **93**, 090404 (2004). * [9] D.T. Lee, K. Huang, and C.N. Yang, Phys. Rev. **106**, 1135 (1957). * [10] N. Manini and L. Salasnich, Phys. Rev. A **71**, 033625 (2005). * [11] L. Salasnich, N. Manini, and A. Parola, Phys. Rev. A **72**, 023621 (2005). * [12] M. Marini, F. Pistolesi, and G.C. Strinati, Eur. Phys. J. B **1**, 151 (1998). * [13] G. Ortiz and J. Dukelsky, Phys. Rev. A **72**, 043611 (2005). * [14] G. E. Astrakharchik _et al._, Phys. Rev. Lett. **95**, 230405 (2005). * [15] L. Salasnich, Int. J. Mod. Phys. B **14**, 1 (2000); L. Salasnich, Phys. Rev. A **61**, 015601 (2000). * [16] C.F. von Weizsacker, Z. Phys. **96**, 431 (1935). * [17] N.H. March and M.P. Tosi, Ann. Phys. 
(NY) **81**, 414 (1973). * [18] P. Vignolo, A. Minguzzi and M.P. Tosi, Phys. Rev. Lett. **85**, 2850 (2000). * [19] E. Zaremba and H.C. Tso, Phys. Rev. A **49**, 8147 (1994). * [20] E. Cerboneschi, R. Mannella, E. Arimondo, and L. Salasnich, Phys. Lett. A **249**, 495 (1998); L. Salasnich, A. Parola, and L. Reatto, Phys. Rev. A **64**, 023601 (2001); L. Salasnich, A. Parola, and L. Reatto, J. Phys. B: At. Mol. Opt. Phys. **35**, 3205 (2002); L. Salasnich, Phys. Rev. A **70**, 053617 (2004). * [21] L. Salasnich, A. Parola, and L. Reatto, Phys. Rev. A **59**, 2990 (1999); L. Salasnich, A. Parola, and L. Reatto, Phys. Rev. A **60**, 4171 (1999). * [22] L. Salasnich, Phys. Rev. D **52**, 6189 (1995); L. Salasnich, Mod. Phys. Lett. A **12**, 1473 (1997). * [23] G. Diana, N. Manini, and L. Salasnich, Phys. Rev. A **73**, 065601 (2006). * [24] S. Stringari, Europhys. Lett. **65**, 749 (2004). * [25] H. Hu, A. Minguzzi, Xia-Ji Liu, and M.P. Tosi, Phys. Rev. Lett. **93**, 190403 (2004). * [26] R. Grimm, Univ. Innsbruck, private e-communication, July 2006. Figure 1: Cloud of \\(N=7\\cdot 10^{4}\\)\\({}^{6}\\)Li atoms after 1.4 ms expansion from the trap realized at ENS-Paris [2] with anisotrpy \\(\\lambda=\\omega_{z}/\\omega_{\\rho}=0.34\\). Squares: experimental data. Solid lines: numerical simulation with LSE and MC EOS; dashed lines: numerical simulation with LSE and EBCS EOS. The released energy is normalized as \\(E_{\\rm rel}/(N\\epsilon_{F})\\). Figure 2: Comparison of the approximate expression of Eq. (20) (dotted line) to the actual released energy defined in Eq. (18) for the conditions of the ENS-Paris experiment [2] (initial anisotrpy \\(\\lambda=\\omega_{z}/\\omega_{\\rho}=0.34\\), \\(N=7\\cdot 10^{4}\\)) (solid line). Both calculations assume the MC EOS. The actual released energy based on Eq. (18) and the EBCS EOS is also reported (dashed line). Figure 3: Four successive frames of the aspect ratio of the \\({}^{6}\\)Li Fermi cloud as a function of \\(y=(k_{F}a_{F})^{-1}\\). At \\(t=0\\) the Fermi cloud is cigar-shaped with a constant aspect ratio equal to the initial trap anisotropy \\(\\lambda=\\omega_{z}/\\omega_{\\rho}=0.34\\). Solid lines: numerical simulation with LSE and MC EOS; dashed lines: numerical simulation with LSE and EBCS EOS.
The equation of state (EOS) of a Fermi superfluid is investigated in the BCS-BEC crossover at zero temperature. We discuss the EOS based on Monte-Carlo (MC) data and asymptotic expansions and the EOS derived from the extended BCS (EBCS) mean-field theory. Then we introduce a time-dependent density functional, based on the bulk EOS and Landau's superfluid hydrodynamics with a von Weizsacker-type correction, to study the free expansion of the Fermi superfluid. We calculate the aspect ratio and the released energy of the expanding Fermi cloud showing that MC EOS and EBCS EOS are both compatible with the available experimental data of \\({}^{6}\\)Li atoms. We find that the released energy satisfies an approximate analytical formula that is quite accurate in the BEC regime. For an anisotropic droplet, our numerical simulations show an initially faster reversal of anisotropy in the BCS regime, later suppressed by the BEC fluid.
# Recent astrophysical and accelerator based results on the Hadronic Equation of State Ch. Hartnack\\({}^{1}\\), H. Oeschler\\({}^{2}\\) and Jorg Aichelin\\({}^{1}\\) \\({}^{1}\\)SUBATECH, Laboratoire de Physique Subatomique et des Technologies Associees University of Nantes - IN2P3/CNRS - Ecole des Mines de Nantes 4 rue Alfred Kastler, F-44072 Nantes Cedex 03, France \\({}^{2}\\)Institut fur Kernphysik, Darmstadt University of Technology, 64289 Darmstadt, Germany ###### pacs: 25.75.Dw How much energy is needed to compress nuclear matter? The answer to this question, the determination of \\(E/A(\\rho,T)\\), the energy/nucleon in nuclear matter in thermal equilibrium as a function of the density \\(\\rho\\) and the temperature \\(T\\), has been considered since many years as one of the most important challenges in nuclear physics. This quest has been dubbed \"search for the nuclear equation of state (EoS)\". Only at equilibrium density, \\(\\rho_{0}\\), the energy per nucleon \\(E/A(\\rho=\\rho_{0},T=0)=-16\\) MeV is known by extrapolating the Weizsacker mass formula to infinite matter. Standard ab initio many body calculations do not allow for a determination of \\(E/A(\\rho,T)\\) at energies well above the saturation density because the low density many body expansion schema ( Bruckner G- matrix) breaks down and therefore the number of contributing terms is exploding. Therefore in nuclear reaction physics another strategy has been developed. Theory has identified experimental observables in nuclear reaction physics or in astrophysics which are sensitive to \\(E/A(\\rho,T)\\). Unfortunately these observables depend as well on other quantities which are either unknown or little known (like cross sections with resonance in the entrance channel) or difficult to assess theoretical (like the resonance lifetimes in hot and dense matter). It was hoped that comparing many observables for different systems and different energies with the theoretical predictions these unknown or little known quantities can be determined experimentally and that finally the dependence of the observables on \\(E/A(\\rho,T)\\) can be isolated. In astrophysics the nuclear EoS plays an important role in heavy ion reactions [1], in the mass-radius relation of neutron stars [3; 4] and in supernovae explosions [5]. For a recent review on the topics we refer to [6]. Unfortunately, as in nuclear reaction physics, there are always other little known processes or properties which have to be understood before the nuclear EoS dependence can be isolated. We discuss here as example of the mass-radius relation of neutron stars. Fig. 1 shows the neutron star masses in units of the solar mass for different types of binaries. These masses are concentrated at around 1-1.5 solar masses. Fig. 2 shows a theoretical prediction of the mass-radius relation for neutron stars using different EoS. Since the nature of the interior of neutron stars is not known (in contradiction to what the name suggests) one may suppose that it consists of hadrons or quarks. Shown if it is \"soft\" of hadron there are speculations that there is a \\(K^{-}\\) or a \\(\\pi^{-}\\) condensate or this effect is a hyperons in equilibrium with nuclear resonances. The same picture of quarks. Little known color-flavor locked quark phases may modify the EoS at densities which are reached in the interior of the neutron star. For a detailed discussion of the phenomena we refer to ref. [6]. 
We see that the observed masses of neutron stars are compatible with almost all other hadron based EoS as long as the radius is unknown. Radii, however, are very difficult to measure. Because similar problems appear also for other observables, up to recently the astrophysical observations of nuclear stars did not help much to carry the nuclear EoS. This situation has changed dramatically in the last year with the observation of a neutron star with a mass of two solar masses [7]. If this observation is finally confirmed the mass/radius prediction of fig. 2 excludes that the interior of a neutron star is made by quarks [4], Fraight nuclear EoS, which will be defined below, will be excluded. This is confirmed by the calculation of Maieron [8] which uses a MIT bag model or a color dielectric models EoSto describe the quark phase. Baldo [9] argue that this conclusion may be premature because it depends too much on the equation of state of the quark phase. If one replaces the MIT bag model equation of state by that of the Nambu - Jona-Lasinio (NJL) Lagrangian under certain conditions (no color conducting phase) larger masses may be obtained. The standard NJL Lagrangian lacks, however, repulsion and in view of the momentum cut-off, necessary to regularize the loop integrals, and the coupling constants in the diquark sector, which are not uniquely determined by the Fierz transformation, quantitative prediction at high quark densities are difficult in this approach even if qualitative agreement with pQCD calculation can be found [10]. In heavy ion reactions three observables have been identified which are - according to theoretical calculations - sensitive to \\(E/A(\\rho,T)\\) at densities larger than \\(\\rho_{0}\\): (i) the strength distribution of giant isoscalar monopole resonances [11; 12], (ii) the in-plane sidewards flow of nucleons in semi-central heavy ion reaction at energies between 100 \\(A\\) MeV and 400 \\(A\\) MeV [13] and (iii) the production of \\(K^{+}\\) mesons in heavy ion reactions at energies around 1 \\(A\\) GeV [14]. We will discuss these approaches below. Although theory has predicted these effects qualitatively, a quantitative approach is confronted with two challenges: a) The nucleus is finite and surface effects are not negligible, even for the largest nuclei and b) in heavy ion reactions the reacting system does not come into equilibrium. Therefore complicated non-equilibrium transport theories have to be employed and the conclusion on the nuclear EoS can only be indirect. (i) The study of monopole vibrations has been very successful, but the variation in density is minute. Therefore, giant monopole resonances are sensitive to the energy which is necessary to change the density of a cold nucleus close to the equilibrium point \\(\\rho_{0}\\). According to theory the vibration frequency depends directly on the force which counteracts to any deviation from the equilibrium and therefore to the potential energy. The careful analysis of the isoscalar monopole strength in non-relativistic [11] and relativistic mean field models has recently converged [12] due to a new parametrization of the relativistic potential. These calculations allow now for the determination of the compressibility \\(\\kappa=9\\rho^{2}\\frac{d^{2}E/A(\\rho,T)}{\\rho_{0}}|_{\\rho=\\rho_{0}}\\) which measures the curvature of \\(E/A(\\rho,T)\\) at the equilibrium point. The values found are around \\(\\kappa=240\\) MeV and therefore close to what has been dubbed \"soft EoS\". 
It agrees as well with the prediction of nuclear matter calculations based on nucleon-nucleon scattering data [15] which give a value of about \\(\\kappa=250\\) MeV. (ii) If the overlap zone of projectile and target becomes considerably compressed in semi-central heavy-ion collisions, an in-plane flow is created due to the transverse pressure on the baryons outside of the interaction region with this flow being proportional to the transverse pressure. In order to obtain a noticeable compression, the beam energy has to be large as compared to the Fermi energy of the nucleons inside the nuclei and hence a beam energy of at least 100 \\(A\\) MeV is necessary. Compression goes along with excitation and therefore the compressional energy of excited nuclear matter is encoded in the in-plane flow. It has recently been demonstrated [16] that transport theories do not Figure 1: Measured and estimated masses of neutron stars in radio binary pulsars and in x-ray accreting binaries. Error bars are 1\\(\\sigma\\). Vertical dotted lines show average masses of each group (16.2 M\\({}_{\\odot}\\), 1.34 M\\({}_{\\odot}\\) and 1.56 M\\({}_{\\odot}\\)); dashed vertical lines indicate inverse error weighted average masses (1.48 M\\({}_{\\odot}\\), 1.41 M\\({}_{\\odot}\\) and 1.34 M\\({}_{\\odot}\\)). The figure is taken from ref [4] agree quantitatively yet and therefore former conclusions [17] have to be considered as premature. (iii) The third method is most promising for the study of nuclear matter at high densities [18] and I will discuss it in detail this talk. \\(K^{+}\\) mesons produced far below the \\(NN\\) threshold cannot be created in first-chance collisions between projectile and target nucleons. They do not provide sufficient energy even if one includes the Fermi motion. The effective energy for the production of a \\(K^{+}\\) meson in the \\(NN\\) center of mass system is 671 MeV as in addition to the mass of the kaon a nucleon has to be converted into a \\(\\Lambda\\) to conserve strangeness. Before nucleons can create a \\(K^{+}\\) at these subthreshold energies, they have to accumulate energy. The most effective way to do this is to convert first a nucleon into a \\(\\Delta\\) and to produce in a subsequent collision a \\(K^{+}\\) meson via \\(\\Delta N\\to NK^{+}\\Lambda\\). Two effects link the yield of produced \\(K^{+}\\) with the density reached in the collision and the stiffness of the EoS. If less energy is needed to compress matter (i) more energy is available for the \\(K^{+}\\) production and (ii) the density which can be reached in these reactions will be higher. Higher density means a smaller mean free path and therefore the \\(\\Delta\\) will interact more often increasing the probability to produce a \\(K^{+}\\) and hence it has a lower chance to decay before it interacts. Consequently the \\(K^{+}\\) yield depends on the compressional energy. At beam energies around 1 \\(A\\) GeV matter becomes highly excited and mesons are formed. Therefore this process tests highly excited hadronic matter. At beam energies \\(>\\) 2 \\(A\\) GeV first-chance collisions dominate and this sensitivity is lost. For the third approach different transport theories have converged. This was possible due a special workshop at the ECT* in Trento/Italy where the authors of the different codes have discussed their approaches in detail and common solutions have been advanced. The results of this common effort have been published in [19]. 
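The subthreshold kinematics quoted above can be checked directly from the particle masses: the minimal invariant mass for \(NN\to N\Lambda K^{+}\) exceeds two nucleon masses by about 671 MeV, which corresponds to a laboratory kinetic-energy threshold of roughly 1.6 GeV for a free nucleon-nucleon collision, so beams of about 1 \(A\) GeV are indeed far below threshold. The short sketch below reproduces these numbers; it is a kinematic check only, not part of the transport calculation.

```python
# Kinematics of NN -> N Lambda K+ (all energies and masses in MeV).
M_N, M_LAMBDA, M_K = 938.9, 1115.7, 493.7   # average nucleon, Lambda, K+ masses

sqrt_s_threshold = M_N + M_LAMBDA + M_K      # minimal invariant mass of the final state
excess_cm = sqrt_s_threshold - 2.0 * M_N     # energy needed on top of two nucleon masses
# Fixed target: s = 2*M_N**2 + 2*M_N*(T_lab + M_N)  =>  T_lab = (s - 4*M_N**2)/(2*M_N)
t_lab_threshold = (sqrt_s_threshold**2 - 4.0 * M_N**2) / (2.0 * M_N)

print(f"required c.m. energy above 2 m_N : {excess_cm:6.1f} MeV")
print(f"free NN lab kinetic threshold    : {t_lab_threshold / 1000.0:6.2f} GeV")
```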
As an example we display here the \\(K^{+}\\)\\(p_{t}\\) spectra at midrapidity obtained in the different transport theories at different energies. Because with each \\(K^{+}N\\) rescattering collision the slope of the \\(K^{+}\\) spectra changes the slope of the \\(p_{t}\\) spectra encodes not only the \\(K^{+}\\) momentum distribution at the time point of production but also the distribution of the number of rescatterings. It is therefore all but trivial. Without the \\(KN\\) potential the slopes are almost identical and even the absolute yield which depends on a correct modeling of the Fermi motion of the nucleons is very similar. If we include the \\(KN\\) interaction which is not identical in the different approaches (see [19]) we still observe a very similar slope Figure 2: Mass-radius diagram for neutron stars. Black (green) curves are for normal matter (SQM) EoS [for definitions of the labels, see \\(|4|\\)]. Regions excluded by general relativity (GR), causality and rotation constraints are indicated. Contours of radiation radii \\(R\\,_{\\infty}\\) are given by the orange curves. The figure is from [4]. for most of the programs. Due to this progress the simulation programs can now be used to extract yet theoretically inaccessible information like the hadronic EOS [18]. Two independent experimental observables, the ratio of the excitation functions of the \\(K^{+}\\) production for Au+Au and for C+C [20; 21], and a new observable, the dependence on the number of participants of the \\(K^{+}\\) yield show that nucleons interact with a potential which corresponds to a compressibility of \\(\\kappa\\leq 200\\) MeV in infinite matter in thermal equilibrium. This value extracted for hadronic matter at densities around 2.5 times the normal nuclear matter density is very similar to that extracted at normal nuclear matter density. A key point here is to demonstrate that the different implementation of yet unsolved physical questions, like the \\(N\\Delta\\to K^{+}\\Lambda N\\) cross section, the \\(KN\\) interaction as well as the life time of the nuclear resonances in the hadronic environment do not affect this conclusion. In order to determine the energy which is necessary to compress infinite nuclear matter in thermal equilibrium by heavy ion reactions in which no equilibrium is obtained one chooses the following strategy: The transport theory calculates the time evolution of the quantal particles described by Gaussian wave functions. The time evolution is given by a variational principle and the equations one obtains for this choice of the wave function are identical to the classical Hamilton equations where the classical two-body potential is replaced by the expectation value of a 3 parameter Skyrme potential. The Skyrme potential is a simple approximation to the real part of the Bruckner \\(G\\)-matrix which is too complicated for performing simulations of heavy ion collisions. For this potential the potential energy in infinite nuclear matter is calculated. To determine the nuclear EoS we average this (momentum-dependent) two-body potential over the momentum distribution of a given temperature \\(T\\) and add to it the kinetic energy. Expressed as a function of the density we obtain the desired nuclear EoS \\(E/A(\\rho,T)\\). Our two-body potential has five parameters which are fixed by the binding energy of infinite nuclear matter at \\(\\rho_{0}\\), the compressibility \\(\\kappa\\) and the optical potential which has been measured in pA reactions [22]. 
Once the parameters are fixed we use the two-body potential with these parameters in the transport calculation. There is an infinite number of two-body potentials which give the same EoS because the range of the potential does not play a role in infinite matter. The nuclear surface measured in electron scattering on nuclei fixes the range, however, quite well. The uncertainty which remains is of little relevance here (in contradiction to the calculation of the in-plane flow which is very sensitive to the exact surface properties of the nuclei and hence to the range of the potential). We employ the Isospin Quantum Molecular Dynamics (IQMD) with momentum dependent forces. All details of the standard version of the program may be found in [22]. The standard version is supplemented for this calculation with all inelastic cross sections which are relevant for the \(K^{+}\) production. For details of these cross sections we refer to [23]. Unless specified differently, the change of the \(K^{+}\) mass due to the kaon-nucleon (\(KN\)) interaction according to \(m^{K}(\rho)=m_{0}^{K}\,(1-0.075\frac{\rho_{s}}{\rho_{0}})\) is taken into account, in agreement with recent self-consistent calculations of the spectral function of the \(K^{+}\)[24]. The \(\Lambda\) potential is 2/3 of the nucleon potential, assuming that the s quark is inert. The calculations reproduce the experimental data quite well as can be seen in fig. 4 where we compare the experimental and theoretical \(K^{+}\) spectra for different centrality bins for 1.48 A GeV Au+Au. This figure shows as well the influence of the \(K^{+}N\) potential which modifies not only the overall multiplicity of \(K^{+}\) due to the increase of the in medium mass but also the spectral form, confirming the complexity of the transverse momentum spectrum.

Figure 3: Final \(K^{+}\) transverse momentum distribution at \(b\)=1 fm, \(|y_{cm}|<0.5\) and with an enforced \(\Delta\) lifetime of 1/120 MeV (top row without, bottom row with KN potential) in the different approaches [19].
We see, first of all in the top row, that the excitation function of the yield ratio depends on the potential parameters (hard EoS: \\(\\kappa=380\\) MeV, thin lines and solid symbols, soft EoS: \\(\\kappa=200\\) MeV, thick lines and open symbols) in a quite sensible way and - even more essential - that the prediction in the standard version of the simulation (squares) for a soft and a hard EoS potential differ much more than the experimental uncertainties. The calculation of Fuchs et al. [21] given in the same graph, agrees well with our findings. This observation is, as said, not sufficient to determine the potential parameters uniquely because in these transport theories several not precisely known processes are encoded. For these processes either no reliable theoretical prediction has been advanced or the different approaches yield different results for the same observable. Therefore, it is necessary to verify that these uncertainties do not render our conclusion premature. There are 3 identified uncertainties: the \\(\\sigma_{N\\Delta\\rightarrow\\kappa+}\\) cross section, the density dependence of the \\(K^{+}N\\) potential and the lifetime of \\(\\Delta\\) in matter if produced in a collisions with a sharp energy of two scattering partners. We discuss now how these uncertainties influence our results: Figure 5, top, shows as well the influence of the unknown \\(N\\Delta\\to K^{+}\\Lambda N\\) cross section on this ratio. We confront the standard IQMD option (with cross sections for \\(\\Delta N\\) interactions from Tsushima et al. [23]) with another option, \\(\\sigma(N\\Delta)=3/4\\sigma(NN)\\)[26], which is based on isospin arguments and has been frequently employed. Both cross sections differ by up to a factor of ten and change significantly the absolute yield of \\(K^{+}\\) in heavy ion reactions but do not change the shape of the ratio. Figure 4: \\(K^{+}\\) spectra for different centrality bins as compared with (preliminary) experimental data from the KaoS collaboration The middle part demonstrates the influence of the kaon-nucleon potential which is not precisely known at the densities obtained in this reaction. The uncertainties due to the \\(\\Delta\\) life time are discussed in the bottom part. Both calculations represent the two extreme values for this lifetime [23] which is important because the disintegration of the \\(\\Delta\\) resonance competes with the \\(K^{+}\\) production. Thus we see that these uncertainties do not influence the conclusion that the excitation function of the ratio is quite different for a soft EoS potential as compared to a hard one and that the data of the KaoS collaboration are only compatible with the soft EoS. The only possibility to change this conclusions is the assumption that the cross sections are explicitly density dependent in a way that the increasing density is compensated by a decreasing cross section. It would have a strong influence on other observables which are presently well predicted by the IQMD calculations. The conclusion that nuclear matter is best described by a soft EoS, is supported by another variable, the dependence of the \\(K^{+}\\) yield on the number of participating nucleons \\(A_{\\rm part}\\). The prediction of the IQMD simulations in the standard version for this observable is shown in Fig. 6. 
The top of the figure shows the kaon yield \\(M_{K^{+}}/A_{\\rm part}\\) for Au+Au collisions at \\(1.5~{}A\\) GeV as a function of the participant number \\(A_{\\rm part}\\) for a soft EoS using different options: standard version (soft, \\(KN\\)), calculations without kaon-nucleon interaction (soft, no \\(KN\\)) and with the isospin based \\(N\\Delta\\to N\\Lambda K^{+}\\) cross section (soft, \\(KN\\), \\(\\sigma^{*}\\)). These calculations are confronted with a standard calculation using the hard EoS potential. The scaling of the kaon yield with the participant number can be parameterized by \\(M_{K^{+}}=A_{\\rm part}^{\\alpha}\\). All calculations with a soft EoS show a rather similar value of \\(\\alpha\\) - although the yields are very different - while the calculation using a hard equation shows a much smaller value. Therefore we can conclude that also the slope value \\(\\alpha\\) is a rather robust observable. The bottom of Fig. 6 shows that \\(\\alpha\\) depends smoothly on the compressibility \\(\\kappa\\) of the EoS. Whether we include the momentum dependence of the nucleon nucleon interaction (with mdi) or not (without mdi) does not change the value of \\(\\alpha\\) as long as the compressibility is not changed - in stark contrast to the in-plane flow. Again, the measured centrality dependence for Au+Au at \\(1.5~{}A\\) GeV from the KaoS collaboration [28], \\(\\alpha=1.34\\pm 0.16\\), is only compatible Figure 5: Comparison of the measured excitation function of the ratio of the \\(K^{+}\\) multiplicities per mass number \\(A\\) obtained in Au+Au and in C+C reactions [Ref. [20]] with various calculations. The use of a hard EoS is denoted by thin (blue) lines, a soft EoS by thick (red) lines. The calculated energies are given by the symbols, the lines are drawn to guide the eye. On top, two different versions of the \\(N\\Delta\\to K^{+}\\Lambda N\\) cross sections are used. One is based on isospin arguments [26], the other is determined by a relativistic tree level calculation [27]. The calculation by Fuchs [21] are shown as dotted lines. Middle: IQMD calculations with and without \\(KN\\) potential are compared. Bottom: The influence of different options for the life time of \\(\\Delta\\) in matter is demonstrated. with a soft Eo S potential. That the dependence of the \\(K^{+}\\) yield on the participants number is a clear signal for the EoS also at other beam energies as can be seen from fig. 6 right which displays the excitation function of the fit exponent \\(\\alpha\\). Data which follow the curve for a soft equation of state will soon be published [29] In conclusion, we have shown That earthbound experiments have now reached a precision which allows to determine the hadronic EoS. The experimental results for the two observables which are most sensitive to the hadronic EOS are only compatible with theory if the hadronic EoS is soft. This conclusion is robust. Little known input quantities do not influence this conclusion. The observation of a neutron star with twice the solar mass seems to contradict this conclusion. It points toward a hard hadronic EoS. Both results are quite new and one has not to forget that we are comparing non equilibrium heavy ion reactions where about the same number of protons and neutrons are present and where mesons and baryon resonances are produced with cold neutron matter in equilibrium. To solve this contradiction is certainly a big challenge for both communities in the near future. 
## References * (1) Sasa Ratkovic, Madappa Prakash, James M.Lattimer Submitted to ApJ, astro-ph/0512136 * (2) Ph. Podsiadlowski, J. D. M. Dewi, P. Lesaffre, J. C. Miller, W. G. Newton, J. R. Stone Mon.Not.Roy.Astron.Soc. 361 (2005) 1243-1249, astro-ph/0506566 * (3) J.M. Lattimer, M. Prakash Science Vol. 304 2004 (536-542) * (4) J.M. Lattimer and M. Prakash, ApJ **550** (2001) 426; A.W. Steiner, M. Prakash and J.M. Lattimer, Phys. Lett. **B486** (2000) 239; M. Alford and S. Reddy, Phys. Rev. D. **67** (2003) 074024. * (5) H.-Th. Janka, R. Buras, F.S. Kitaura Joyanes, A. Marek, M. Rampp Procs. 12th Workshop on Nuclear Astrophysics, Ringberg Castle, March 22-27, 2004, astro-ph/0405289 * (6) Fridolin Weber, Prog.Part.Nucl.Phys. 54 (2005) 193-288 * (7) David J. Nice, Eric M.Spiaver, Ingrid H. Stairs, Oliver Loehmer, Axel Jessner, Michael Kramer, James M. Cordes, submitted to ApJ, astro-ph/0508050 * (8) C. Maieron, M. Baldo, G.F. Burgio, H.-J. Schulze Phys.Rev. D70 (2004) 043010 * (9) M. Baldo, M. Buballa, G.F. Burgio, F. Neumann, M. Oertel, H.-J. Schulze Phys.Lett. B562 (2003) 153-160 Figure 6: Dependence of the \\(K^{+}\\) scaling on the nuclear EoS. We present this dependence in form of \\(M_{K^{+}}=A_{\\rm part}^{a}\\). On the top the dependence of \\(M_{K^{+}}/A_{\\rm part}\\) as a function of \\(A_{\\rm part}\\) is shown for different options: a “hard” EoS with \\(KN\\) potential (solid line), the other three lines show a “soft” EoS, without \\(KN\\) potential and \\(\\sigma(N\\Delta)\\) from Tsushima [27] (dotted line), with \\(KN\\) potential and the same parametrization of the cross section (dashed line) and with \\(KN\\) potential and \\(\\sigma(N\\Delta)=3/4\\sigma(NN)\\). On the bottom the fit exponent \\(\\alpha\\) is shown as a function of the compressibility for calculations with momentum-dependent interactions (mdi) and for static interactions (dashed line)[22]. On the right hand side we compare the energy dependence of the fit exponent \\(\\alpha\\) for the two EoS. * (10) F. Gastineau, R. Nebauer, J.Aichelin, Phys.Rev. C65 (2002) 045204 * (11) D.H. Youngblood, H.L. Clark and Y.-W. Lui, Phys. Rev. Lett **84** (1999) 691. * (12) J. Piekarewicz, Phys. Rev. **C69** (2004) 041301 and references therein. * (13) H. Stocker and W. Greiner, Phys. Reports **137**, 278 (1986) and references therein. * (14) J. Aichelin and C.M. Ko, Phys. Rev. Lett. **55** (1985) 2661. * (15) A. Akmal, V. R. Pandharipande and D.G. Ravenhall, Phys. Rev. **C58** (1998) 1804 * (16) A. Andronic et al., Phys. Lett. **B612** (2005) 173. * (17) P. Danielewicz, R. Lacey, W.G. Lynch, Science 298 (2002) 1592. * (18) Ch. Hartnack, H. Oeschler, J. Aichelin, Phys.Rev.Lett. 96 (2006) 012302 * (19) E.E. Kolomeitsev et al., J.Phys. G31 (2005) S741 * (20) C. Sturm et al., (KaoS Collaboration), Phys. Rev. Lett. **86** (2001) 39. * (21) C. Fuchs et al., Phys. Rev. Lett **86** (2001) 1794. * (22) C. Hartnack et al., Eur. Phys. J. **A1** (1998) 151. * (23) E. Kolomeitsev et al., accepted J. of Phys. G, nucl-th/0412037. * (24) C.L. Korpa and M.F.M. Lutz, submitted to Heavy Ion Physics, nucl-th/0404088. * (25) C. Hartnack and J. Aichelin, Proc. Int. Workshop XXVIII on Gross prop. of Nucl. and Nucl. Excit., Hirschegg, January 2000 edt. by M. Buballa, W. Norenberg, B. Schafer and J. Wambach; and to be published in Phys. Rep. * (26) J. Randrup and C.M. Ko, Nucl. Phys. **A 343**, 519 (1980). * (27) K. Tsushima et al., Phys. Lett. **B337** (1994) 245; Phys. Rev. **C 59** (1999) 369. * (28) A. 
Forster et al., (KaoS Collaboration), Phys. Rev. Lett. **31** (2003) 152301; J. Phys. G. **30** (2004) 393; A. Forster, Ph.D. thesis, Darmstadt University of Technology, 2003. * (29) KaoS collaboration, private communication and to be published. # Hadronic Matter is Soft Ch. Hartnack\\({}^{1}\\), H. Oeschler\\({}^{2}\\) and Jorg Aichelin\\({}^{1}\\) \\({}^{1}\\)SUBATECH, Laboratoire de Physique Subatomique et des Technologies Associees University of Nantes - IN2P3/CNRS - Ecole des Mines de Nantes 4 rue Alfred Kastler, F-44072 Nantes Cedex 03, France \\({}^{2}\\)Institut fur Kernphysik, Darmstadt University of Technology, 64289 Darmstadt, Germany ###### pacs: 25.75.Dw Since many years one of the most important challenges in nuclear physics is to determine \\(E/A(\\rho,T)\\), the energy/nucleon in nuclear matter in thermal equilibrium as a function of the density \\(\\rho\\) and the temperature \\(T\\). Only at equilibrium density, \\(\\rho_{0}\\), the energy per nucleon \\(E/A(\\rho=\\rho_{0},T=0)=-16\\) MeV is known by extrapolation of the Weizsacker mass formula to infinite matter. This quest has been dubbed \"search for the nuclear equation of state (EoS)\". Modelling of neutron stars or supernovae have not yet constrained the nuclear equation of state [1]. Therefore, the most promising approach to extract \\(E/A(\\rho,T)\\) are heavy ion reactions in which the density of the colliding nuclei changes significantly. Three principal experimental observables have been suggested in the course of this quest which carry - according to theoretical calculations - information on the nuclear EoS: (i) the strength distribution of giant isoscalar monopole resonances [2; 3], (ii) the in-plane sidewards flow of nucleons in semi-central heavy ion reaction at energies between 100 \\(A\\) MeV and 400 \\(A\\) MeV [4] and (iii) the production of \\(K^{+}\\) mesons in heavy ion reactions at energies around 1 \\(A\\) GeV [5]. Although theory has predicted these effects qualitatively, a quantitative approach is confronted with two challenges: a) The nucleus is finite and surface effects are not negligible, even for the largest nuclei and b) in heavy ion reactions the reacting system does not come into equilibrium. Therefore complicated non-equilibrium transport theories have to be employed and the conclusion on the nuclear equation of state can only be indirect. (i) The study of monopole vibrations has been very successful, but the variation in density is minute. Therefore, giant monopole resonances are sensitive to the energy which is necessary to change the density of a cold nucleus close to the equilibrium point \\(\\rho_{0}\\). According to theory the vibration frequency depends directly on the force which counteracts to any deviation from the equilibrium and therefore to the potential energy. The careful analysis of the isoscalar monopole strength in non-relativistic [2] and relativistic mean field models has recently converged [3] due to a new parametrization of the relativistic potential. These calculations allow now for the determination of the compressibility \\(\\kappa=9\\rho^{2\\,d^{2}E/A(\\rho,T)}|_{\\rho=\\rho_{0}}\\) which measures the curvature of \\(E/A(\\rho,T)\\) at the equilibrium point. The values found are around \\(\\kappa=240\\) MeV and therefore close to what has been dubbed \"soft equation of state\". 
(ii) If the overlap zone of projectile and target becomes considerably compressed in semi-central heavy-ion collisions, an in-plane flow is created due to the transverse pressure on the baryons outside of the interaction region with this flow being proportional to the transverse pressure. In order to obtain a noticeable compression, the beam energy has to be large as compared to the Fermi energy of the nucleons inside the nuclei and hence a beam energy of at least 100 \\(A\\) MeV is necessary. Compression goes along with excitation and therefore the compressional energy of excited nuclear matter is encoded in the in-plane flow. It has recently been demonstrated [6] that transport theories do not agree quantitatively yet and therefore former conclusions [7] have to be considered as premature. (iii) The third method is most promising for the study of nuclear matter at high densities and is subject of this Letter. \\(K^{+}\\) mesons produced far below the \\(NN\\) threshold cannot be created in first-chance collisions between projectile and target nucleons. They do not provide sufficient energy even if one includes the Fermi motion. The effective energy for the production of a \\(K^{+}\\) meson in the \\(NN\\) center of mass system is 671 MeV as in addition to the mass of the kaon a nucleon has to be converted into a \\(\\Lambda\\) to conserve strangeness. Before nucleons can create a \\(K^{+}\\) at these subthreshold energies, they have to accumulate energy. The most effective way to do this is the conversion of a nucleon into a \\(\\Delta\\) and to produce in a subsequent collision a \\(K^{+}\\) meson via \\(\\Delta N\\to NK^{+}\\Lambda\\). Two effects link the yield of produced \\(K^{+}\\) with the density reached in the collision and the stiffness of the EoS. If less energy is needed to compress matter (i) more energy is available for the \\(K^{+}\\) production and (ii) the density which can be reached in these reactions will be higher. Higher density means a smaller mean free path and therefore the \\(\\Delta\\) will interact more often increasing the probability to produce a \\(K^{+}\\) and hence, it has a lower chance to decay before it interacts. Consequently the \\(K^{+}\\) yield depends on the compressional energy. At beam energies around 1 \\(A\\) GeV matter becomes highly excited and mesons are formed. Therefore this process tests highly excited hadronic matter. At beam energies \\(>\\) 2 \\(A\\) GeV first-chance collisions dominate and this sensitivity is lost. In this Letter we would like to report that for the third approach different transport theories have converged. Two independent experimental observables, the ratio of the excitation functions of the \\(K^{+}\\) production for Au+Au and for C+C [12; 14], and a new observable, the dependence on the number of participants of the \\(K^{+}\\) yield show that nucleons interact with a potential which corresponds to a compressibility of \\(\\kappa\\leq 200\\) MeV in infinite matter in thermal equilibrium. This value extracted for hadronic matter at densities around 2.5 times the normal nuclear matter density is very similar to that extracted at normal nuclear matter density. A key point of this paper is to demonstrate that the different implementation of yet unsolved physical questions, like the \\(N\\Delta\\to K^{+}\\Lambda N\\) cross section, the \\(KN\\) interaction as well as the life time of the nuclear resonances in the hadronic environment do not affect this conclusion. 
In order to determine the energy which is necessary to compress infinite nuclear matter in thermal equilibrium by heavy ion reactions in which no equilibrium is obtained one chooses the following strategy: The transport theory calculates the time evolution of the quantal particles described by Gaussian wave functions. The time evolution is given by a variational principle and the equations one obtains for this choice of the wave function are identical to the classical Hamilton equations where the classical two-body potential is replaced by the expectation value of the real part of the Bruckner \\(G\\)-matrix. For this potential the potential energy in infinite nuclear matter is calculated. To determine the nuclear equation of state we average this (momentum-dependent) two-body potential over the momentum distribution of a given temperature \\(T\\) and add to it the kinetic energy. Expressed as a function of the density we obtain the desired nuclear equation of state \\(E/A(\\rho,T)\\). Our two-body potential has five parameters which are fixed by the binding energy of infinite nuclear matter at \\(\\rho_{0}\\), the compressibility \\(\\kappa\\) and the optical potential which has been measured in pA reactions. Once the parameters are fixed we use the two-body potential with these parameters in the transport calculation. There is an infinite number of two-body potentials which give the same equation of state because the range of the potential does not play a role in infinite matter. The nuclear surface measured in electron scattering on nuclei fixes the range, however, quite well. The uncertainty which remains is of little relevance here (in contradiction to the calculation of the in-plane flow which is very sensitive to the exact surface properties of the nuclei and hence to the range of the potential). We employ the Isospin Quantum Molecular Dynamics (IQMD) [9] approach with the following equations of motion: \\[\\dot{\\vec{p}}_{i}=-\\frac{\\partial\\langle H\\rangle}{\\partial\\vec{r}_{i}}\\quad \\mbox{and}\\quad\\dot{\\vec{r}}_{i}=\\frac{\\partial\\langle H\\rangle}{\\partial \\vec{p}_{i}}\\;, \\tag{1}\\] where the expectation value of the total Hamiltonian reads as \\(\\langle H\\rangle=\\langle T\\rangle+\\langle V\\rangle\\) with \\[\\langle T\\rangle = \\sum_{i}\\frac{p_{i}^{2}}{2m_{i}}\\] \\[\\langle V\\rangle = \\sum_{i}\\sum_{j>i}\\int f_{i}(\\vec{r},\\vec{p},t)\\,V^{ij}f_{j}( \\vec{r}\\,^{\\prime},\\vec{p}\\,^{\\prime},t)\\,d\\vec{r}\\,d\\vec{r}\\,^{\\prime}d\\vec {p}\\,d\\vec{p}\\,^{\\prime} \\tag{2}\\] and \\(f_{i}\\) being the Gaussian Wigner density of nucleon \\(i\\). The baryon-potential consists of the real part of the \\(G\\)-Matrix which is supplemented by the Coulomb interaction between the charged particles. The former can be further subdivided in a part containing the contact Skyrme-type interaction only, a contribution due to a finite range Yukawa-potential, and a momentum-dependent part with \\[V^{ij} = V^{ij}_{\\rm Skyrme}+V^{ij}_{\\rm Yuk}+V^{ij}_{\\rm mdi}+V^{ij}_{\\rm Coul} \\tag{3}\\] \\[= t_{1}\\delta(\\vec{x}_{i}-\\vec{x}_{j})+t_{2}\\delta(\\vec{x}_{i}- \\vec{x}_{j})\\rho^{\\gamma-1}(\\vec{x}_{i})+\\] \\[t_{3}\\frac{\\exp\\{-|\\vec{x}_{i}-\\vec{x}_{j}|/\\mu\\}}{|\\vec{x}_{i}- \\vec{x}_{j}|/\\mu}+\\frac{Z_{i}Z_{j}e^{2}}{|\\vec{x}_{i}-\\vec{x}_{j}|}+\\] \\[t_{4}{\\rm ln}^{2}(1+t_{5}(\\vec{p}_{i}-\\vec{p}_{j})^{2})\\delta( \\vec{x}_{i}-\\vec{x}_{j})\\] with \\(Z_{i},Z_{j}\\) the charges of the baryons \\(i\\) and \\(j\\). For more details we refer to Ref. [9]. 
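In infinite, isospin-symmetric matter at \(T=0\) the Skyrme part of Eq. (3) reduces to a potential energy per nucleon of the form \(\frac{\alpha}{2}u+\frac{\beta}{\gamma+1}u^{\gamma}\) with \(u=\rho/\rho_{0}\), on top of the Fermi-gas kinetic term. Fixing \(\alpha\) and \(\beta\) by the saturation conditions \(E/A(\rho_{0})=-16\) MeV and \(d(E/A)/d\rho|_{\rho_{0}}=0\) then determines the compressibility \(\kappa=9\rho^{2}\,d^{2}(E/A)/d\rho^{2}|_{\rho_{0}}\) for each choice of the exponent \(\gamma\). The sketch below illustrates this bookkeeping; the saturation density \(\rho_{0}=0.16\) fm\(^{-3}\) and the exponents \(\gamma=2\) ("hard") and \(\gamma=7/6\) ("soft") are conventional choices taken as assumptions here, not quoted from the text.

```python
import numpy as np

HBARC, M_N = 197.327, 938.92      # MeV fm, MeV
RHO0 = 0.16                       # fm^-3, assumed saturation density

def fermi_energy(rho):
    kf = (1.5 * np.pi**2 * rho) ** (1.0 / 3.0)   # kF = (3 pi^2 rho / 2)^(1/3)
    return HBARC**2 * kf**2 / (2.0 * M_N)

def skyrme_parameters(gamma, e_bind=-16.0):
    """Solve the two (linear) saturation conditions for alpha, beta and return kappa."""
    eF0 = fermi_energy(RHO0)
    # E/A(u) = 0.6*eF0*u**(2/3) + 0.5*alpha*u + beta/(gamma+1)*u**gamma,  u = rho/rho0.
    # Conditions at u = 1: E/A = e_bind and d(E/A)/du = 0.
    A = np.array([[0.5, 1.0 / (gamma + 1.0)],
                  [0.5, gamma / (gamma + 1.0)]])
    rhs = np.array([e_bind - 0.6 * eF0, -0.4 * eF0])
    alpha, beta = np.linalg.solve(A, rhs)
    # kappa = 9 * d^2(E/A)/du^2 at u = 1.
    kappa = 9.0 * (-(2.0 / 15.0) * eF0 + beta * gamma * (gamma - 1.0) / (gamma + 1.0))
    return alpha, beta, kappa

for gamma, label in ((2.0, "hard"), (7.0 / 6.0, "soft")):
    a, b, k = skyrme_parameters(gamma)
    print(f"{label}: gamma = {gamma:.3f}, alpha = {a:7.1f} MeV, beta = {b:6.1f} MeV, kappa = {k:5.0f} MeV")
```

With these inputs the two exponents reproduce compressibilities close to the "hard" (\(\kappa\approx 380\) MeV) and "soft" (\(\kappa\approx 200\) MeV) values quoted in the text.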
We include in this calculation all inelastic cross sections which are relevant for the \\(K^{+}\\) production. For details of these cross sections we refer to [10]. Unless specified differently, the change of the \\(K^{+}\\) mass due to the kaon-nucleon (\\(KN\\)) interaction according to \\(m^{K}(\\rho)=m^{K}_{0}(1-0.075\\frac{\\rho}{\\rho_{0}})\\) is taken into account, in agreement with recent self-consistent calculations of the spectral function of the \\(K^{+}\\)[11]. The \\(\\Lambda\\) potential is 2/3 of the nucleon potential, assuming that the s quark is inert. In order to minimize the experimental systematical errors and the consequences of theoretical uncertainties it is better to compare ratios of cross sections rather than the absolute values [12]. We have made sure that the standard version of IQMD reproduces the excitation function for Au+Au as well as for C+C quite well [13]. These ratios are quite sensitive to the nuclear potentials because the compression obtained in the Au+Au collisions is considerable (up to 3\\(\\rho_{0}\\)) and depends on the nuclear equation of state whereas in C+C collisions the compression is small and almost independent on the stiffness of the EoS. Figure 1 shows the comparison of the measured ratio of the \\(K^{+}\\) multiplicities obtained in Au+Au and C+C reactions [12] together with transport model calculations as a function of the beam energy. We see clearly that the form of the yield ratio depends on the potential parameters (hard EoS: \\(\\kappa=380\\) MeV, thin lines and solid symbols, soft EoS: \\(\\kappa=200\\) MeV, thick lines and open symbols) in a quite sensible way and that the prediction in the standard version of the simulation (squares) for a soft and a hard EoS potential differ much more than the experimental uncertainties. The calculation of Fuchs et al. [14] given in the same graph, agrees well with our findings. This observation is, however, not sufficient to determine the potential parameters uniquely because in these transport theories several not precisely known processes are encoded. Therefore, it is necessary to verify that these uncertainties do not render this conclusion premature. Figure 1, top, shows as well the influence of the unknown \\(N\\Delta\\to K^{+}\\Lambda N\\) cross section on this ratio. We confront the standard IQMD option (with cross sections for \\(\\Delta N\\) interactions from Tsushima et al. [10]) with another option, \\(\\sigma(N\\Delta)=3/4\\sigma(NN)\\)[15], which is based on isospin arguments and has been frequently employed. Both cross sections differ by up to a factor of ten and change significantly the absolute yield of \\(K^{+}\\) in heavy ion reactions but do not change the shape of the ratio. The middle part demonstrates the influence of the kaon-nucleon potential which is not precisely known at the densities obtained in this reaction. The uncertainties due to the \\(\\Delta\\) life time are discussed in the bottom part. Both calculations represent the two extreme values for this lifetime [10] which is important because the disintegration of the \\(\\Delta\\) resonance competes with the \\(K^{+}\\) production. Thus we see that these uncertainties do not influence the conclusion that the excitation function of the ratio is quite different for a soft EoS potential as compared to a hard one and that the data of the KaoS collaboration are only compatible with the soft EoS potential. 
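The preference for ratios stated above can be illustrated with a deliberately schematic toy (not a transport calculation): if a poorly known elementary production cross section enters the Au+Au and the C+C yields as a common multiplicative factor, it drops out of the ratio of the yields per mass number while the absolute yields move by the full factor. All numbers below are invented for illustration only.

```python
# Toy illustration (not from the paper): a common multiplicative uncertainty cancels in
# the ratio (M/A)_AuAu / (M/A)_CC, while the absolute yields change strongly.

def kaon_yield(n_eff_collisions, sigma_scale):
    """Schematic yield: proportional to an effective collision number and to the
    (uncertain) elementary production cross section."""
    return n_eff_collisions * sigma_scale

BASE = {"AuAu": 40.0, "CC": 1.2}   # invented effective collision numbers

for sigma_scale in (1.0, 0.5, 2.0):          # vary the unknown cross section
    m_auau = kaon_yield(BASE["AuAu"], sigma_scale) / 197
    m_cc = kaon_yield(BASE["CC"], sigma_scale) / 12
    print(f"scale={sigma_scale:3.1f}  (M/A)_AuAu={m_auau:.4f}  "
          f"(M/A)_CC={m_cc:.4f}  ratio={m_auau / m_cc:.3f}")
# the absolute yields scale with sigma, the ratio stays fixed
```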
The only possibility to change this conclusion is the assumption that the cross sections are explicitly density dependent in a way that the increasing density is compensated by a decreasing cross section. It would, however, have a strong influence on other observables which are presently well predicted by the IQMD calculations. We would like to add that the smoothness of the excitation function also demonstrates that there are no density isomers in the density regions which are obtained in these reactions because the \\(K^{+}\\) excitation function would be very sensitive to such an isomeric state [16]. The conclusion that nuclear matter is best described by a soft EoS is supported by another variable, the dependence of the \\(K^{+}\\) yield on the number of participating nucleons \\(A_{\\rm part}\\). The prediction of the IQMD simulations in the standard version for this observable is shown in Fig. 2. The top of the figure shows the kaon yield \\(M_{K^{+}}/A_{\\rm part}\\) for Au+Au collisions at 1.5 \\(A\\) GeV as a function of the participant number \\(A_{\\rm part}\\) for a soft EoS using different options: the standard version (soft, \\(KN\\)), calculations without kaon-nucleon interaction (soft, no \\(KN\\)) and with the isospin based \\(N\\Delta\\to N\\Lambda K^{+}\\) cross section (soft, \\(KN\\), \\(\\sigma^{*}\\)). These calculations are confronted with a standard calculation using the hard EoS potential. The scaling of the kaon yield with the participant number can be parameterized by \\(M_{K^{+}}=A_{\\rm part}^{\\alpha}\\). All calculations with a soft EoS show a rather similar value of \\(\\alpha\\) - although the yields are very different - while the calculation using a hard equation of state shows a much smaller value. Therefore we can conclude that also the slope value \\(\\alpha\\) is a rather robust observable. The bottom of Fig. 2 shows that \\(\\alpha\\) depends smoothly on the compressibility \\(\\kappa\\) of the EoS.

Figure 1: Comparison of the measured excitation function of the ratio of the \\(K^{+}\\) multiplicities per mass number \\(A\\) obtained in Au+Au and in C+C reactions (Ref. [12]) with various calculations. The use of a hard EoS is denoted by thin (blue) lines, a soft EoS by thick (red) lines. The calculated energies are given by the symbols, the lines are drawn to guide the eye. On top, two different versions of the \\(N\\Delta\\to K^{+}\\Lambda N\\) cross sections are used. One is based on isospin arguments [15], the other is determined by a relativistic tree level calculation [17]. The calculations by Fuchs [14] are shown as dotted lines. Middle: IQMD calculations with and without \\(KN\\) potential are compared. Bottom: The influence of different options for the life time of the \\(\\Delta\\) in matter is demonstrated.

Whether we include the momentum dependence of the nucleon-nucleon interaction (with mdi) or not (without mdi) does not change the value of \\(\\alpha\\) as long as the compressibility is not changed - in stark contrast to the in-plane flow. Again, the measured centrality dependence for Au+Au at 1.5 \\(A\\) GeV from the KaoS collaboration [18], \\(\\alpha=1.34\\pm 0.16\\), is only compatible with a soft EoS potential. This finding is also supported by a more recent analysis [19; 20] of the in-plane flow which supersedes the former conclusion that the EoS is hard [21] (made before the momentum-dependent interaction had been included in the calculations).
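The scaling fit \\(M_{K^{+}}=A_{\\rm part}^{\\alpha}\\) introduced above can be made concrete with a short sketch. The snippet below is not the authors' analysis code; it fits \\(\\alpha\\) by linear least squares in log-log space on synthetic data generated with \\(\\alpha=1.34\\) (the measured KaoS value) plus noise, purely to illustrate the procedure and the implicit normalization constant.

```python
import numpy as np

# Sketch of the scaling fit: parameterize M_K+ = C * A_part**alpha (the text suppresses
# the constant C) and extract alpha by a log-log linear fit. The data points below are
# synthetic, generated with alpha = 1.34 plus 5% noise, only to show the procedure.

rng = np.random.default_rng(0)
a_part = np.array([30.0, 60.0, 100.0, 150.0, 220.0, 300.0])
m_kplus = 2.0e-4 * a_part**1.34 * rng.normal(1.0, 0.05, size=a_part.size)

slope, intercept = np.polyfit(np.log(a_part), np.log(m_kplus), 1)
print(f"fitted alpha = {slope:.2f}")   # close to the input value 1.34
```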
Due to the strong dependence of the in-plane flow on the potential range parameter and its dependence on the particles observed these conclusions are much less firm presently. Comparisons of the out-of-plane squeeze of baryons also show a preference for a soft equation of state with momentum dependent interactions[22]. In conclusion, we have shown that the two experimental observables which are most sensitive to the potential parameters of the nucleon-nucleon interaction are only compatible with those parameters which lead in nuclear matter to a soft hadronic EoS. This conclusion is robust. Uncertainties of the input in these calculations, like the \\(KN\\) potential at high densities, the lifetime of the \\(\\Delta\\) in matter and the \\(\\Delta N\\to NK^{+}\\Lambda\\) cross section do not influence this conclusion. The potential parameter \\(\\kappa\\) is even smaller than that extracted from the giant monopole vibrations. Thus the energy which is needed to compress hadronic matter of \\(\\kappa\\leq 200\\) MeV is close to the lower bound of the interval which has been discussed in the past. We would like to thank all members of the KaoS Collaboration for fruitful discussions especially A. Forster, P. Senger, C. Sturm, and F. Uhlig. ## References * (1) H.A. Bethe, Rev. Mod. Phys. **62** (1990) 801; J.M. Lattimer, F.D. Swesty, Nucl. Phys. **A 535** (1991) 331; H. Shen, H. Toki, K. Oyamatsu, K. Sumiyoshi, Nucl. Phys. **A 637** (1998) 435. * (2) D.H. Youngblood, H.L. Clark and Y.-W. Lui, Phys. Rev. Lett **84** (1999) 691. * (3) J. Piekarwicz, Phys. Rev. **C69** (2004) 041301 and references therein. * (4) H. Stocker and W. Greiner, Phys. Reports **137**, 278 (1986) and references therein. * (5) J. Aichelin and C.M. Ko, Phys. Rev. Lett. **55** (1985) 2661. * (6) A. Andronic et al., Phys. Lett. **B612** (2005) 173. * (7) P. Danielewicz, R. Lacey, W.G. Lynch, Science 298 (2002) 1592. * (8) J. Aichelin, Phys. Reports **202**, 233 (1991). * (9) C. Hartnack et al., Eur. Phys. J. **A1** (1998) 151. * (10) E. Kolomeitsev et al., accepted J. of Phys. G, nucl-th/0412037. * (11) C.L. Korpa and M.F.M. Lutz, submitted to Heavy Ion Physics, nucl-th/0404088. * (12) C. Sturm et al., (KaoS Collaboration), Phys. Rev. Lett. **86** (2001) 39. * (13) C. Hartnack and J. Aichelin, Proc. Int. Workshop XXVIII on Gross prop. of Nucl. and Nucl. Excit., Hirschegg, January 2000 edt. by M. Buballa, W. Norenberg, B. Schafer and J. Wambach; and to be published in Phys. Rep. * (14) C. Fuchs et al., Phys. Rev. Lett **86** (2001) 1794. * (15) J. Randrup and C.M. Ko, Nucl. Phys. **A 343**, 519 (1980). * (16) C. Hartnack, J. Aichelin, H. Stocker and W. Greiner Phys. Rev. Lett. **72** (1994) 3767. * (17) K. Tsushima et al., Phys. Lett. **B337** (1994) 245; Phys. Rev. **C 59** (1999) 369. * (18) A. Forster et al., (KaoS Collaboration), Phys. Rev. Lett. **31** (2003) 152301; J. Phys. G. **30** (2004) 393; A. Forster, Ph.D. thesis, Darmstadt University of Technology, 2003. * (19) G Stoicea et al., (FOPI Collaboration), Phys. Rev. Lett. **92** (2004) 072303. * (20) A. Andronic et al., (FOPI Collaboration), Phys. Rev. **C67** (2003) 034907. * (21) C. Hartnack et al., Nucl. Phys. **A495** (1989) 303c. * (22) C. Hartnack et al., Mod Phys. Lett. A13 (1994) 1151. Figure 2: Dependence of the \\(K^{+}\\) scaling on the nuclear equation of state. We present this dependence in form of \\(M_{K^{+}}=A_{\\rm part}^{\\alpha}\\). 
On the top the dependence of \\(M_{K^{+}}/A_{\\rm part}\\) as a function of \\(A_{\\rm part}\\) is shown for different options: a “hard” EoS with \\(KN\\) potential (solid line), the other three lines show a “soft” EoS, without \\(KN\\) potential and \\(\\sigma(N\\Delta)\\) from Tsushima [17] (dotted line), with \\(KN\\) potential and the same parametrization of the cross section (dashed line) and with \\(KN\\) potential and \\(\\sigma(N\\Delta)=3/4\\sigma(NN)\\). On the bottom the fit exponent \\(\\alpha\\) is shown as a function of the compressibility for calculations with momentum-dependent interactions (mdi) and for static interactions (\\(t_{4}=0\\), dashed line). # Hadronic Matter is Soft Ch. Hartnack\\({}^{1}\\), H. Oeschler\\({}^{2}\\) and Jorg Aichelin\\({}^{1}\\)1 \\({}^{1}\\)SUBATECH, Laboratoire de Physique Subatomique et des Technologies Associees University of Nantes - IN2P3/CNRS - Ecole des Mines de Nantes 4 rue Alfred Kastler, F-44072 Nantes Cedex 03, France \\({}^{2}\\)Institut fur Kernphysik, Darmstadt University of Technology, 64289 Darmstadt, Germany Footnote 1: invited speaker ###### pacs: 25.75.Dw How much energy is needed to compress nuclear matter? The answer to this question, the determination of \\(E/A(\\rho,T)\\), the energy/nucleon in nuclear matter in thermal equilibrium as a function of the density \\(\\rho\\) and the temperature \\(T\\), has been considered since many years as one of the most important challenges in nuclear physics. This quest has been dubbed \"search for the nuclear equation of state (EoS)\". Only at equilibrium density, \\(\\rho_{0}\\), the energy per nucleon \\(E/A(\\rho=\\rho_{0},T=0)=-16\\) MeV is known by extrapolating the Weizsacker mass formula to infinite matter. Standard ab initio many body calculations do not allow for a determination of \\(E/A(\\rho,T)\\) at energies well above the saturation density because the detailed form of the interaction among hadrons is not known. Therefore another strategy has been developed. Theory has identified experimental observables in nuclear reaction physics or in astrophysics which are sensitive to \\(E/A(\\rho,T)\\). Unfortunately these observables depend as well on other quantities which are either unknown or little known (like cross sections with resonance in the entrance channel) or difficult to asses theoretically (like the resonance lifetimes in hot and dense matter). It was hoped that comparing many observables for different systems and different energies with the theoretical predictions these unknown or little known quantities can be determined experimentally and that finally the dependence of the observables on \\(E/A(\\rho,T)\\) can be isolated. In astrophysics the nuclear EoS plays an important role in binary mergers involving black holes and neutron stars [1], in double pulsars [2], in the mass-radius relation of neutron stars [3; 4] and in supernovae explosions [5]. For a recent review on these topics we refer to [6]. Unfortunately, as in nuclear reaction physics, there are always other little known processes or properties which have to be understood before the nuclear EoS dependence can be isolated. We discuss here as example of the mass-radius relation of neutron stars. Fig. 1 shows the neutron star masses in units of the solar mass for different types of binaries. These masses are concentrated at around 1-1.5 solar masses. Fig. 2 shows a theoretical prediction of the mass-radius relation for neutron stars using different EoS. 
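The extrapolation mentioned above, from the Weizsacker mass formula to infinite matter, can be illustrated with a few lines. The liquid-drop coefficients below are one common fit (in MeV) and are not taken from this paper; for infinite, symmetric, uncharged matter the surface, Coulomb and asymmetry contributions vanish per nucleon and only the volume term, about 16 MeV, survives.

```python
# Liquid-drop illustration of how E/A(rho0, T=0) ~ -16 MeV is inferred. The coefficients
# are one common semi-empirical fit (MeV); other fits differ at the ten-percent level.

A_V, A_S, A_C, A_SYM = 15.75, 17.8, 0.711, 23.7

def binding_per_nucleon(A, Z):
    """Liquid-drop binding energy per nucleon (pairing term omitted for brevity)."""
    B = (A_V * A
         - A_S * A ** (2.0 / 3.0)
         - A_C * Z * (Z - 1) / A ** (1.0 / 3.0)
         - A_SYM * (A - 2 * Z) ** 2 / A)
    return B / A

for A, Z in [(12, 6), (56, 26), (197, 79), (208, 82)]:
    print(f"A={A:4d}  B/A = {binding_per_nucleon(A, Z):5.2f} MeV")

# Infinite symmetric matter: surface, Coulomb and asymmetry terms drop out per nucleon,
# leaving B/A -> a_V ~ 16 MeV, i.e. E/A(rho0) ~ -16 MeV.
print(f"infinite-matter limit: B/A -> {A_V:.2f} MeV")
```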
Since the nature of the interior of neutron stars is not known (in contradiction to what the name suggests) one may suppose that it consists of hadrons or of quarks. But even if it consists of hadrons there are speculations that there is a \\(K^{-}\\) or a \\(\\pi^{-}\\) condensate or that there are hyperons in equilibrium with nuclear resonances. The same is true if the interior consists of quarks. Little known color-flavor locked quark phases may modify the EoS at densities which are reached in the interior of the neutron star. For a detailed discussion of all these phenomena we refer to ref. [6]. The mass to radius relation for some EoS are shown in fig. 2. We see that the observed masses of neutron stars are compatible with almost all quark or hadron based EoS as long as the radius is unknown. Radii, however, are very difficult to measure. Because similar problems appear also for other observables, up to recently the astrophysical observations of neutron stars did not help much to narrow down the uncertainty on the nuclear EoS. This situation has changed dramatically in the last year with the observation of a neutron star with a mass of two solar masses [7]. If this observation is finally confirmed the mass/radius prediction of fig.2 excludes that the interior of a neutron star is made by quarks, even a soft nuclear EoS, which will be defined below, will be excluded. In heavy ion reactions three observables have been identified which are - according to theoretical calculationssensitive to \\(E/A(\\rho,T)\\) at densities larger than \\(\\rho_{0}\\): (i) the strength distribution of giant isoscalar monopole resonances [8; 9], (ii) the in-plane sidewards flow of nucleons in semi-central heavy ion reaction at energies between 100 \\(A\\) MeV and 400 \\(A\\) MeV [10] and (iii) the production of \\(K^{+}\\) mesons in heavy ion reactions at energies around 1 \\(A\\) GeV [11]. Although theory has predicted these effects qualitatively, a quantitative approach is confronted with two challenges: a) The nucleus is finite and surface effects are not negligible, even for the largest nuclei and b) in heavy ion reactions the reacting system does not come into equilibrium. Therefore complicated non-equilibrium transport theories have to be employed and the conclusion on the nuclear EoS can only be indirect. (i) The study of monopole vibrations has been very successful, but the variation in density is minute. Therefore, giant monopole resonances are sensitive to the energy which is necessary to change the density of a cold nucleus close to the equilibrium point \\(\\rho_{0}\\). According to theory the vibration frequency depends directly on the force which counteracts to any deviation from the equilibrium and therefore to the potential energy. The careful analysis of the isoscalar monopole strength in non-relativistic [8] and relativistic mean field models has recently converged [9] due to a new parametrization of the relativistic potential. These calculations allow now for the determination of the compressibility \\(\\kappa=9\\rho^{2}\\frac{d^{2}E/A(\\rho,T)}{d^{2}\\rho}|_{\\rho=\\rho_{0}}\\) which measures the curvature of \\(E/A(\\rho,T)\\) at the equilibrium point. The values found are around \\(\\kappa=240\\) MeV and therefore close to what has been dubbed \"soft EoS\". 
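The definition of the compressibility quoted above can be checked numerically on any tabulated \\(E/A(\\rho)\\) curve. The short sketch below applies a central finite difference to the same schematic soft parametrization used in the earlier sketch; the curve and its parameters are illustrative, not those used in the transport calculations.

```python
# Numerical check of kappa = 9 rho^2 d^2(E/A)/drho^2 at rho = rho0 on a tabulated curve.
# The parametrization below is the schematic "soft" curve from the earlier sketch
# (illustrative only); any tabulated equation of state could be plugged in instead.

RHO0 = 0.16                                        # saturation density in fm^-3

def e_per_a(rho, a=22.0, b=-175.3, c=137.3, g=1.17):
    u = rho / RHO0
    return a * u ** (2.0 / 3.0) + b * u + c * u ** g

h = 1.0e-3 * RHO0                                  # finite-difference step
d2 = (e_per_a(RHO0 + h) - 2.0 * e_per_a(RHO0) + e_per_a(RHO0 - h)) / h**2
kappa = 9.0 * RHO0**2 * d2
print(f"kappa = {kappa:.0f} MeV")                  # about 200 MeV for this soft curve
```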
(ii) If the overlap zone of projectile and target becomes considerably compressed in semi-central heavy-ion collisions, an in-plane flow is created due to the transverse pressure on the baryons outside of the interaction region, with this flow being proportional to that pressure. In order to obtain a noticeable compression, the beam energy has to be large as compared to the Fermi energy of the nucleons inside the nuclei and hence a beam energy of at least 100 \\(A\\) MeV is necessary. Compression goes along with excitation and therefore the compressional energy of excited nuclear matter is encoded in the in-plane flow. It has recently been demonstrated [12] that transport theories do not agree quantitatively yet and therefore former conclusions [13] have to be considered as premature. (iii) The third method is most promising for the study of nuclear matter at high densities and is the subject of this talk. \\(K^{+}\\) mesons produced far below the \\(NN\\) threshold cannot be created in first-chance collisions between projectile and target nucleons. Such collisions do not provide sufficient energy even if one includes the Fermi motion. The effective energy for the production of a \\(K^{+}\\) meson in the \\(NN\\) center of mass system is 671 MeV, as in addition to the mass of the kaon a nucleon has to be converted into a \\(\\Lambda\\) to conserve strangeness. Before nucleons can create a \\(K^{+}\\) at these subthreshold energies, they have to accumulate energy. The most effective way to do this is the conversion of a nucleon into a \\(\\Delta\\) and to produce in a subsequent collision a \\(K^{+}\\) meson via \\(\\Delta N\\to NK^{+}\\Lambda\\). Two effects link the yield of produced \\(K^{+}\\) with the density reached in the collision and the stiffness of the EoS. If less energy is needed to compress matter, (i) more energy is available for the \\(K^{+}\\) production and (ii) the density which can be reached in these reactions will be higher.

Figure 1: Measured and estimated masses of neutron stars in radio binary pulsars and in x-ray accreting binaries. Error bars are 1\\(\\sigma\\). Vertical dotted lines show average masses of each group (1.62 M\\({}_{\\odot}\\), 1.34 M\\({}_{\\odot}\\) and 1.56 M\\({}_{\\odot}\\)); dashed vertical lines indicate inverse error weighted average masses (1.48 M\\({}_{\\odot}\\), 1.41 M\\({}_{\\odot}\\) and 1.34 M\\({}_{\\odot}\\)). The figure is taken from ref [4].

Higher density means a smaller mean free path and therefore the \\(\\Delta\\) will interact more often, increasing the probability to produce a \\(K^{+}\\); hence it has a lower chance to decay before it interacts. Consequently the \\(K^{+}\\) yield depends on the compressional energy. At beam energies around 1 \\(A\\) GeV matter becomes highly excited and mesons are formed. Therefore this process tests highly excited hadronic matter. At beam energies \\(>2\\)\\(A\\) GeV first-chance collisions dominate and this sensitivity is lost. Here we discuss the third approach, for which the different transport theories have converged. Two independent experimental observables, the ratio of the excitation functions of the \\(K^{+}\\) production for Au+Au and for C+C [14; 15] and, as a new observable, the dependence of the \\(K^{+}\\) yield on the number of participants, show that nucleons interact with a potential which corresponds to a compressibility of \\(\\kappa\\leq 200\\) MeV in infinite matter in thermal equilibrium.
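The kinematics behind the 671 MeV figure quoted above can be checked with a few lines. The sketch below is only an illustration (free nucleons at rest, no Fermi motion, rounded PDG-style masses); it reproduces the quoted centre-of-mass energy excess and shows why a beam energy around 1 \\(A\\) GeV is indeed subthreshold for free \\(NN\\) collisions.

```python
# Threshold kinematics for N N -> N Lambda K+ (free nucleons, rounded masses in GeV).

M_N, M_L, M_K = 0.939, 1.116, 0.494      # nucleon, Lambda, K+ masses

sqrt_s_thr = M_N + M_L + M_K             # threshold invariant mass of the final state
excess = sqrt_s_thr - 2.0 * M_N          # energy needed above two nucleon masses
print(f"required cm energy above 2 m_N : {1000.0 * excess:.0f} MeV")   # ~671 MeV

# fixed-target threshold: s = 2 m_N^2 + 2 m_N E_lab,  T_lab = E_lab - m_N
s_thr = sqrt_s_thr**2
t_lab = (s_thr - 2.0 * M_N**2) / (2.0 * M_N) - M_N
print(f"free NN threshold beam energy  : {t_lab:.2f} GeV")             # ~1.58 GeV
```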
This value extracted for hadronic matter at densities around 2.5 times the normal nuclear matter density is very similar to that extracted at normal nuclear matter density. A key point here is to demonstrate that the different implementation of yet unsolved physical questions, like the \\(N\\Delta\\to K^{+}\\Lambda N\\) cross section, the \\(KN\\) interaction as well as the life time of the nuclear resonances in the hadronic environment do not affect this conclusion. In order to determine the energy which is necessary to compress infinite nuclear matter in thermal equilibrium by heavy ion reactions in which no equilibrium is obtained one chooses the following strategy: The transport theory calculates the time evolution of the quantal particles described by Gaussian wave functions. The time evolution is given by a variational principle and the equations one obtains for this choice of the wave function are identical to the classical Hamilton equations where the classical two-body potential is replaced by the expectation value of the real part of the Bruckner \\(G\\)-matrix. For this potential the potential energy in infinite nuclear matter is calculated. To determine the nuclear EoS we average this (momentum-dependent) two-body potential over the momentum distribution of a given temperature \\(T\\) and add to it the kinetic energy. Expressed as a function of the density we obtain the desired nuclear EoS \\(E/A(\\rho,T)\\). Our two-body potential has five parameters which are fixed by the binding energy of infinite nuclear matter at \\(\\rho_{0}\\), the compressibility \\(\\kappa\\) and the optical potential which has been measured in pA reactions. Once the parameters are fixed we use the two-body potential with these parameters in the transport calculation. Figure 2: Mass-radius diagram for neutron stars. Black (green) curves are for normal matter (SQM) EoS [for definitions of the labels, see [4]]. Regions excluded by general relativity (GR), causality and rotation constraints are indicated. Contours of radiation radii \\(R_{\\infty}\\) are given by the orange curves. The figure is from [4]. There is an infinite number of two-body potentials which give the same EoS because the range of the potential does not play a role in infinite matter. The nuclear surface measured in electron scattering on nuclei fixes the range, however, quite well. The uncertainty which remains is of little relevance here (in contradiction to the calculation of the in-plane flow which is very sensitive to the exact surface properties of the nuclei and hence to the range of the potential). We employ the Isospin Quantum Molecular Dynamics (IQMD) with momentum dependent forces. All details of the standard version of the program may be found in [16]. The standard version is supplemented for this calculation with all inelastic cross sections which are relevant for the \\(K^{+}\\) production. For details of these cross sections we refer to [17]. Unless specified differently, the change of the \\(K^{+}\\) mass due to the kaon-nucleon (\\(KN\\)) interaction according to \\(m^{K}(\\rho)=m_{0}^{K}(1-0.075\\frac{\\rho}{\\rho_{0}})\\) is taken into account, in agreement with recent self-consistent calculations of the spectral function of the \\(K^{+}\\)[18]. The \\(\\Lambda\\) potential is 2/3 of the nucleon potential, assuming that the s quark is inert. The calculations reproduce the experimental data quite well as can be seen in fig. 
3, where we compare the experimental and theoretical \\(K^{+}\\) spectra for different centrality bins for 1.48 \\(A\\) GeV Au+Au.

Figure 3: \\(K^{+}\\) spectra for different centrality bins as compared with (preliminary) experimental data from the KaoS collaboration.

This figure shows as well the influence of the \\(K^{+}N\\) potential which modifies not only the overall multiplicity of \\(K^{+}\\) due to the increase of the in medium mass but also the spectral form. In order to minimize the experimental systematic errors and the consequences of theoretical uncertainties it is better to compare ratios of cross sections rather than the absolute values [14]. We have made sure that the standard version of IQMD reproduces the excitation function for Au+Au as well as for C+C quite well [19]. These ratios are quite sensitive to the nuclear potentials because the compression obtained in the Au+Au collisions is considerable (up to \\(3\\rho_{0}\\)) and depends on the nuclear EoS whereas in C+C collisions the compression is small and almost independent of the stiffness of the EoS. Figure 4 shows the comparison of the measured ratio of the \\(K^{+}\\) multiplicities obtained in Au+Au and C+C reactions [14] together with transport model calculations as a function of the beam energy. We see clearly that the form of the yield ratio depends on the potential parameters (hard EoS: \\(\\kappa=380\\) MeV, thin lines and solid symbols, soft EoS: \\(\\kappa=200\\) MeV, thick lines and open symbols) in a quite sensitive way and that the predictions in the standard version of the simulation (squares) for a soft and a hard EoS potential differ by much more than the experimental uncertainties. The calculation of Fuchs et al. [15], given in the same graph, agrees well with our findings. This observation is, however, not sufficient to determine the potential parameters uniquely because in these transport theories several not precisely known processes are encoded. Therefore, it is necessary to verify that these uncertainties do not render this conclusion premature. Figure 4, top, shows as well the influence of the unknown \\(N\\Delta\\to K^{+}\\Lambda N\\) cross section on this ratio. We confront the standard IQMD option (with cross sections for \\(\\Delta N\\) interactions from Tsushima et al. [17]) with another option, \\(\\sigma(N\\Delta)=3/4\\sigma(NN)\\)[20], which is based on isospin arguments and has been frequently employed. Both cross sections differ by up to a factor of ten and change significantly the absolute yield of \\(K^{+}\\) in heavy ion reactions but do not change the shape of the ratio. The middle part demonstrates the influence of the kaon-nucleon potential which is not precisely known at the densities obtained in this reaction. The uncertainties due to the \\(\\Delta\\) life time are discussed in the bottom part. Both calculations represent the two extreme values for this lifetime [17], which is important because the disintegration of the \\(\\Delta\\) resonance competes with the \\(K^{+}\\) production. Thus we see that these uncertainties do not influence the conclusion that the excitation function of the ratio is quite different for a soft EoS potential as compared to a hard one and that the data of the KaoS collaboration are only compatible with the soft EoS potential. The only possibility to change this conclusion is the assumption that the cross sections are explicitly density dependent in a way that the increasing density is compensated by a decreasing cross section.
It would have a strong influence on other observables which are presently well predicted by the IQMD calculations. The conclusion that nuclear matter is best described by a soft EoS, is supported by another variable, the dependence of the \\(K^{+}\\) yield on the number of participating nucleons \\(A_{\\rm part}\\). The prediction of the IQMD simulations in the standard version for this observable is shown in Fig. 5. The top of the figure shows the kaon yield \\(M_{K^{+}}/A_{\\rm part}\\) for Au+Au collisions at 1.5 \\(A\\) GeV as a function of the participant number \\(A_{\\rm part}\\) for a soft EoS using different options: standard version (soft, \\(KN\\)), calculations without kaon-nucleon interaction (soft, no \\(KN\\)) and with the isospin based \\(N\\Delta\\to N\\Lambda K^{+}\\) cross section (soft, \\(KN\\), \\(\\sigma^{*}\\)). These calculations are confronted with a standard calculation using the hard EoS potential. The scaling of the kaon yield with the participant number can be parameterized by \\(M_{K^{+}}=A_{\\rm part}^{\\alpha}\\). All calculations with a soft EoS show a rather similar value of \\(\\alpha\\) - although the yields are very different - while the calculation using a hard equation shows a much smaller value. Therefore we can conclude that also the slope value \\(\\alpha\\) is a rather robust observable. Figure 4: Comparison of the measured excitation function of the ratio of the \\(K^{+}\\) multiplicities per mass number \\(A\\) obtained in Au+Au and in C+C reactions (Ref. [14]) with various calculations. The use of a hard EoS is denoted by thin (blue) lines, a soft EoS by thick (red) lines. The calculated energies are given by the symbols, the lines are drawn to guide the eye. On top, two different versions of the \\(N\\Delta\\to K^{+}\\Lambda N\\) cross sections are used. One is based on isospin arguments [20], the other is determined by a relativistic tree level calculation [21]. The calculation by Fuchs [15] are shown as dotted lines. Middle: IQMD calculations with and without \\(KN\\) potential are compared. Bottom: The influence of different options for the life time of \\(\\Delta\\) in matter is demonstrated. The bottom of Fig. 5 shows that \\(\\alpha\\) depends smoothly on the compressibility \\(\\kappa\\) of the EoS. Whether we include the momentum dependence of the nucleon nucleon interaction (with mdi) or not (without mdi) does not change the value of \\(\\alpha\\) as long as the compressibility is not changed - in stark contrast to the in-plane flow. Again, the measured centrality dependence for Au+Au at 1.5 \\(A\\) GeV from the KaoS collaboration [22], \\(\\alpha=1.34\\pm 0.16\\), is only compatible with a soft EoS potential. That the dependence of the \\(K^{+}\\) yield on the participants number is a clear signal for the EoS also at other beam energies as can be seen from fig. 5 right which displays the excitation function of the fit exponent \\(\\alpha\\). In conclusion, we have shown that the two experimental observables which are most sensitive to the potential parameters of the nucleon-nucleon interaction are only compatible with those parameters which lead in nuclear matter to a soft hadronic EoS. This conclusion is robust. Uncertainties of the input in these calculations, like the \\(KN\\) potential at high densities, the lifetime of the \\(\\Delta\\) in matter and the \\(\\Delta N\\to NK^{+}\\Lambda\\) cross section do not influence this conclusion. 
The potential parameter \\(\\kappa\\) is even smaller than that extracted from the giant monopole vibrations. Thus the compressibility \\(\\kappa\\) of hadronic matter of \\(\\leq 200\\) MeV is close to the lower bound of the interval which has been discussed in the past. How this result is compatible with the observation of a neutron star with twice the solar mass is presently studied. We would like to thank all members of the KaoS Collaboration for fruitful discussions especially A. Forster, P. Senger, C. Sturm, and F. Uhlig. ## References * (1) Sasa Ratkovic, Madappa Prakash, James M.Lattimer Submitted to ApJ, astro-ph/0512136 * (2) Ph. Podsiadlowski, J. D. M. Dewi, P. Lesaffre, J. C. Miller, W. G. Newton, J. R. Stone Mon.Not.Roy.Astron.Soc. 361 (2005) 1243-1249, astro-ph/0506566 * (3) J.M. Lattimer, M. Prakash Science Vol. 304 2004 (536-542) * (4) J.M. Lattimer and M. Prakash, ApJ **550** (2001) 426; A.W. Steiner, M. Prakash and J.M. Lattimer, Phys. Lett. **B486** (2000) 239; M. Alford and S. Reddy, Phys. Rev. D. **67** (2003) 074024. Figure 5: Dependence of the \\(K^{+}\\) scaling on the nuclear EoS. We present this dependence in form of \\(M_{K^{+}}=A_{\\rm part}^{\\alpha}\\). On the top the dependence of \\(M_{K^{+}}/A_{\\rm part}\\) as a function of \\(A_{\\rm part}\\) is shown for different options: a “hard” EoS with \\(KN\\) potential (solid line), the other three lines show a “soft” EoS, without \\(KN\\) potential and \\(\\sigma(N\\Delta)\\) from Tsushima [21] (dotted line), with \\(KN\\) potential and the same parametrization of the cross section (dashed line) and with \\(KN\\) potential and \\(\\sigma(N\\Delta)=3/4\\sigma(NN)\\). On the bottom the fit exponent \\(\\alpha\\) is shown as a function of the compressibility for calculations with momentum-dependent interactions (mdi) and for static interactions (\\(t_{4}=0\\), dashed line). On the right hand side we compare the energy dependence of the fit exponent \\(\\alpha\\) for the two EoS. * (5) H.-Th. Janka, R. Buras, F.S. Kitaura Joyanes, A. Marek, M. Rampp Procs. 12th Workshop on Nuclear Astrophysics, Ringberg Castle, March 22-27, 2004, astro-ph/0405289 * (6) Fridolin Weber, Prog.Part.Nucl.Phys. 54 (2005) 193-288 * (7) David J. Nice, Eric M. Splaver, Ingrid H. Stairs, Oliver Loehmer, Axel Jessner, Michael Kramer, James M. Cordes, submitted to ApJ, astro-ph/0508050 * (8) D.H. Youngblood, H.L. Clark and Y.-W. Lui, Phys. Rev. Lett **84** (1999) 691. * (9) J. Piekarewicz, Phys. Rev. **C69** (2004) 041301 and references therein. * (10) H. Stocker and W. Greiner, Phys. Reports **137**, 278 (1986) and references therein. * (11) J. Aichelin and C.M. Ko, Phys. Rev. Lett. **55** (1985) 2661. * (12) A. Andronic et al., Phys. Lett. **B612** (2005) 173. * (13) P. Danielewicz, R. Lacey, W.G. Lynch, Science 298 (2002) 1592. * (14) C. Sturm et al., (KaoS Collaboration), Phys. Rev. Lett. **86** (2001) 39. * (15) C. Fuchs et al., Phys. Rev. Lett **86** (2001) 1794. * (16) C. Hartnack et al., Eur. Phys. J. **A1** (1998) 151. * (17) E. Kolomeitsev et al., accepted J. of Phys. G, nucl-th/0412037. * (18) C.L. Korpa and M.F.M. Lutz, submitted to Heavy Ion Physics, nucl-th/0404088. * (19) C. Hartnack and J. Aichelin, Proc. Int. Workshop XXVIII on Gross prop. of Nucl. and Nucl. Excit., Hirschegg, January 2000 edt. by M. Buballa, W. Norenberg, B. Schafer and J. Wambach; and to be published in Phys. Rep. * (20) J. Randrup and C.M. Ko, Nucl. Phys. **A 343**, 519 (1980). * (21) K. Tsushima et al., Phys. Lett. **B337** (1994) 245; Phys. Rev. 
**C 59** (1999) 369. * (22) A. Forster et al., (KaoS Collaboration), Phys. Rev. Lett. **31** (2003) 152301; J. Phys. G. **30** (2004) 393; A. Forster, Ph.D. thesis, Darmstadt University of Technology, 2003. # Recent astrophysical and accelerator based results on the Hadronic Equation of State Ch. Hartnack\\({}^{1}\\), H. Oeschler\\({}^{2}\\) and Jorg Aichelin\\({}^{1}\\) \\({}^{1}\\)SUBATECH, Laboratoire de Physique Subatomique et des Technologies Associees University of Nantes - IN2P3/CNRS - Ecole des Mines de Nantes 4 rue Alfred Kastler, F-44072 Nantes Cedex 03, France \\({}^{2}\\)Institut fur Kernphysik, Darmstadt University of Technology, 64289 Darmstadt, Germany ###### pacs: 25.75.Dw How much energy is needed to compress nuclear matter? The answer to this question, the determination of \\(E/A(\\rho,T)\\), the energy/nucleon in nuclear matter in thermal equilibrium as a function of the density \\(\\rho\\) and the temperature \\(T\\), has been considered since many years as one of the most important challenges in nuclear physics. This quest has been dubbed \"search for the nuclear equation of state (EoS)\". Only at equilibrium density, \\(\\rho_{0}\\), the energy per nucleon \\(E/A(\\rho=\\rho_{0},T=0)=-16\\) MeV is known by extrapolating the Weizsacker mass formula to infinite matter. Standard ab initio many body calculations do not allow for a determination of \\(E/A(\\rho,T)\\) at energies well above the saturation density because the low density many body expansion schema ( Bruckner G- matrix) breaks down and therefore the number of contributing terms is exploding. Therefore in nuclear reaction physics another strategy has been developed. Theory has identified experimental observables in nuclear reaction physics or in astrophysics which are sensitive to \\(E/A(\\rho,T)\\). Unfortunately these observables depend as well on other quantities which are either unknown or little known (like cross sections with resonance in the entrance channel) or difficult to asses theoretically (like the resonance lifetimes in hot and dense matter). It was hoped that comparing many observables for different systems and different energies with the theoretical predictions these unknown or little known quantities can be determined experimentally and that finally the dependence of the observables on \\(E/A(\\rho,T)\\) can be isolated. In astrophysics the nuclear EoS plays an important role in binary mergers involving black holes and neutron stars [1], in double pulsars [2], in the mass-radius relation of neutron stars [3; 4] and in supernovae explosions [5]. For a recent review on these topics we refer to [6]. Unfortunately, as in nuclear reaction physics, there are always other little known processes or properties which have to be understood before the nuclear EoS dependence can be isolated. We discuss here as example of the mass-radius relation of neutron stars. Fig. 1 shows the neutron star masses in units of the solar mass for different types of binaries. These masses are concentrated at around 1-1.5 solar masses. Fig. 2 shows a theoretical prediction of the mass-radius relation for neutron stars using different EoS. Since the nature of the interior of neutron stars is not known (in contradiction to what the name suggests) one may suppose that it consists of hadrons or of quarks. But even if it consists of hadrons there are speculations that there is a \\(K^{-}\\) or a \\(\\pi^{-}\\) condensate or that there are hyperons in equilibrium with nuclear resonances. 
The same is true if the interior consists of quarks. Little known color-flavor locked quark phases may modify the EoS at densities which are reached in the interior of the neutron star. For a detailed discussion of all these phenomena we refer to ref. [6]. We see that the observed masses of neutron stars are compatible with almost all quark or hadron based EoS as long as the radius is unknown. Radii, however, are very difficult to measure. Because similar problems appear also for other observables, up to recently the astrophysical observations of neutron stars did not help much to narrow down the uncertainty on the nuclear EoS. This situation has changed dramatically in the last year with the observation of a neutron star with a mass of two solar masses [7]. If this observation is finally confirmed the mass/radius prediction of fig.2 excludes that the interior of a neutron star is made by quarks [4], even a soft nuclear EoS, which will be defined below, will be excluded. This is confirmed by the calculation of Maieron [8] which uses a MIT bag model or a color dielectric models EoSto describe the quark phase. Baldo [9] argue that this conclusion may be premature because it depends too much on the equation of state of the quark phase. If one replaces the MIT bag model equation of state by that of the Nambu - Jona-Lasinio (NJL) Lagrangian under certain conditions (no color conducting phase) larger masses may be obtained. The standard NJL Lagrangian lacks, however, repulsion and in view of the momentum cut-off, necessary to regularize the loop integrals, and the coupling constants in the diquark sector, which are not uniquely determined by the Fierz transformation, quantitative prediction at high quark densities are difficult in this approach even if qualitative agreement with pQCD calculation can be found [10]. Simulations of heavy ion reactions have shown that there are three possible observables which are sensitive to \\(E/A(\\rho,T)\\) at densities larger than \\(\\rho_{0}\\): (i) the strength distribution of giant isoscalar monopole resonances [11; 12], (ii) the in-plane sidewards flow of nucleons in semi-central heavy ion reactions at energies between 100 \\(A\\) MeV and 400 \\(A\\) MeV [13] and (iii) the production of \\(K^{+}\\) mesons in heavy ion reactions at energies around 1 \\(A\\) GeV [14]. For the present status of these approaches we refer to [15]. Monopole resonances test the nuclear EoS at densities only slightly larger than the normal nuclear matter density. Therefore they are of little help if one compares the EoS determined from astrophysics with that extracted from nuclear reaction physics. For the in-plane flow the conclusions are not conclusive yet. This is due to the difficulties to determine the EoS in heavy ion collisions. An EoS is defined in a thermally equilibrated system but in heavy ion collisions equilibrium is not obtained as the momentum distribution of hadrons shows. In addition, nuclei are finite size systems where the surface plays an important role. This can easily be seen inspecting the Weizsacker mass formula which gives for infinite matter almost twice the binding energy/per nucleon as for finite nuclei. Therefore complicated non-equilibrium transport theories have to be employed and the conclusion on the nuclear EoS can only be indirect, in determining the EoS for those potentials which give best agreement with the heavy ion results. 
In order to determine the energy which is necessary to compress infinite nuclear matter in thermal equilibrium by heavy ion reactions in which no equilibrium is obtained one chooses the following strategy: The transport theory calculates the time evolution of the quantal particles described by Gaussian wave functions. The time evolution is given by a variational principle and the equations one obtains for this choice of the wave function are identical to the classical Hamilton equations where the classical two-body potential is replaced by the expectation value of a Skyrme potential. The Skyrme potential is a simple approximation to the real part of the Bruckner \\(G\\)-matrix which is too complicated for performing simulations of heavy ion collisions. For this potential the potential energy in infinite nuclear matter is calculated. To determine the nuclear EoS we average this (momentum-dependent) two-body potential over the momentum distribution of a given temperature \\(T\\) and add to it the kinetic energy. Expressed as a function of the density we obtain the desired nuclear EoS \\(E/A(\\rho,T)\\). The potential which we use has five parameters. Figure 1: Measured and estimated masses of neutron stars in radio binary pulsars and in x-ray accreting binaries. Error bars are 1\\(\\sigma\\). Vertical dotted lines show average masses of each group (1.62 M\\({}_{\\odot}\\), 1.34 M\\({}_{\\odot}\\) and 1.56 M\\({}_{\\odot}\\)); dashed vertical lines indicate inverse error weighted average masses (1.48 M\\({}_{\\odot}\\), 1.41 M\\({}_{\\odot}\\) and 1.34 M\\({}_{\\odot}\\)). The figure is taken from ref [4] Four of them are fixed by the binding energy per nucleon in infinite nuclear matter at \\(\\rho_{0}\\) and the optical potential which has been measured in pA reactions [16]. The only parameter which has been not determined by experiments yet is the compressibility \\(\\kappa\\) at \\(\\rho_{0}\\). For \\(\\kappa<250\\) MeV one calls the EoS soft, whereas an EoS is called hard for \\(\\kappa>350\\) MeV. Once the parameters are fixed we use the two-body potential with these parameters in the transport calculation. There is an infinite number of two-body potentials which give the same EoS because the range of the potential does not play a role in infinite matter. The nuclear surface measured in electron scattering on nuclei fixes the range, however, quite well. The different transport theories give quite comparable results for the bulk part but it is difficult to model the surface. (In these simulations there is no surface in the strict sense. Each nucleon contributes to the density by its Gaussian wave function and the positions of the hadrons in the course of the reaction determine the surface as well as the density gradients.) The in-plane flow is caused by the density gradient and hence the numerical value depends on how good the surface of the nucleus can be modeled during the reaction. Already small density fluctuations, which are difficult to control, change the value of the in-plane flow considerably. Therefore the second approach, the determination if the EoS by measuring the in-plane flow, has not produced conclusive results yet[17]. The third approach, to measure the EoS by means of the \\(K^{+}\\) yield, depends on bulk properties of matter and surface fluctuations have no influence. Here the different transport theories have converged. 
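The thermal-averaging step described above, folding the momentum-dependent part of the two-body potential with the momentum distribution of a given temperature \\(T\\), can be sketched as a small Monte Carlo estimate. This is not the authors' code: the strength \\(t_{4}\\) and momentum scale \\(t_{5}\\) below are placeholder values, a non-relativistic Boltzmann gas is assumed, and only the momentum average at zero separation is shown; the spatial folding over the density is left out.

```python
import numpy as np

# Monte Carlo sketch of the thermal average of the momentum-dependent term
# V_mdi ~ t4 * ln^2(1 + t5 (p_i - p_j)^2) over a Maxwell-Boltzmann gas at temperature T.
# t4 (MeV) and t5 ((MeV/c)^-2) are placeholder values, not taken from the paper.

rng = np.random.default_rng(1)
M_N = 939.0                       # nucleon mass in MeV
T4, T5 = 1.57, 5.0e-4             # placeholder strength and momentum scale

def mdi_thermal_average(temperature_mev, n_samples=200_000):
    """<t4 ln^2(1 + t5 (p1-p2)^2)> for momenta drawn from a Boltzmann gas at T."""
    sigma = np.sqrt(M_N * temperature_mev)            # width per Cartesian component
    p1 = rng.normal(0.0, sigma, size=(n_samples, 3))
    p2 = rng.normal(0.0, sigma, size=(n_samples, 3))
    dp2 = np.sum((p1 - p2) ** 2, axis=1)
    return np.mean(T4 * np.log1p(T5 * dp2) ** 2)

for t in (20.0, 60.0, 100.0):     # temperatures in MeV
    print(f"T = {t:5.1f} MeV  ->  <V_mdi> = {mdi_thermal_average(t):6.2f} MeV per pair at contact")
```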
This was possible due to a special workshop at the ECT* in Trento/Italy where the authors of the different simulation codes have discussed their approaches in detail and have unified most of the input quantities. The results of this common effort have been published in [18]. As an example we display here the \\(K^{+}\\)\\(p_{t}\\) spectra at midrapidity obtained in the different transport theories at different energies. Because the slope of the \\(K^{+}\\) spectra changes with each \\(K^{+}N\\) rescattering collision, the slope of the \\(p_{t}\\) spectra encodes not only the \\(K^{+}\\) momentum distribution at the time point of production but also the distribution of the number of rescatterings. It is therefore anything but trivial. Without the \\(KN\\) potential the slopes are almost identical and even the absolute yield, which depends on a correct modeling of the Fermi motion of the nucleons, is very similar.

Figure 2: Mass-radius diagram for neutron stars. Black (green) curves are for normal matter (SQM) EoS [for definitions of the labels, see [4]]. Regions excluded by general relativity (GR), causality and rotation constraints are indicated. Contours of radiation radii \\(R_{\\infty}\\) are given by the orange curves. The figure is from [4].

If we include the \\(KN\\) interaction which is not identical in the different approaches (see [18]) we still observe a very similar slope for most of the programs. Due to this progress the simulation programs can now be used to extract up to now theoretically inaccessible information like the hadronic EoS [15]. Three independent experimental observables, the ratio of the excitation functions of the \\(K^{+}\\) production for Au+Au and for C+C [19; 20], the dependence of the \\(K^{+}\\) yield on the number of participants and the excitation function of this dependence can be simultaneously reproduced if in these transport theories the nucleons interact with a potential which yields in infinite matter in equilibrium a compressibility of the EoS of \\(\\kappa\\approx 200\\) MeV. Large compressibility moduli yield results which disagree with all three observables. This value of \\(\\kappa\\), extracted from the \\(K^{+}\\) production which is sensitive to nuclear matter around \\(2.5\\rho_{0}\\), is very similar to that extracted by the study of monopole vibrations at \\(\\rho_{0}\\)[11; 12]. It is not sufficient to determine the compressibility modulus. One has to demonstrate as well that its numerical value is robust, i.e. that the different implementations of yet unsolved physical questions, like the \\(N\\Delta\\to K^{+}\\Lambda N\\) cross section, the \\(KN\\) interaction as well as the life time of the nuclear resonances in the hadronic environment, do not affect its value. We employ the Isospin Quantum Molecular Dynamics (IQMD) with momentum dependent forces. All details of the standard version of the program may be found in [16]. In addition we have implemented for this calculation all cross sections which lead to the production of \\(K^{+}\\) as well as the elastic and the charge exchange \\(KN\\to KN\\) reactions. The parametrization of the cross section may be found in [18]. In the standard version the \\(K^{+}N\\) potential leads to an increase of the \\(K^{+}\\) mass in matter, \\(m^{K}(\\rho)=m_{0}^{K}(1-0.075\\frac{\\rho}{\\rho_{0}})\\), in agreement with recent self-consistent calculations of the spectral function of the \\(K^{+}\\)[21]. The \\(\\Lambda\\) potential is 2/3 of the nucleon potential, assuming that the s quark is inert.
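To make the slope discussion above concrete, the sketch below fits an inverse slope parameter to a transverse spectrum in the standard way, assuming a Boltzmann-like form \\(dN/(m_{t}\\,dm_{t})\\propto\\exp(-m_{t}/T)\\) at midrapidity. It is not the analysis used for the figures; the spectrum is synthetic and only demonstrates the fitting procedure.

```python
import numpy as np

# Extracting an inverse slope parameter from a K+ transverse spectrum: assume
# dN/(m_t dm_t) ~ C * exp(-m_t / T_slope) and fit T_slope by a log-linear regression.
# The "data" are synthetic (generated with T_slope = 90 MeV plus noise).

rng = np.random.default_rng(2)
M_K = 494.0                                        # K+ mass in MeV
pt = np.linspace(100.0, 800.0, 15)                 # transverse momentum bins in MeV/c
mt = np.sqrt(M_K**2 + pt**2)                       # transverse mass
true_T = 90.0
spectrum = 1.0e3 * np.exp(-mt / true_T) * rng.normal(1.0, 0.05, size=mt.size)

slope, _ = np.polyfit(mt, np.log(spectrum), 1)     # log-linear fit
print(f"fitted inverse slope T = {-1.0 / slope:.1f} MeV")   # close to 90 MeV
```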
The calculations reproduce the experimental data quite well as can be seen in fig. 4 where we compare the experimental and theoretical \\(K^{+}\\) spectra for different centrality bins and for 1.48 \\(A\\) GeV Au+Au. This figure shows as well the influence of the \\(K^{+}N\\) potential which modifies not only the overall multiplicity of \\(K^{+}\\) due to the increase of the in medium mass but also the spectral form, confirming the complexity of the transverse momentum spectrum. In order to minimize the experimental systematic errors and the consequences of theoretical uncertainties the KaoS collaboration has proposed to study not directly the excitation function of the \\(K^{+}\\) yield but that of the yield ratio of heavy to light systems [19]. Calculations have shown that ratios are much less sensitive to little known input parameters because these affect both systems in a rather similar way. We have shown in fig. 4 that the absolute yields are well reproduced in our simulations. Therefore we can use this ratio directly for a quantitative comparison with data. The ratio of the \\(K^{+}\\) yields obtained in C+C and Au+Au collisions is quite sensitive to the EoS because in Au+Au collisions densities up to 3 \\(\\rho_{0}\\) (depending on the EoS) are reached whereas in C+C collisions compression is practically absent due to less stopping. Figure 5 shows the comparison of the measured ratio of the \\(K^{+}\\) multiplicities obtained in Au+Au and C+C reactions [19] together with transport model calculations as a function of the beam energy [15].

Figure 3: Final \\(K^{+}\\) transverse momentum distribution at \\(b\\)=1 fm, \\(|y_{cm}|<0.5\\) and with an enforced \\(\\Delta\\) lifetime of 1/120 MeV (top row without, bottom row with KN potential) in the different approaches [18].

We see, first of all in the top row, that the excitation function of the yield ratio depends on the potential parameters (hard EoS: \\(\\kappa=380\\) MeV, thin lines and solid symbols, soft EoS: \\(\\kappa=200\\) MeV, thick lines and open symbols) in a quite sensitive way and - even more essential - that the predictions of the standard version of the simulation (squares) for a soft and a hard EoS potential differ by much more than the experimental uncertainties. The calculation of Fuchs et al. [20], given in the same graph, agrees well with our findings. This observation is, as said, not sufficient to determine the potential parameters uniquely because in these transport theories several not precisely known processes are encoded. For these processes either no reliable theoretical prediction has been advanced or the different approaches yield different results for the same observable. Therefore, it is necessary to verify that these uncertainties do not render our conclusion premature. There are three identified uncertainties: the \\(\\sigma_{N\\Delta\\to K^{+}}\\) cross section, the density dependence of the \\(K^{+}N\\) potential and the lifetime of the \\(\\Delta\\) in matter if it is produced in a collision of two scattering partners with a sharp energy. We discuss now how these uncertainties influence our results: Figure 5, top, shows as well the influence of the unknown \\(N\\Delta\\to K^{+}\\Lambda N\\) cross section on this ratio. We confront the standard IQMD option (with cross sections for \\(\\Delta N\\) interactions from Tsushima et al. [18]) with another option, \\(\\sigma(N\\Delta)=3/4\\sigma(NN)\\)[23], which is based on isospin arguments and has been frequently employed.
Both cross sections differ by up to a factor of ten and change significantly the absolute yield of \\(K^{+}\\) in heavy ion reactions but do not change the shape of the ratio. The middle part demonstrates the influence of the kaon-nucleon potential which is not precisely known at the densities obtained in this reaction. The uncertainties due to the \\(\\Delta\\) life time are discussed in the bottom part. Both calculations represent the two extreme values for this lifetime [18], which is important because the disintegration of the \\(\\Delta\\) resonance competes with the \\(K^{+}\\) production. Thus we see that these uncertainties do not influence the conclusion that the excitation function of the ratio is quite different for a soft EoS potential as compared to a hard one and that the data of the KaoS collaboration are only compatible with the soft EoS. The only possibility to change this conclusion is the assumption that the cross sections are explicitly density dependent in a way that the increasing density is compensated by a decreasing cross section. It would, however, have a strong influence on other observables which are presently well predicted by the IQMD calculations. The compression which can be obtained in heavy ion reactions depends on the impact parameter or, equivalently, on the experimentally accessible number of participating nucleons. Therefore by varying the impact parameter we can test the EoS at different densities. This dependence should be different for different EoS. This is indeed the case for the result of the simulations as seen in Fig. 6, top, where we display the kaon yield \\(M_{K^{+}}/A_{\\rm part}\\) for Au+Au collisions at 1.5 \\(A\\) GeV as a function of the participant number \\(A_{\\rm part}\\) and for different options: the standard version (soft, \\(KN\\)), calculations without kaon-nucleon interaction (soft, no \\(KN\\)) and with the isospin based \\(N\\Delta\\to N\\Lambda K^{+}\\) cross section (soft, \\(KN\\), \\(\\sigma^{*}\\)).

Figure 4: \\(K^{+}\\) spectra for different centrality bins as compared with (preliminary) experimental data from the KaoS collaboration.

A variation of the \\(KN\\) potential as well as of the \\(K^{+}\\) production cross section changes the dependence of the \\(K^{+}\\) yield on the number of participants, which can be parametrized by the form \\(M_{K^{+}}=A_{\\rm part}^{\\alpha}\\), only little. On the contrary, if we apply a hard EoS, the slope value \\(\\alpha\\) changes considerably and falls outside the values which are compatible with the experimental results, as shown in the middle part of the figure. In this figure we display as well the insensitivity of our result to the momentum dependence of the nucleon-nucleon interaction. As long as the compressibility is not changed the results of our calculations are very similar, independent of whether we use a static or a momentum-dependent NN potential. Hence the dependence of the \\(K^{+}\\) yield on the number of participants is also a robust variable for the determination of the EoS, which supports our earlier conclusion that the EoS is soft. Another confirmation that only a soft EoS describes the experimental data is the beam energy dependence of the fitted exponent \\(\\alpha\\) which is displayed in the right part of fig. 6. The data, which follow the curve for a soft equation of state, will soon be published [26]. In conclusion, we have shown that earthbound experiments have now reached a precision which allows a determination of the hadronic EoS.
The experimental results for the three observables which are most sensitive to the hadronic EOS are only compatible with theory if the hadronic EoS is soft. This conclusion is robust. Little known input quantities do not influence this conclusion. The observation of a neutron star with twice the solar mass seems to contradict this conclusion. It points toward a hard hadronic EoS. Both results are quite new and one has not to forget that we are comparing non equilibrium heavy ion reactions where about the same number of protons and neutrons are present and where mesons and baryon resonances are produced with cold neutron matter in equilibrium. In addition this contradiction depends also on the prediction that the observed star mass excludes the formation of quark matter in the interior, a consequence of the suggested EoS of quark matter which is still rather speculative. Figure 5: Comparison of the measured excitation function of the ratio of the \\(K^{+}\\) multiplicities per mass number \\(A\\) obtained in Au+Au and in C+C reactions (Ref. [19]) with various calculations. The use of a hard EoS is denoted by thin (blue) lines, a soft EoS by thick (red) lines. The calculated energies are given by the symbols, the lines are drawn to guide the eye. On top, two different versions of the \\(N\\Delta\\to K^{+}\\Lambda N\\) cross sections are used. One is based on isospin arguments [23], the other is determined by a relativistic tree level calculation [24]. The calculation by Fuchs [20] are shown as dotted lines. Middle: IQMD calculations with and without \\(KN\\) potential are compared. Bottom: The influence of different options for the life time of \\(\\Delta\\) in matter is demonstrated. To solve this contradiction is certainly a big challenge for both communities in the near future. ## References * (1) Sasa Ratkovic, Madappa Prakash, James M.Lattimer Submitted to ApJ, astro-ph/0512136 * (2) Ph. Podsiadlowski, J. D. M. Dewi, P. Lesaffre, J. C. Miller, W. G. Newton, J. R. Stone Mon.Not.Roy.Astron.Soc. 361 (2005) 1243-1249, astro-ph/0506566 * (3) J.M. Lattimer, M. Prakash Science Vol. 304 2004 (536-542) * (4) J.M. Lattimer and M. Prakash, ApJ **550** (2001) 426; A.W. Steiner, M. Prakash and J.M. Lattimer, Phys. Lett. **B486** (2000) 239; M. Alford and S. Reddy, Phys. Rev. D. **67** (2003) 074024. * (5) H.-Th. Janka, R. Buras, F.S. Kitaura Joyanes, A. Marek, M. Rampp Procs. 12th Workshop on Nuclear Astrophysics, Ringberg Castle, March 22-27, 2004, astro-ph/0405289 * (6) Fridolin Weber, Prog.Part.Nucl.Phys. 54 (2005) 193-288 * (7) David J. Nice, Eric M. Splaver, Ingrid H. Stairs, Oliver Loehmer, Axel Jessner, Michael Kramer, James M. Cordes, submitted to ApJ, astro-ph/0508050 * (8) C. Maieron, M. Baldo, G.F. Burgio, H.-J. Schulze Phys.Rev. D70 (2004) 043010 * (9) M. Baldo, M. Buballa, G.F. Burgio, F. Neumann, M. Oertel, H.-J. Schulze Phys.Lett. B562 (2003) 153-160 * (10) F. Gastineau, R. Nebauer, J.Aichelin, Phys.Rev. C65 (2002) 045204 * (11) D.H. Youngblood, H.L. Clark and Y.-W. Lui, Phys. Rev. Lett **84** (1999) 691. * (12) J. Piekarewicz, Phys. Rev. **C69** (2004) 041301 and references therein. * (13) H. Stocker and W. Greiner, Phys. Reports **137**, 278 (1986) and references therein. * (14) J. Aichelin and C.M. Ko, Phys. Rev. Lett. **55** (1985) 2661. * (15) Ch. Hartnack, H. Oeschler, J. Aichelin, Phys.Rev.Lett. 96 (2006) 012302 * (16) C. Hartnack et al., Eur. Phys. J. **A1** (1998) 151. * (17) A. Andronic et al., Phys. Lett. **B612** (2005) 173. * (18) E.E. Kolomieitsev et al., J.Phys. 
# Recent astrophysical and accelerator based results on the Hadronic Equation of State

Ch. Hartnack\\({}^{1}\\), H. Oeschler\\({}^{2}\\) and Jorg Aichelin\\({}^{1}\\)

\\({}^{1}\\)SUBATECH, Laboratoire de Physique Subatomique et des Technologies Associees, University of Nantes - IN2P3/CNRS - Ecole des Mines de Nantes, 4 rue Alfred Kastler, F-44072 Nantes Cedex 03, France

\\({}^{2}\\)Institut fur Kernphysik, Darmstadt University of Technology, 64289 Darmstadt, Germany

###### pacs: 25.75.Dw

How much energy is needed to compress nuclear matter? The answer to this question, the determination of \\(E/A(\\rho,T)\\), the energy per nucleon in nuclear matter in thermal equilibrium as a function of the density \\(\\rho\\) and the temperature \\(T\\), has been considered for many years as one of the most important challenges in nuclear physics. This quest has been dubbed the "search for the nuclear equation of state (EoS)". Only at the equilibrium density \\(\\rho_{0}\\) is the energy per nucleon known, \\(E/A(\\rho=\\rho_{0},T=0)=-16\\) MeV, obtained by extrapolating the Weizsacker mass formula to infinite matter. Standard ab initio many-body calculations do not allow for a determination of \\(E/A(\\rho,T)\\) at densities well above the saturation density because the low-density many-body expansion scheme (Bruckner G-matrix) breaks down and the number of contributing terms explodes. Therefore another strategy has been developed in nuclear reaction physics. Theory has identified experimental observables in nuclear reaction physics and in astrophysics which are sensitive to \\(E/A(\\rho,T)\\). 
Unfortunately these observables depend as well on other quantities which are either unknown or little known (like cross sections with resonances in the entrance channel) or difficult to assess theoretically (like the resonance lifetimes in hot and dense matter). It was hoped that, by comparing many observables for different systems and different energies with the theoretical predictions, these unknown or little known quantities could be determined experimentally and that finally the dependence of the observables on \\(E/A(\\rho,T)\\) could be isolated. In astrophysics the nuclear EoS plays an important role in binary mergers involving black holes and neutron stars [1], in double pulsars [2], in the mass-radius relation of neutron stars [3; 4] and in supernova explosions [5]. For a recent review on these topics we refer to [6]. Unfortunately, as in nuclear reaction physics, there are always other little known processes or properties which have to be understood before the nuclear EoS dependence can be isolated. We discuss here as an example the mass-radius relation of neutron stars. Fig. 1 shows the neutron star masses in units of the solar mass for different types of binaries. These masses are concentrated at around 1-1.5 solar masses. Fig. 2 shows a theoretical prediction of the mass-radius relation for neutron stars using different EoS. Since the nature of the interior of neutron stars is not known (in contrast to what the name suggests) one may suppose that it consists of hadrons or of quarks. But even if it consists of hadrons there are speculations that there is a \\(K^{-}\\) or a \\(\\pi^{-}\\) condensate or that there are hyperons in equilibrium with nuclear resonances. The same is true if the interior consists of quarks. Little known color-flavor locked quark phases may modify the EoS at densities which are reached in the interior of the neutron star. For a detailed discussion of all these phenomena we refer to ref. [6]. We see that the observed masses of neutron stars are compatible with almost all quark or hadron based EoS as long as the radius is unknown. Radii, however, are very difficult to measure. Because similar problems appear also for other observables, until recently the astrophysical observations of neutron stars did not help much to narrow down the uncertainty on the nuclear EoS. This situation has changed dramatically in the last year with the observation of a neutron star with a mass of two solar masses [7]. If this observation is finally confirmed, the mass-radius prediction of Fig. 2 excludes a quark interior of the neutron star [4]; even a soft nuclear EoS, which will be defined below, would be excluded. This is confirmed by the calculation of Maieron [8] which uses a MIT bag model or a color dielectric model EoS to describe the quark phase. Baldo et al. [9] argue that this conclusion may be premature because it depends too much on the equation of state of the quark phase. If one replaces the MIT bag model equation of state by that of the Nambu-Jona-Lasinio (NJL) Lagrangian, under certain conditions (no color superconducting phase) larger masses may be obtained. The standard NJL Lagrangian lacks, however, repulsion, and in view of the momentum cut-off, necessary to regularize the loop integrals, and the coupling constants in the diquark sector, which are not uniquely determined by the Fierz transformation, quantitative predictions at high quark densities are difficult in this approach even if qualitative agreement with pQCD calculations can be found [10]. 
Simulations of heavy ion reactions have shown that there are three possible observables which are sensitive to \\(E/A(\\rho,T)\\) at densities larger than \\(\\rho_{0}\\): (i) the strength distribution of giant isoscalar monopole resonances [11; 12], (ii) the in-plane sidewards flow of nucleons in semi-central heavy ion reactions at energies between 100 \\(A\\) MeV and 400 \\(A\\) MeV [13] and (iii) the production of \\(K^{+}\\) mesons in heavy ion reactions at energies around 1 \\(A\\) GeV [14]. For the present status of these approaches we refer to [15]. Monopole resonances test the nuclear EoS at densities only slightly larger than the normal nuclear matter density. Therefore they are of little help if one compares the EoS determined from astrophysics with that extracted from nuclear reaction physics. For the in-plane flow the conclusions are not conclusive yet. This is due to the difficulties to determine the EoS in heavy ion collisions. An EoS is defined in a thermally equilibrated system but in heavy ion collisions equilibrium is not obtained as the momentum distribution of hadrons shows. In addition, nuclei are finite size systems where the surface plays an important role. This can easily be seen inspecting the Weizsacker mass formula which gives for infinite matter almost twice the binding energy/per nucleon as for finite nuclei. Therefore complicated non-equilibrium transport theories have to be employed and the conclusion on the nuclear EoS can only be indirect, in determining the EoS for those potentials which give best agreement with the heavy ion results. In order to determine the energy which is necessary to compress infinite nuclear matter in thermal equilibrium by heavy ion reactions in which no equilibrium is obtained one chooses the following strategy: The transport theory calculates the time evolution of the quantal particles described by Gaussian wave functions. The time evolution is given by a variational principle and the equations one obtains for this choice of the wave function are identical to the classical Hamilton equations where the classical two-body potential is replaced by the expectation value of a Skyrme potential. The Skyrme potential is a simple approximation to the real part of the Bruckner \\(G\\)-matrix which is too complicated for performing simulations of heavy ion collisions. For this potential the potential energy in infinite nuclear matter is calculated. To determine the nuclear EoS we average this (momentum-dependent) two-body potential over the momentum distribution of a given temperature \\(T\\) and add to it the kinetic energy. Expressed as a function of the density we obtain the desired nuclear EoS \\(E/A(\\rho,T)\\). The potential which we use has five parameters. Figure 1: Measured and estimated masses of neutron stars in radio binary pulsars and in x-ray accreting binaries. Error bars are 1\\(\\sigma\\). Vertical dotted lines show average masses of each group (1.62 M\\({}_{\\odot}\\), 1.34 M\\({}_{\\odot}\\) and 1.56 M\\({}_{\\odot}\\)); dashed vertical lines indicate inverse error weighted average masses (1.48 M\\({}_{\\odot}\\), 1.41 M\\({}_{\\odot}\\) and 1.34 M\\({}_{\\odot}\\)). The figure is taken from ref [4] Four of them are fixed by the binding energy per nucleon in infinite nuclear matter at \\(\\rho_{0}\\) and the optical potential which has been measured in pA reactions [16]. The only parameter which has been not determined by experiments yet is the compressibility \\(\\kappa\\) at \\(\\rho_{0}\\). 
For \\(\\kappa<250\\) MeV one calls the EoS soft, whereas an EoS is called hard for \\(\\kappa>350\\) MeV. Once the parameters are fixed we use the two-body potential with these parameters in the transport calculation. There is an infinite number of two-body potentials which give the same EoS because the range of the potential does not play a role in infinite matter. The nuclear surface measured in electron scattering on nuclei fixes the range, however, quite well. The different transport theories give quite comparable results for the bulk part but it is difficult to model the surface. (In these simulations there is no surface in the strict sense. Each nucleon contributes to the density by its Gaussian wave function and the positions of the hadrons in the course of the reaction determine the surface as well as the density gradients.) The in-plane flow is caused by the density gradient and hence the numerical value depends on how well the surface of the nucleus can be modeled during the reaction. Already small density fluctuations, which are difficult to control, change the value of the in-plane flow considerably. Therefore the second approach, the determination of the EoS by measuring the in-plane flow, has not produced conclusive results yet [17]. The third approach, to measure the EoS by means of the \\(K^{+}\\) yield, depends on bulk properties of matter and surface fluctuations have no influence. Here the different transport theories have converged. This was possible due to a special workshop at the ECT* in Trento/Italy where the authors of the different simulation codes have discussed their approaches in detail and have unified most of the input quantities. The results of this common effort have been published in [18]. As an example we display here the \\(K^{+}\\) \\(p_{t}\\) spectra at midrapidity obtained in the different transport theories at different energies. Because the slope of the \\(K^{+}\\) spectra changes with each \\(K^{+}N\\) rescattering collision, the slope of the \\(p_{t}\\) spectra encodes not only the \\(K^{+}\\) momentum distribution at the time of production but also the distribution of the number of rescatterings. It is therefore all but trivial. Without the \\(KN\\) potential the slopes are almost identical and even the absolute yield, which depends on a correct modeling of the Fermi motion of the nucleons, is very similar. Figure 2: Mass-radius diagram for neutron stars. Black (green) curves are for normal matter (SQM) EoS [for definitions of the labels, see [4]]. Regions excluded by general relativity (GR), causality and rotation constraints are indicated. Contours of radiation radii \\(R_{\\infty}\\) are given by the orange curves. The figure is from [4]. If we include the \\(KN\\) interaction, which is not identical in the different approaches (see [18]), we still observe a very similar slope for most of the programs. Due to this progress the simulation programs can now be used to extract information which was up to now theoretically inaccessible, like the hadronic EOS [15]. Three independent experimental observables, the ratio of the excitation functions of the \\(K^{+}\\) production for Au+Au and for C+C [19; 20], the dependence of the \\(K^{+}\\) yield on the number of participants and the excitation function of this dependence can be simultaneously reproduced if in these transport theories the nucleons interact with a potential which yields, in infinite matter in equilibrium, a compressibility of the EoS of \\(\\kappa\\approx 200\\) MeV. 
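To make the link between a Skyrme-type potential of the kind described above and the compressibility \\(\\kappa\\) explicit, the following minimal Python sketch evaluates \\(E/A(\\rho,T=0)\\) for a simple static two-term Skyrme parametrization of cold symmetric nuclear matter and extracts \\(\\kappa=9\\rho_{0}^{2}\\,\\partial^{2}(E/A)/\\partial\\rho^{2}|_{\\rho_{0}}\\). The functional form and the two saturation constraints (\\(E/A(\\rho_{0})=-16\\) MeV and a vanishing first derivative at \\(\\rho_{0}\\)) follow the standard QMD construction; the numerical values of \\(\\rho_{0}\\), the nucleon mass and the two illustrative choices of the exponent \\(\\gamma\\) are assumptions made here for illustration and are not the exact (momentum dependent) IQMD parameter set.

```python
import numpy as np

HBARC = 197.327   # MeV fm
M_N   = 938.9     # nucleon mass [MeV]          (illustrative value)
RHO_0 = 0.16      # saturation density [fm^-3]  (illustrative value)

# Fermi-gas kinetic energy per nucleon of cold symmetric matter at rho_0
E_F0  = (HBARC * (1.5 * np.pi**2 * RHO_0) ** (1.0 / 3.0)) ** 2 / (2.0 * M_N)
E_KIN = 0.6 * E_F0                       # (3/5) E_F(rho_0)

def skyrme_eos(gamma):
    """Static Skyrme form  E/A(u) = E_KIN u^(2/3) + (a/2) u + b/(g+1) u^g,  u = rho/rho_0.
    a and b are fixed by E/A(1) = -16 MeV and d(E/A)/du|_1 = 0; returns (a, b, kappa)."""
    b = (16.0 + E_KIN / 3.0) * (gamma + 1.0) / (gamma - 1.0)
    a = 2.0 * (-16.0 - E_KIN - b / (gamma + 1.0))
    # compressibility kappa = 9 rho_0^2 d^2(E/A)/drho^2 = 9 d^2(E/A)/du^2 at u = 1
    kappa = -2.0 * E_KIN + 9.0 * b * gamma * (gamma - 1.0) / (gamma + 1.0)
    return a, b, kappa

for gamma in (1.17, 2.0):                # illustrative "soft" and "hard" exponents
    a, b, kappa = skyrme_eos(gamma)
    print(f"gamma = {gamma:4.2f}:  alpha = {a:7.1f} MeV  beta = {b:6.1f} MeV  kappa = {kappa:4.0f} MeV")
```

With these assumptions the two choices of \\(\\gamma\\) give \\(\\kappa\\) of roughly 200 MeV and 380 MeV, i.e. the "soft" and "hard" cases referred to in the text.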
Large compressibility moduli yield results which disagree with all three observables. This value of \\(\\kappa\\) extracted from the \\(K^{+}\\) production, which is sensitive to nuclear matter around \\(2.5\\rho_{0}\\), is very similar to that extracted by the study of monopole vibrations at \\(\\rho_{0}\\)[11; 12]. It is not sufficient to determine the compressibility modulus. One has to demonstrate as well that its numerical value is robust, i.e. that the different implementations of yet unsolved physical questions, like the \\(N\\Delta\\to K^{+}\\Lambda N\\) cross section, the \\(KN\\) interaction as well as the life time of the nuclear resonances in the hadronic environment, do not affect its value. We employ the Isospin Quantum Molecular Dynamics (IQMD) with momentum dependent forces. All details of the standard version of the program may be found in [16]. In addition we have implemented for this calculation all cross sections which lead to the production of \\(K^{+}\\) as well as the elastic and the charge exchange \\(KN\\to KN\\) reactions. The parametrization of the cross section may be found in [18]. In the standard version the \\(K^{+}N\\) potential leads to an increase of the \\(K^{+}\\) mass in matter, \\(m^{K}(\\rho)=m^{K}_{0}(1+0.075\\frac{\\rho}{\\rho_{0}})\\), in agreement with recent self-consistent calculations of the spectral function of the \\(K^{+}\\)[21]. The \\(\\Lambda\\) potential is 2/3 of the nucleon potential, assuming that the s quark is inert. The calculations reproduce the experimental data quite well as can be seen in fig. 4 where we compare the experimental and theoretical \\(K^{+}\\) spectra for different centrality bins and for Au+Au at 1.48 \\(A\\) GeV. This figure shows as well the influence of the \\(K^{+}N\\) potential which modifies not only the overall multiplicity of \\(K^{+}\\) due to the increase of the in-medium mass but also the spectral form, confirming the complexity of the transverse momentum spectrum. In order to minimize the experimental systematic errors and the consequences of theoretical uncertainties the KaoS collaboration has proposed to study not directly the excitation function of the \\(K^{+}\\) yield but that of the yield ratio of heavy to light systems [19]. Calculations have shown that ratios are much less sensitive to little known input parameters because these affect both systems in a rather similar way. We have shown in fig. 4 that the absolute yields are well reproduced in our simulations. Therefore we can use this ratio directly for a quantitative comparison with data. The ratio of the \\(K^{+}\\) yields obtained in C+C and Au+Au collisions is quite sensitive to the EoS because in Au+Au collisions densities up to 3 \\(\\rho_{0}\\) (depending on the EoS) are reached whereas in C+C collisions compression is practically absent due to less stopping. Figure 5 shows the comparison of the measured ratio of the \\(K^{+}\\) multiplicities obtained in Au+Au and C+C reactions [19] together with transport model calculations as a function of the beam energy. Figure 3: Final \\(K^{+}\\) transverse momentum distribution at \\(b\\)=1 fm, \\(|y_{cm}|<0.5\\) and with an enforced \\(\\Delta\\) lifetime of 1/120 MeV (top row without, bottom row with KN potential) in the different approaches [18]. We see, first of all in the top row, 
that the excitation function of the yield ratio depends on the potential parameters (hard EoS: \\(\\kappa=380\\) MeV, thin lines and solid symbols, soft EoS: \\(\\kappa=200\\) MeV, thick lines and open symbols) in a quite sensible way and - even more essential - that the prediction in the standard version of the simulation (squares) for a soft and a hard EoS potential differ much more than the experimental uncertainties. The calculation of Fuchs et al. [20] given in the same graph, agrees well with our findings. This observation is, as said, not sufficient to determine the potential parameters uniquely because in these transport theories several not precisely known processes are encoded. For these processes either no reliable theoretical prediction has been advanced or the different approaches yield different results for the same observable. Therefore, it is necessary to verify that these uncertainties do not render our conclusion premature. There are 3 identified uncertainties: the \\(\\sigma_{N\\Delta\\to K^{+}}\\) cross section, the density dependence of the \\(K^{+}N\\) potential and the lifetime of \\(\\Delta\\) in matter if produced in a collisions with a sharp energy of two scattering partners. We discuss now how these uncertainties influence our results: Figure 5, top, shows as well the influence of the unknown \\(N\\Delta\\to K^{+}\\Lambda N\\) cross section on this ratio. We confront the standard IQMD option (with cross sections for \\(\\Delta N\\) interactions from Tsushima et al. [18]) with another option, \\(\\sigma(N\\Delta)=3/4\\sigma(NN)\\)[23], which is based on isospin arguments and has been frequently employed. Both cross sections differ by up to a factor of ten and change significantly the absolute yield of \\(K^{+}\\) in heavy ion reactions but do not change the shape of the ratio. The middle part demonstrates the influence of the kaon-nucleon potential which is not precisely known at the densities obtained in this reaction. The uncertainties due to the \\(\\Delta\\) life time are discussed in the bottom part. Both calculations represent the two extreme values for this lifetime [18] which is important because the disintegration of the \\(\\Delta\\) resonance competes with the \\(K^{+}\\) production. Thus we see that these uncertainties do not influence the conclusion that the excitation function of the ratio is quite different for a soft EoS potential as compared to a hard one and that the data of the KaoS collaboration are only compatible with the soft EoS. The only possibility to change this conclusions is the assumption that the cross sections are explicitly density dependent in a way that the increasing density is compensated by a decreasing cross section. It would have a strong influence on other observables which are presently well predicted by the IQMD calculations. The compression which can be obtained in heavy ion reactions depends on the impact parameter or, equivalently, on the experimentally accessible number of participating nucleons. Therefore by varying the impact parameter we can test the EoS at different densities. This dependence should be different for different EoS. This is indeed the case for the result of the simulations as seen in Fig. 
6, top, where we display the kaon yield \\(M_{K^{+}}/A_{\\rm part}\\) for Au+Au collisions at 1.5 \\(A\\) GeV as a function of the participant number \\(A_{\\rm part}\\) and for different options: standard version (soft, \\(KN\\)), Figure 4: \\(K^{+}\\) spectra for different centrality bins as compared with (preliminary) experimental data from the KaoS collaboration calculations without kaon-nucleon interaction (soft, no \\(KN\\)) and with the isospin based \\(N\\Delta\\to N\\Lambda K^{+}\\) cross section (soft, \\(KN\\), \\(\\sigma^{*}\\)). A variation of the KN potential as well as of the \\(K^{+}\\) production cross section change the dependence of the \\(K^{+}\\) yield on the number of participants, which can be parametrized by the form \\(M_{K^{+}}=A^{\\alpha}_{\\rm part}\\), only little. On the contrary, if we apply a hard EoS, the slope value \\(\\alpha\\) changes considerable and is outside of the values which are compatible with the experimental results, as shown in the middle part of the figure. In this figure we display as well the insensitivity of our result to the momentum dependence of the nucleon nucleon interaction. As long as the compressibility is not changed the results of our calculations are very similar independent on whether we have a static or a momentum dependent NN potential. Hence the dependence of the \\(K^{+}\\) yield on the number of participants is also a robust variable for the determination of the EoS which supports our earlier conclusion that the EoS is soft. Another confirmation that only a soft EoS describes the experimental data is the beam energy dependence of the fitted exponent \\(\\alpha\\) which is displayed in the right part of fig. 6. The data, which follow the curve for a soft equation of state, will soon be published [26]. In conclusion, we have shown that earthbound experiments have now reached a precision which allows to determine the hadronic EoS. The experimental results for the three observables which are most sensitive to the hadronic EOS are only compatible with theory if the hadronic EoS is soft. This conclusion is robust. Little known input quantities do not influence this conclusion. The observation of a neutron star with twice the solar mass seems to contradict this conclusion. It points toward a hard hadronic EoS. Both results are quite new and one has not to forget that we are comparing non equilibrium heavy ion reactions where about the same number of protons and neutrons are present and where mesons and baryon resonances are produced with cold neutron matter in equilibrium. In addition this contradiction depends also on the prediction that the observed star mass excludes the formation of quark matter in the interior, a consequence of the suggested EoS of quark matter which is still rather speculative. Figure 5: Comparison of the measured excitation function of the ratio of the \\(K^{+}\\) multiplicities per mass number \\(A\\) obtained in Au+Au and in C+C reactions (Ref. [19]) with various calculations. The use of a hard EoS is denoted by thin (blue) lines, a soft EoS by thick (red) lines. The calculated energies are given by the symbols, the lines are drawn to guide the eye. On top, two different versions of the \\(N\\Delta\\to K^{+}\\Lambda N\\) cross sections are used. One is based on isospin arguments [23], the other is determined by a relativistic tree level calculation [24]. The calculation by Fuchs [20] are shown as dotted lines. Middle: IQMD calculations with and without \\(KN\\) potential are compared. 
Bottom: The influence of different options for the life time of \\(\\Delta\\) in matter is demonstrated. To solve this contradiction is certainly a big challenge for both communities in the near future. ## References * (1) Sasa Ratkovic, Madappa Prakash, James M.Lattimer Submitted to ApJ, astro-ph/0512136 * (2) Ph. Podsiadlowski, J. D. M. Dewi, P. Lesaffre, J. C. Miller, W. G. Newton, J. R. Stone Mon.Not.Roy.Astron.Soc. 361 (2005) 1243-1249, astro-ph/0506566 * (3) J.M. Lattimer, M. Prakash Science Vol. 304 2004 (536-542) * (4) J.M. Lattimer and M. Prakash, ApJ **550** (2001) 426; A.W. Steiner, M. Prakash and J.M. Lattimer, Phys. Lett. **B486** (2000) 239; M. Alford and S. Reddy, Phys. Rev. D. **67** (2003) 074024. * (5) H.-Th. Janka, R. Buras, F.S. Kitaura Joyanes, A. Marek, M. Rampp Procs. 12th Workshop on Nuclear Astrophysics, Ringberg Castle, March 22-27, 2004, astro-ph/0405289 * (6) Fridolin Weber, Prog.Part.Nucl.Phys. 54 (2005) 193-288 * (7) David J. Nice, Eric M. Splaver, Ingrid H. Stairs, Oliver Loehmer, Axel Jessner, Michael Kramer, James M. Cordes, submitted to ApJ, astro-ph/0508050 * (8) C. Maieron, M. Baldo, G.F. Burgio, H.-J. Schulze Phys.Rev. D70 (2004) 043010 * (9) M. Baldo, M. Buballa, G.F. Burgio, F. Neumann, M. Oertel, H.-J. Schulze Phys.Lett. B562 (2003) 153-160 * (10) F. Gastineau, R. Nebauer, J.Aichelin, Phys.Rev. C65 (2002) 045204 * (11) D.H. Youngblood, H.L. Clark and Y.-W. Lui, Phys. Rev. Lett **84** (1999) 691. * (12) J. Piekarewicz, Phys. Rev. **C69** (2004) 041301 and references therein. * (13) H. Stocker and W. Greiner, Phys. Reports **137**, 278 (1986) and references therein. * (14) J. Aichelin and C.M. Ko, Phys. Rev. Lett. **55** (1985) 2661. * (15) Ch. Hartnack, H. Oeschler, J. Aichelin, Phys.Rev.Lett. 96 (2006) 012302 * (16) C. Hartnack et al., Eur. Phys. J. **A1** (1998) 151. * (17) A. Andronic et al., Phys. Lett. **B612** (2005) 173. * (18) E.E. Kolomieitsev et al., J.Phys. G31 (2005) S741 * (19) C. Sturm et al., (KaoS Collaboration), Phys. Rev. Lett. **86** (2001) 39. * (20) C. Fuchs et al., Phys. Rev. Lett **86** (2001) 1794. Figure 6: Dependence of the \\(K^{+}\\) scaling on the nuclear EoS. We present this dependence in form of \\(M_{K^{+}}=A_{\\rm part}^{\\alpha}\\). On the top the dependence of \\(M_{K^{+}}/A_{\\rm part}\\) as a function of \\(A_{\\rm part}\\) is shown for different options: a “hard” EoS with \\(KN\\) potential (solid line), the other three lines show a “soft” EoS, without \\(KN\\) potential and \\(\\sigma(N\\Delta)\\) from Tsushima [24] (dotted line), with \\(KN\\) potential and the same parametrization of the cross section (dashed line) and with \\(KN\\) potential and \\(\\sigma(N\\Delta)=3/4\\sigma(NN)\\). On the bottom the fit exponent \\(\\alpha\\) is shown as a function of the compressibility for calculations with momentum-dependent interactions (mdi) and for static interactions (dashed line)[16]. On the right hand side we compare the energy dependence of the fit exponent \\(\\alpha\\) for the two EoS. * (21) C.L. Korpa and M.F.M. Lutz, submitted to Heavy Ion Physics, nucl-th/0404088. * (22) C. Hartnack and J. Aichelin, Proc. Int. Workshop XXVIII on Gross prop. of Nucl. and Nucl. Excit., Hirschegg, January 2000 edt. by M. Buballa, W. Norenberg, B. Schafer and J. Wambach; and to be published in Phys. Rep. * (23) J. Randrup and C.M. Ko, Nucl. Phys. **A 343**, 519 (1980). * (24) K. Tsushima et al., Phys. Lett. **B337** (1994) 245; Phys. Rev. **C 59** (1999) 369. * (25) A. Forster et al., (KaoS Collaboration), Phys. Rev. Lett. 
**31** (2003) 152301; J. Phys. G. **30** (2004) 393; A. Forster, Ph.D. thesis, Darmstadt University of Technology, 2003. * (26) KaoS collaboration, private communication and to be published.
In astrophysics as well as in hadron physics progress has recently been made on the determination of the hadronic equation of state (EOS) of compressed matter. The results are contradictory, however. Simulations of heavy ion reactions are now sufficiently robust to predict the stiffness of the EOS from (i) the energy dependence of the ratio of \\(K^{+}\\) from Au+Au and C+C collisions and (ii) the centrality dependence of the \\(K^{+}\\) multiplicities. The data are best described with a compressibility coefficient at normal nuclear matter density \\(\\kappa\\) around 200 MeV, a value which is usually called "soft". The recent observation of a neutron star with a mass of twice the solar mass is only compatible with theoretical predictions if the EOS is stiff. We review the present situation.
Write a summary of the passage below.
arxiv-format/0611102v2.md
# Introduction

The van der Waals (VdW) excluded volume model is successfully used to describe the hadron yields measured in relativistic nucleus-nucleus collisions (see e.g. [1, 2] and references therein). This model treats the hadrons as hard core spheres and, therefore, takes into account the hadron repulsion at short distances. In a relativistic situation one should, however, include the Lorentz contraction of the hard core hadrons. Recently, both the conventional cluster and the virial expansions were generalized to momentum dependent inter-particle potentials, accounting for the Lorentz contracted hard core repulsion [3], and the derived equation of state (EOS) was applied to describe hadron yields observed in relativistic nuclear collisions [4]. The VdW equation obtained in the traditional way leads to a reduction of the second virial coefficient (the analog of the excluded volume) compared to the nonrelativistic case. However, in the high pressure limit the second virial coefficient remains finite. This fact immediately leads to a problem with causality in relativistic mechanics: the speed of sound exceeds the speed of light [5]. The influence of relativistic effects on the hard core repulsion may be important for a variety of effective models of hadrons and hadronic matter such as the modified Walecka model [6], various extensions of the Nambu-Jona-Lasinio model [7], the quark-meson coupling model [8], the chiral SU(3) model [9], etc. Clearly, the relativistic hard core repulsion should be important for any effective model in which the strongly interacting particles have reduced masses compared to their vacuum values, because with lighter masses a large portion of the particles becomes relativistic. Nevertheless, the relativistic hard core repulsion has, so far, not been incorporated into these models due to the absence of the required formalism. The Lorentz contraction of rigid spheres representing the hadrons may also be essential at high particle densities which can be achieved at modern colliders. Very recently it was understood that in the baryonless deconfined phase above the cross-over temperature \\(T_{c}\\) some hadrons may survive up to large temperatures like \\(3T_{c}\\)[10, 11, 12], and that above \\(T_{c}\\) there may exist bound states [13] and resonances [14]. Moreover, an exactly solvable statistical model of quark-gluon bags with surface tension [15] indicates that above the cross-over transition [12] the coexistence of hadronic resonances with QGP may, in principle, survive up to infinite temperature. Thus, above \\(T_{c}\\) the relativistic effects of the hard core repulsion can be important for many hadronic resonances and hadron-like bound states of quarks, especially if their masses are reduced due to chiral symmetry restoration. Also, a VdW EOS which obeys the causality condition in the limit of high density and simultaneously reproduces the correct low density behavior is of significant theoretical value, because such an EOS had not been formulated in more than a century of special relativity. This work is devoted to the investigation of the necessary assumptions to formulate such an equation of state. The work is organized as follows. In Sect. 2 a summary of both the cluster and virial expansion for the Lorentz contracted rigid spheres is given. It is shown that the VdW extrapolation in the relativistic case is not a unique procedure. Therefore, an alternative derivation of the VdW EOS is considered there. 
The high pressure limit is studied in detail in Sect. 3. It is shown that the suggested relativistic generalization of the earlier approach [3] obeys the causality condition. The conclusions are given in the last section. **2. Relativization of the van der Waals EOS** The excluded volume effect accounts for the blocked volume of two spheres when they touch each other. If hard sphere particles move with relativistic velocities it is necessary to include their Lorentz contraction in the rest frame of the medium. The model suggested in Ref. [16] is not satisfactory: the second virial coefficient \\(a_{2}=4\\,v_{\\rm o}\\) of the VdW excluded volume model is confused there with the proper volume \\(v_{\\rm o}\\) of an individual particle, i.e., the contraction effect is introduced for the proper volume of each particle. In order to get the correct result it is necessary to account for the excluded volume of two Lorentz contracted spheres. Let \\({\\bf r}_{i}\\) and \\({\\bf r}_{j}\\) be the coordinates of the \\(i\\)-th and \\(j\\)-th Boltzmann particle, respectively, and \\({\\bf k}_{i}\\) and \\({\\bf k}_{j}\\) be their momenta; \\({\\bf\\hat{r}}_{ij}\\) denotes the unit vector \\({\\bf\\hat{r}}_{ij}={\\bf r}_{ij}/|{\\bf r}_{ij}|\\) (\\({\\bf r}_{ij}={\\bf r}_{i}-{\\bf r}_{j}\\)). Then for a given set of vectors \\(({\\bf\\hat{r}}_{ij},{\\bf k}_{i},{\\bf k}_{j})\\) for the Lorentz contracted rigid spheres of radius \\(R_{\\rm o}\\) there exists the minimum distance between their centers \\(r_{ij}({\\bf\\hat{r}}_{ij};{\\bf k}_{i},{\\bf k}_{j})={\\rm min}|{\\bf r}_{ij}|\\). The dependence of the potentials \\(u_{ij}\\) on the coordinates \\({\\bf r}_{i},{\\bf r}_{j}\\) and momenta \\({\\bf k}_{i},{\\bf k}_{j}\\) can be given in terms of the minimal distance as follows \\[u({\\bf r}_{i},{\\bf k}_{i};{\\bf r}_{j},{\\bf k}_{j})=\\left\\{\\begin{array}{ll}0\\,,&|{\\bf r}_{i}-{\\bf r}_{j}|>r_{ij}\\,({\\bf\\hat{r}}_{ij};{\\bf k}_{i},{\\bf k}_{j})\\,\\\\ \\\\ \\infty\\,,&|{\\bf r}_{i}-{\\bf r}_{j}|\\leq r_{ij}\\,({\\bf\\hat{r}}_{ij};{\\bf k}_{i},{\\bf k}_{j})\\.\\end{array}\\right. \\tag{1}\\] The general approach to the cluster and virial expansions [17] is valid for this momentum dependent potential, and in the grand canonical ensemble it leads to the transcendental equation for pressure [3] \\[p(T,\\mu)\\ =\\ T\\rho_{t}(T)\\ \\exp\\left(\\frac{\\mu-a_{2}p}{T}\\right)\\ \\equiv\\ p_{id}(T,\\mu-a_{2}p)\\, \\tag{2}\\] with the second virial coefficient \\[a_{2}(T) = \\frac{g^{2}}{\\rho_{t}^{2}}\\int\\frac{d{\\bf k}_{1}d{\\bf k}_{2}}{(2\\pi)^{6}}\\,e^{-\\frac{E(k_{1})+E(k_{2})}{T}}\\ v({\\bf k}_{1},{\\bf k}_{2})\\, \\tag{3}\\] \\[v({\\bf k}_{1},{\\bf k}_{2}) = \\frac{1}{2}\\ \\int d{\\bf r}_{12}\\ \\Theta\\left(r_{12}({\\bf\\hat{r}}_{12};{\\bf k}_{1},{\\bf k}_{2})\\ -\\ |{\\bf r}_{12}|\\right)\\, \\tag{4}\\] where the thermal density is defined as \\(\\rho_{t}(T)=g\\int\\frac{d{\\bf k}}{(2\\pi)^{3}}e^{-\\frac{E(k)}{T}}\\), the degeneracy as \\(g\\), and \\(v({\\bf k}_{1},{\\bf k}_{2})\\) denotes the relativistic analog of the usual excluded volume for the two spheres moving with the momenta \\({\\bf k}_{1}\\) and \\({\\bf k}_{2}\\); hence, the factor \\(1/2\\) in front of the volume integral in (4) accounts for the fact that the excluded volume of two moving spheres is taken per particle. In what follows we do not include antiparticles, to keep the presentation simple, but this can be done easily. 
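As a concrete illustration of how Eq. (2) is handled in practice, the sketch below solves the transcendental relation \\(p=p_{id}(T,\\mu-a_{2}p)\\) for a single Boltzmann species by bracketing the root. For simplicity it uses a temperature-independent excluded volume \\(a_{2}=4v_{\\rm o}\\) (the nonrelativistic value); the particle mass, degeneracy, hard-core radius and the chosen \\((T,\\mu)\\) points are illustrative assumptions, not values taken from the text.

```python
import numpy as np
from scipy.special import kn           # modified Bessel function K_n
from scipy.optimize import brentq

HBARC = 197.327                        # MeV fm

def rho_thermal(T, m, g):
    """Boltzmann thermal density  g m^2 T K_2(m/T) / (2 pi^2), converted to fm^-3."""
    return g * m * m * T * kn(2, m / T) / (2.0 * np.pi**2 * HBARC**3)

def p_ideal(T, mu, m, g):
    """Ideal Boltzmann pressure  T rho_t(T) exp(mu/T)  in MeV/fm^3."""
    return T * rho_thermal(T, m, g) * np.exp(mu / T)

def p_vdw(T, mu, m, g, a2):
    """Excluded-volume pressure from Eq. (2):  p = p_id(T, mu - a2*p)."""
    f = lambda p: p - p_ideal(T, mu - a2 * p, m, g)
    # f(0) < 0 and f(p_id) > 0, so the root is bracketed by [0, p_id]
    return brentq(f, 0.0, p_ideal(T, mu, m, g))

m, g, R0 = 939.0, 4.0, 0.5                       # illustrative nucleon-like species
a2 = 4.0 * (4.0 * np.pi / 3.0) * R0**3           # nonrelativistic a_2 = 4 v_o  [fm^3]
T = 100.0
for mu in (600.0, 800.0, 940.0):
    p = p_vdw(T, mu, m, g, a2)
    # particle density that follows from Eq. (2) by differentiation with respect to mu
    n = p / (T * (1.0 + np.exp(mu / T) * rho_thermal(T, m, g) * a2))
    print(f"mu = {mu:5.0f} MeV :  p = {p:8.4f} MeV/fm^3   n = {n:6.4f} fm^-3")
```

The same root-bracketing strategy works when the constant \\(a_{2}\\) is replaced by the temperature dependent coefficient of Eq. (3), since the right hand side of Eq. (2) remains a monotonically decreasing function of \\(p\\).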
Then the pressure (2) generates the following particle density \\[n(T,\\mu)=\\frac{\\partial p(T,\\mu)}{\\partial\\mu}=\\frac{e^{\\frac{\\mu}{T}}\\rho_{ t}(T)}{1+e^{\\frac{\\mu}{T}}\\rho_{t}(T)a_{2}(T)}\\equiv\\frac{p}{T\\left(1+e^{ \\frac{\\mu}{T}}\\rho_{t}(T)a_{2}(T)\\right)}\\, \\tag{5}\\]which in the limit of high pressure \\(p(T,\\mu)\\to\\infty\\) gives a limiting value of particle density \\(n(T,\\mu)\\to a_{2}^{-1}(T)\\). A form of Eq. (2) with constant \\(a_{2}\\) was obtained for the first time in Ref. [6]. The new feature of Eq. (2) is the temperature dependence of the excluded volume \\(a_{2}\\) (T) (3) which is due to the Lorentz contraction of the rigid spheres. This is a necessary and important modification which accounts for the relativistic properties of the interaction. It leads, for instance, to a 50 % reduction of the excluded volume of pions already at temperatures \\(T=140\\) MeV [3]. The calculation of the cluster integral in relativistic case is more complicated because each sphere becomes an ellipsoid due to the Lorentz contraction and because the relativistic excluded volume strongly depends not only on the contraction of the spheres, but also on the angle between the particle 3-velocities. Therefore, in Appendix A we give a derivation of a rather simple formula for the coordinate space integration in \\(a_{2}\\) which is found to be valid with an accuracy of a few percents for all temperatures. Its simplicity enables us to perform the angular integrations in \\(a_{2}(T)\\) analytically and obtain \\[a_{2}(T)\\approx\\frac{\\alpha v_{\\rm o}}{8}\\left(3\\pi+\\frac{74\\,\\rho_{s}}{3\\, \\rho_{t}}\\right)\\,\\quad\\rho_{s}(T)=\\int\\frac{d{\\bf k}}{(2\\pi)^{3}}\\frac{m}{E}\\ e^{- \\frac{E}{T}}. \\tag{6}\\] The expression for the coefficient \\(\\alpha\\approx 1/1.065\\) is given in Appendix A by Eq. (51). Using this result it is easy to show that in the limit of high temperature \\(T\\gg m\\) the ratio of the scalar density \\(\\rho_{s}(T)\\) to the thermal density \\(\\rho_{t}(T)\\) in (6) vanishes and the second virial coefficient approaches the constant value: \\[a_{2}(T)\\bigg{|}_{T\\gg m}\\longrightarrow\\ \\frac{3\\pi\\alpha v_{\\rm o}}{8}+{\\rm O }\\left(\\frac{m}{T}\\right)\\, \\tag{7}\\] which is about \\(\\frac{3\\pi}{32}\\) times smaller compared to the value of the nonrelativistic excluded volume, and, hence, is surprisingly very close to the dense packing limit of the nonrelativistic hard spheres. Similarly to the nonrelativistic VdW case [5] this leads to the problem with causality at very high pressures. Of course, in this formulation the superluminar speed of sound should appear at very high temperatures which are unreachable in hadronic phase. Thus the simple \"relativization\" of the virial expansion is much more realistic than the nonrelativistic description used in Refs. [1, 2], but it does not solve the problem completely. The reason why the simplest generalization (2) fails is rather trivial. Eq. (2) does not take into account the fact that at high densities the particles disturb the motion of their neighbors. The latter leads to the more compact configurations than predicted by Eqs. (2 - 4), i.e., the motion of neighboring particles becomes correlated due to a simple geometrical reason. In other words, since the \\(N\\)-particle distribution is a monotonically decreasing function of the excluded volume, the most probable state should correspond to the configurations of smallest excluded volume of all neighboring particles. 
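Returning to the second virial coefficient of Eq. (6): for Boltzmann statistics both densities reduce to modified Bessel functions, \\(\\rho_{s}/\\rho_{t}=K_{1}(m/T)/K_{2}(m/T)\\), so the temperature dependence of \\(a_{2}\\) is trivial to tabulate. The sketch below uses this standard relation together with the coefficient \\(\\alpha\\approx 1/1.065\\) quoted above; with a pion mass of 138 MeV (an illustrative choice) it reproduces the roughly 50% reduction of the pion excluded volume at \\(T=140\\) MeV mentioned in the text and approaches the high-temperature limit \\(3\\pi\\alpha v_{\\rm o}/8\\) of Eq. (7).

```python
import numpy as np
from scipy.special import kn      # modified Bessel functions K_1, K_2

ALPHA = 1.0 / 1.065               # coefficient alpha of Eq. (51), as quoted in the text
M_PI  = 138.0                     # pion mass [MeV] (illustrative)

def a2_over_v0(T, m):
    """Eq. (6):  a_2(T)/v_o = (alpha/8) (3 pi + (74/3) rho_s/rho_t),
    with rho_s/rho_t = K_1(m/T)/K_2(m/T) for a Boltzmann gas."""
    ratio = kn(1, m / T) / kn(2, m / T)
    return ALPHA / 8.0 * (3.0 * np.pi + 74.0 / 3.0 * ratio)

for T in (20.0, 70.0, 140.0, 500.0, 2000.0):
    print(f"T = {T:6.0f} MeV :  a2/v0 = {a2_over_v0(T, M_PI):.3f}")
print("high-T limit 3 pi alpha / 8 =", round(3.0 * np.pi * ALPHA / 8.0, 3))
```

At low temperature the expression tends to the familiar nonrelativistic value \\(a_{2}\\to 4v_{\\rm o}\\), which provides a quick consistency check of Eq. (6). The question of correlated, more compact configurations at high density raised above is, however, not addressed by this coefficient.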
This subject is, of course, far beyond the present paper. Although we will touch this subject slightly while discussing the limit \\(\\mu/T\\gg 1\\) in Sect. 3, our primary task here will be to give a relativistic generalization of the VdW EOS, which at low pressures behaves in accordance with the relativistic virial expansion presented above, and at the same time is free of the causality paradox at high pressures. In our treatment, we will completely neglect the angular rotations of the Lorentz contracted spheres because their correct analysis can be done only within the framework of quantum scattering theory which is beyond the current scope. However, it is clear that the rotational effects can be safely neglected at low densities because there are not so many collisions in the system. At the same time the rotations of the Lorentz contracted spheres at very high pressures, which are of the principal interest, can be neglected too, because at so high densities the particles should be so close to each other, that they must prevent the rotations of neighboring particles. Thus, for these two limits we can safely ignore the rotational effects and proceed further on like for the usual VdW EOS. Eq. (2) is only one of many possible VdW extrapolations to high density. As in non-relativistic case, one can write many expressions which will give the first two terms of the full virial expansion exactly, and the difference will appear in the third virial coefficient. In relativistic case there is an additional ambiguity: it is possible to perform the momentum integration, first, and make the VdW extrapolation next, or vice versa. The result will, evidently, depend on the order of operation. As an example let us give a brief \"derivation\" of Eq. (2), and its counterpart in the grand canonical ensemble. The two first terms of the standard cluster expansion read as [17, 3] \\[p=T\\,\\rho_{t}(T)\\ e^{\\frac{\\mu}{T}}\\left(1-a_{2}\\,\\rho_{t}(T)\\,e^{\\frac{\\mu}{T }}\\right). \\tag{8}\\] Now we approximate the last term on the right hand side as \\(\\rho_{t}(T)\\,e^{\\frac{\\mu}{T}}\\approx\\frac{p}{T}\\). Then we extrapolate it to high pressures by moving this term into the exponential function as \\[p\\approx T\\,\\rho_{t}(T)\\ e^{\\frac{\\mu}{T}}\\left(1-a_{2}\\,\\frac{p}{T}\\right) \\approx T\\,\\rho_{t}(T)\\,\\exp\\left(\\frac{\\mu-a_{2}\\,p}{T}\\right). \\tag{9}\\] The resulting expression coincides with Eq. (2), but the above manipulations make it simple and transparent. Now we will repeat all the above steps while keeping both momentum integrations fixed \\[p \\approx \\frac{T\\,g^{2}\\,e^{\\frac{\\mu}{T}}}{\\rho_{t}(T)}\\int\\frac{d{\\bf k} _{1}}{(2\\pi)^{3}}\\frac{d{\\bf k}_{2}}{(2\\pi)^{3}}\\ e^{-\\frac{E(k_{1})+E(k_{2})}{T }}\\left(1-\\frac{v({\\bf k}_{1},{\\bf k}_{2})\\,p}{T}\\right) \\tag{10}\\] \\[\\approx \\frac{T\\,g^{2}}{\\rho_{t}(T)}\\int\\frac{d{\\bf k}_{1}}{(2\\pi)^{3}} \\frac{d{\\bf k}_{2}}{(2\\pi)^{3}}\\,e^{\\frac{\\mu-v({\\bf k}_{1},{\\bf k}_{2})\\,p-E( k_{1})-E(k_{2})}{T}}\\.\\] The last expression contains the relativistic excluded volume (4) explicitly and, as can be shown, is free of the causality paradox. This is so because at high pressures the main contribution to the momentum integrals corresponds to the smallest values of the excluded volume (4). It is clear that such values are reached when the both spheres are ultrarelativistic and their velocities are collinear. 
With the help of the following notations for the averages \\[\\langle{\\cal O}\\rangle \\equiv \\frac{g}{\\rho_{t}(T)}\\int\\frac{d{\\bf k}}{(2\\pi)^{3}}\\ {\\cal O}\\ e^{-\\frac{E(k)}{T}}\\,, \\tag{11}\\] \\[\\langle\\langle{\\cal O}\\rangle\\rangle \\equiv \\frac{g^{2}}{\\rho_{t}^{2}(T)}\\int\\frac{d{\\bf k}_{1}}{(2\\pi)^{3}} \\frac{d{\\bf k}_{2}}{(2\\pi)^{3}}\\ {\\cal O}\\ e^{-\\frac{v({\\bf k}_{1},{\\bf k}_{2})\\,p+E(k_{1})+E(k_{2})}{T}}\\,, \\tag{12}\\]we can define all other thermodynamic functions as \\[n(T,\\mu) = \\frac{\\partial p(T,\\mu)}{\\partial\\mu}=\\frac{p}{T\\left(1+e^{\\frac{\\mu }{T}}\\rho_{t}(T)\\langle\\langle v({\\bf k}_{1},{\\bf k}_{2})\\rangle\\rangle\\right)}\\,, \\tag{13}\\] \\[s(T,\\mu) = \\frac{\\partial p(T,\\mu)}{\\partial T}=\\frac{p}{T}+\\frac{1}{T} \\frac{\\left(2\\,e^{\\frac{\\mu}{T}}\\rho_{t}(T)\\langle\\langle E\\rangle\\rangle- \\left[\\mu+\\langle E\\rangle\\right]p\\,T^{-1}\\right)}{1+e^{\\frac{\\mu}{T}}\\rho_{t}( T)\\langle\\langle v({\\bf k}_{1},{\\bf k}_{2})\\rangle\\rangle}\\,,\\] (14) \\[\\varepsilon(T,\\mu) = T\\,s(T,\\mu)+\\mu\\,n(T,\\mu)-p(T,\\mu)=\\frac{2\\,e^{\\frac{\\mu}{T}} \\rho_{t}(T)\\langle\\langle E\\rangle\\rangle-\\left[\\mu+\\langle E\\rangle\\right]p \\,T^{-1}}{1+e^{\\frac{\\mu}{T}}\\rho_{t}(T)\\langle\\langle v({\\bf k}_{1},{\\bf k}_{ 2})\\rangle\\rangle}\\,. \\tag{15}\\] Here \\(n(T,\\mu)\\) is the particle density, while \\(s(T,\\mu)\\) and \\(\\varepsilon(T,\\mu)\\) denote the entropy and energy density, respectively. In the low pressure limit \\(4\\,p\\,v_{\\rm o}T^{-1}\\ll 1\\) the corresponding exponent in (12) can be expanded and the mean value of the relativistic excluded volume can be related to the second virial coefficient \\(a_{2}(T)\\) as follows \\[\\langle\\langle v({\\bf k}_{1},{\\bf k}_{2})\\rangle\\rangle\\approx a_{2}(T)\\;-\\; \\frac{p}{T}\\langle\\langle v^{2}({\\bf k}_{1},{\\bf k}_{2})\\rangle\\rangle\\,, \\tag{16}\\] which shows that at low pressures the average value of the relativistic excluded volume should match the second virial coefficient \\(a_{2}(T)\\), but should be smaller than \\(a_{2}(T)\\) at higher pressures and this behavior is clearly seen in Fig. 1. A comparison of the particle densities (5) and (13) shows that despite the different formulae for pressure the particle densities of these models have a very similar expression, but in (13) the second virial coefficient is replaced by the averaged value of the relativistic excluded volume \\(\\langle\\langle v({\\bf k}_{1},{\\bf k}_{2})\\rangle\\rangle\\). Such a complicated dependence of the particle density (13) on \\(T\\) and \\(\\mu\\) requires a nontrivial analysis for the limit of high pressures. To analyze the high pressure limit \\(p\\to\\infty\\) analytically we need an analytic expression for the excluded volume. For this purpose we will use the ultrarelativistic expression derived in the Appendix A: \\[v({\\bf k}_{1},{\\bf k}_{2})\\approx\\frac{v_{12}^{Urel}(R,R)}{2}\\equiv\\frac{v_{ \\rm o}}{2}\\left(\\frac{m}{E({\\bf k}_{1})}+\\frac{m}{E({\\bf k}_{2})}\\right) \\left(1+\\cos^{2}\\left(\\frac{\\Theta_{v}}{2}\\right)\\right)^{2}+\\frac{3\\,v_{\\rm o }}{2}\\sin\\left(\\Theta_{v}\\right)\\;. \\tag{17}\\] As usual, the total excluded volume \\(v_{12}^{Urel}(R,R)\\) is taken per particle. Eq. (17) is valid for \\(0\\leq\\Theta_{v}\\leq\\frac{\\pi}{2}\\); to use it for \\(\\frac{\\pi}{2}\\leq\\Theta_{v}\\leq\\pi\\) one has to make a replacement \\(\\Theta_{v}\\longrightarrow\\pi-\\Theta_{v}\\) in (17). 
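For completeness, a direct transcription of Eq. (17) is given below, including the replacement \\(\\Theta_{v}\\to\\pi-\\Theta_{v}\\) for \\(\\pi/2\\leq\\Theta_{v}\\leq\\pi\\); the small example at the end evaluates the approximate excluded volume per particle for two strongly contracted spheres with collinear and with perpendicular momenta. The hard-core radius and the energies used in the example are illustrative choices only.

```python
import numpy as np

def v_excl(E1, E2, theta_v, m, R0):
    """Approximate relativistic excluded volume per particle, Eq. (17), in fm^3.
    E1, E2, m in MeV; theta_v is the angle between the two momenta; R0 in fm."""
    v0 = 4.0 * np.pi * R0**3 / 3.0          # eigenvolume of a single sphere
    if theta_v > np.pi / 2.0:               # Eq. (17) is written for 0 <= theta_v <= pi/2
        theta_v = np.pi - theta_v
    contr = m / E1 + m / E2                 # Lorentz-contraction factor m/E_1 + m/E_2
    return (0.5 * v0 * contr * (1.0 + np.cos(theta_v / 2.0) ** 2) ** 2
            + 1.5 * v0 * np.sin(theta_v))

m, R0 = 938.0, 0.5                          # illustrative mass [MeV] and radius [fm]
v0 = 4.0 * np.pi * R0**3 / 3.0
for th in (0.0, np.pi / 2.0, np.pi):
    print(f"theta_v = {th:4.2f} rad :  v/v0 = {v_excl(10.0 * m, 10.0 * m, th, m, R0) / v0:.3f}")
```

For two ultrarelativistic spheres the collinear configurations (\\(\\Theta_{v}\\) near 0 or \\(\\pi\\)) give the smallest excluded volume, while perpendicular momenta leave a finite contribution of order \\(v_{\\rm o}\\); this is the behavior exploited in the high pressure limit below.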
Here the coordinate system is chosen in such a way that the angle \\(\\Theta_{v}\\) between the 3-vectors of particles' momenta \\({\\bf k}_{1}\\) and \\({\\bf k}_{2}\\) coincides with the usual spherical angle \\(\\Theta\\) of spherical coordinates (see Appendix A). To be specific, the OZ-axis of the momentum space coordinates of the second particle is chosen to coincide with the 3-vector of the momentum \\({\\bf k}_{1}\\) of the first particle. The Lorentz frame is chosen to be the rest frame of the whole system because otherwise the expression for pressure becomes cumbersome. Here \\(v_{\\rm o}\\) stands for the eigen volume of particles which, for simplicity, are assumed to have the same hard core radius and the same mass. Despite the fact that this equation was obtained in the ultrarelativistic limit, it is accurate to within a few per cent in the whole range of parameters (see Fig. 1 and Appendix A for the details), and, in addition, it is sufficiently simple to allow the analytical treatment. **3. High Pressure Limit** As seen from the expression for the relativistic excluded volume (17), for very high pressures only the smallest values of the relativistic excluded volume will give a non-vanishing contribution to the angular integrals of thermodynamic functions. This means that only \\(\\Theta_{v}\\)-values around \\(0\\) and around \\(\\pi\\) will contribute to the thermodynamic functions (see Fig. 2). Using the variable \\(x=\\sin^{2}\\left(\\Theta_{v}/2\\right)\\), one can rewrite the \\({\\bf k}_{2}\\) angular integration as follows \\[I_{\\Theta}(k_{1})=\\int\\!\\!\\frac{d{\\bf k}_{2}}{(2\\pi)^{3}}e^{-\\frac{v({\\bf k}_{1},{\\bf k}_{2})p}{T}}\\approx 4\\int\\!\\!\\frac{d\\,k_{2}k_{2}^{2}}{(2\\pi)^{2}}\\int_{0}^{0.5}\\!\\!d\\,x\\ e^{-\\left(AC\\left(1-\\frac{x}{2}\\right)^{2}+B\\sqrt{x(1-x)}\\right)}\\, \\tag{18}\\] \\[{\\rm with}\\ \\ \\ \\ A=2v_{o}\\frac{p}{T}\\,;\\ \\ B=\\frac{3}{2}A\\,;\\ \\ C=\\left(\\frac{m}{E(k_{1})}+\\frac{m}{E(k_{2})}\\right)\\, \\tag{19}\\] where we have accounted for the fact that the integration over the azimuthal angle gives a factor \\(2\\pi\\) and that one should double the integral value in order to integrate over a half of the \\(\\Theta_{v}\\) range. Since \\(C\\leq 2\\) in (19) is a decreasing function of the momenta, in the limit \\(A\\gg 1\\) one can keep only the \\(\\sqrt{x}\\) dependence in the exponential in (18) because it is the leading one. Then integrating by parts one obtains \\[I_{\\Theta}(k_{1})\\approx 4\\int\\!\\!\\frac{d\\,k_{2}k_{2}^{2}}{(2\\pi)^{2}}\\ e^{-AC}\\int_{0}^{0.5}\\!\\!d\\,x\\ e^{-B\\sqrt{x}}\\approx 8\\int\\!\\!\\frac{d\\,k_{2}k_{2}^{2}}{(2\\pi)^{2}}\\ e^{-AC}\\frac{1}{B^{2}}. \\tag{20}\\] Applying the above result to the pressure (10), in the limit under consideration one finds that the momentum integrals are decoupled and one gets the following equation for pressure \\[p(T,\\mu)\\approx\\frac{16\\,T^{3}e^{\\frac{\\mu}{T}}}{9\\,v_{\\circ}^{2}\\,p^{2}\\,\\rho_{t}(T)}\\left[g\\int\\!\\!\\frac{d\\,kk^{2}}{(2\\pi)^{2}}\\ e^{-\\frac{E(k)}{T}-\\frac{2\\,v_{\\circ}\\,m\\,p}{T\\,E(k)}}\\right]^{2}. \\tag{21}\\] Now it is clearly seen that at high pressures the momentum distribution function in (21) may essentially differ from the Boltzmann one. 
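The reduction from (18) to (20) relies on \\(\\int_{0}^{0.5}dx\\,e^{-B\\sqrt{x}}\\approx\\int_{0}^{\\infty}dx\\,e^{-B\\sqrt{x}}=2/B^{2}\\) for \\(B\\gg 1\\), after the subleading \\(x\\)-dependence multiplying \\(AC\\) has been dropped. A quick numerical check of this approximation is sketched below; the values of \\(B\\) are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

def x_integral(B):
    """Left-hand side of the approximation used between Eqs. (18) and (20)."""
    val, _ = quad(lambda x: np.exp(-B * np.sqrt(x)), 0.0, 0.5)
    return val

for B in (5.0, 20.0, 100.0):
    print(f"B = {B:6.1f} :  integral = {x_integral(B):.3e}   2/B^2 = {2.0 / B**2:.3e}")
```

Since \\(B=3v_{\\rm o}p/T\\), the approximation becomes accurate precisely in the high pressure regime considered here, which is what Eq. (21) requires; as noted above, the resulting momentum distribution then deviates from a simple Boltzmann form.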
To demonstrate this we can calculate an effective temperature by differentiating the exponential under the integral in (21) with respect to particle's energy \\(E\\): \\[T_{eff}(E)\\ =\\ \\ -\\left[\\frac{\\partial}{\\partial E}\\left(-\\frac{E}{T}-\\frac{2 \\,v_{\\circ}\\,m}{TE}\\,p\\right)\\right]^{-1}\\ =\\ \\frac{T}{1\\ -\\ \\frac{2\\,v_{\\circ}\\,m}{E^{2}}p}. \\tag{22}\\] Eq. (22) shows that the effective temperature \\(T_{eff}(E\\rightarrow\\infty)=T\\) may be essentially lower than that one at \\(E=m\\). In fact, at very high pressures the effective temperature \\(T_{eff}(m)\\) may become negative. **Fig. 2.** Comparison of the relativistic excluded volumes for highly contracted spheres. In the left panel the long dashed curve corresponds to \\(\\frac{E(k_{1})}{m}=2\\) and \\(\\frac{E(k_{2})}{m}=10\\) whereas the short dashed curve is found for \\(\\frac{E(k_{1})}{m}=5\\) and \\(\\frac{E(k_{2})}{m}=10\\). The corresponding values in the right panel are \\(\\frac{E(k_{1})}{m}=10\\), \\(\\frac{E(k_{2})}{m}=10\\) (long dashed curve) and \\(\\frac{E(k_{1})}{m}=10\\), \\(\\frac{E(k_{2})}{m}=100\\) (short dashed curve). It shows that the excluded volume for \\(\\Theta_{v}\\) close to \\(\\frac{\\pi}{2}\\) is finite always, while for the collinear velocities the excluded volume approaches zero, if both spheres are ultrarelativistic. A sizable difference between \\(T_{eff}\\) values at high and low particle energies mimics the collective motion of particles since a similar behavior is typical for the transverse energy spectra of particles having the collective transverse velocity which monotonically grows with the transverse radius [18, 19]. However, in contrast to the true collective motion case [18, 19], the low energy \\(T_{eff}\\) (22) gets higher for smaller masses of particles. Perhaps, such a different behavior of low energy effective temperatures can be helpful for distinguishing the high pressure case from the collective motion of particles. Our next step is to perform the gaussian integration in Eq. (21). Analyzing the function \\[F\\equiv 2\\ln k-\\frac{E(k)}{T}-A\\frac{m}{E(k)} \\tag{23}\\] for \\(A\\gg 1\\), one can safely use the ultrarelativistic approximation for particle momenta \\(k\\approx E(k)\\to\\infty\\). Then it is easy to see that the function F in (23) has an extremum at \\[\\frac{\\partial F}{\\partial E}=\\frac{2}{E}-\\frac{1}{T}+A\\frac{m}{E^{2}}=0\\quad \\Rightarrow\\quad E=E^{*}\\approx\\frac{A\\,m}{\\sqrt{1+\\frac{A\\,m}{T}}-1}\\equiv T \\left(\\sqrt{1+\\frac{A\\,m}{T}}+1\\right)\\,, \\tag{24}\\] which turns out to be a maximum, since the second derivative of F (23) is negative \\[\\frac{\\partial^{2}F}{\\partial E^{2}}\\Big{|}_{E=E^{*}}\\approx-\\frac{2}{(E^{*}) ^{2}}-2\\,A\\frac{m}{(E^{*})^{3}}<0\\,. \\tag{25}\\] There are two independent ways to increase pressure: one can increase the value of chemical potential while keeping temperature fixed and vice versa. We will consider the high chemical potential limit \\(\\mu/T\\gg 1\\) for finite \\(T\\) first, since this case is rather unusual. In this limit the above expressions can be simplified further on \\[E^{*}\\approx\\sqrt{2\\,m\\,v_{\\rm o}\\,p}\\,,\\quad\\Rightarrow\\quad\\frac{\\partial^{ 2}F}{\\partial E^{2}}\\Big{|}_{E=E^{*}}\\approx-\\frac{2}{T\\,\\sqrt{2\\,m\\,v_{\\rm o }\\,p}}\\,. \\tag{26}\\] Here in the last step we explicitly substituted the expression for \\(A\\). 
Performing the gaussian integration for momenta in (21), one arrives at \\[\\int\\!\\!\\frac{d\\,kk^{2}}{(2\\pi)^{2}}\\,\\,e^{-\\frac{E(k)}{T}-\\frac{2\\,v_{\\rm o} \\,m}{TE(k)}\\,p}\\approx\\frac{(E^{*})^{2}}{(2\\pi)^{2}}\\,\\sqrt{\\pi\\,T\\,E^{*}}\\,e ^{-\\frac{2\\,E^{*}}{T}}\\,, \\tag{27}\\] which leads to the following equation for the most probable energy of particle \\(E^{*}\\) \\[E^{*}\\approx D\\,T^{4}\\,\\,e^{\\frac{\\mu-4E^{*}}{T}}\\,,\\quad D\\equiv\\frac{8\\,g^{ 2}m^{3}v_{\\rm o}}{9\\,\\pi^{3}\\rho_{t}(T)}\\,. \\tag{28}\\] As one can see, Eq. (28) defines pressure of the system. Close inspection shows that the high pressure limit can be achieved, if the exponential in (28) diverges much slower than \\(\\mu/T\\). The latter defines the EOS in the leading order as \\[E^{*}\\approx\\frac{\\mu}{4}\\,,\\quad\\Rightarrow\\quad p\\approx\\frac{\\mu^{2}}{32\\, m\\,v_{\\rm o}}\\,. \\tag{29}\\]The left hand side equation above demonstrates that in the \\(\\mu/T\\gg 1\\) limit the natural energy scale is given by a chemical potential. This is a new and important feature of the relativistic VdW EOS compared to the previous findings. The right hand side Eq. (29) allows one to find all other thermodynamic functions in this limit from thermodynamic identities: \\[s\\approx 0\\,,\\quad n\\approx\\frac{2\\,p}{\\mu}\\,,\\quad\\varepsilon\\equiv Ts+\\mu n-p \\approx p\\,. \\tag{30}\\] Thus, we showed that for \\(\\mu/T\\gg 1\\) and finite \\(T\\) the speed of sound \\(c_{s}\\) in the leading order does not exceed the speed of light since \\[c_{s}^{2}=\\frac{\\partial p}{\\partial\\varepsilon}\\bigg{|}_{s/n}=\\frac{d\\,p}{d \\,\\varepsilon}=1\\,. \\tag{31}\\] From Eq. (28) it can be shown that the last result holds in all orders. It is interesting that the left hand side Eq. (26) has a simple kinetic interpretation. Indeed, recalling that the pressure is the change of momentum during the collision time one can write (24) as follows (with \\(E^{*}=k^{*}\\)) \\[p=\\frac{(k^{*})^{2}}{2\\,m\\,v_{\\rm o}}=\\frac{2\\,k^{*}}{\\pi R_{\\rm o}^{2}}\\cdot \\frac{3\\,v^{*}\\gamma^{*}}{8\\,R_{\\rm o}}\\cdot\\frac{1}{2}\\,. \\tag{32}\\] In the last result the change of momentum during the collision with the wall is \\(2\\,k^{*}\\), which takes the time \\(\\frac{8\\,R_{\\rm o}}{3\\,v^{*}\\gamma^{*}}\\). The latter is twice of the Lorentz contracted height (\\(4/3R_{\\rm o}\\)) of the cylinder of the base \\(\\pi R_{\\rm o}^{2}\\) which is passed with the speed \\(v^{*}\\). Here the particle velocity \\(v^{*}\\) and the corresponding gamma-factor \\(\\gamma^{*}\\) are defined as \\(v^{*}\\gamma^{*}=k^{*}/m\\). The rightmost factor \\(1/2\\) in (32) accounts for the fact that only a half of particles moving perpendicular to the wall has the momentum \\(-k^{*}\\). Thus, Eq. (32) shows that in the limit under consideration the pressure is generated by the particle momenta which are perpendicular to the wall. This, of course, does not mean that all particles in the system have the momenta which are perpendicular to a single wall. No, this means that in those places near the wall where the particles' momenta are not perpendicular (but are parallel) to it, the change of momentum \\(2k^{*}\\) is transferred to the wall by the particles located in the inner regions of the system whose momenta are perpendicular to the wall. 
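Equation (28) is a transcendental relation of the Lambert-W type: writing \\(y=4E^{*}/T\\) it becomes \\(y\\,e^{y}=4DT^{3}e^{\\mu/T}\\), i.e. \\(E^{*}=\\frac{T}{4}W\\left(4DT^{3}e^{\\mu/T}\\right)\\). The sketch below uses this to check numerically that \\(E^{*}/(\\mu/4)\\) and \\(p/[\\mu^{2}/(32\\,m\\,v_{\\rm o})]\\), with \\(p=(E^{*})^{2}/(2mv_{\\rm o})\\) from Eq. (26), slowly approach unity as \\(\\mu/T\\) grows, in line with Eq. (29). The particle mass, degeneracy, hard-core radius and temperature are illustrative assumptions.

```python
import numpy as np
from scipy.special import kn, lambertw

HBARC = 197.327                               # MeV fm
m, g, R0 = 938.0, 4.0, 0.5                    # illustrative mass [MeV], degeneracy, radius [fm]
v0 = 4.0 * np.pi * R0**3 / (3.0 * HBARC**3)   # eigenvolume in natural units [MeV^-3]

def rho_t(T):
    """Boltzmann thermal density in natural units [MeV^3]."""
    return g * m * m * T * kn(2, m / T) / (2.0 * np.pi**2)

def e_star(T, mu):
    """Most probable particle energy from Eq. (28), via the Lambert W function."""
    D = 8.0 * g**2 * m**3 * v0 / (9.0 * np.pi**3 * rho_t(T))
    y = lambertw(4.0 * D * T**3 * np.exp(mu / T)).real
    return 0.25 * T * y

T = 100.0
for mu in (2000.0, 5000.0, 20000.0):
    E = e_star(T, mu)
    p = E**2 / (2.0 * m * v0)                 # Eq. (26):  E* = sqrt(2 m v_o p)
    print(f"mu/T = {mu / T:5.0f} :  E*/(mu/4) = {E / (mu / 4.0):.3f}   "
          f"p/(mu^2/(32 m v0)) = {p / (mu**2 / (32.0 * m * v0)):.3f}")
```

The slow, logarithmic approach to the limit reflects the subleading terms dropped in Eq. (29); the leading behavior \\(E^{*}\\approx\\mu/4\\) is nevertheless clearly visible already for \\(\\mu/T\\) of order one hundred. In this regime the pressure is generated, as argued above, by particles moving essentially perpendicular to the wall.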
Also it is easy to deduce that such a situation is possible, if the system is divided into the rectangular cells or boxes inside which the particles are moving along the height of the box and their momenta are collinear, but they are perpendicular to the particles' momenta in all surrounding cells. Note that appearing of particles' cells is a typical feature of the treatment of high density limit [20] and can be related to a complicated phase structure of nuclear matter at very low temperatures [21]. Of course, inside of such a box each Lorentz contracted sphere would generate an excluded volume which is equal to a volume of a cylinder of height \\(\\frac{2\\,R_{\\rm o}}{\\gamma^{*}}\\) and base \\(\\pi R_{\\rm o}^{2}\\). This cylinder, of course, differs from the cylinder involved in Eq. (32), but we note that exactly the hight \\(\\frac{4R_{\\rm o}}{3\\,\\gamma^{*}}\\) is used in the derivation of the ultrarelativistic limit for the relativistic excluded volume (50) (see Appendix A for details). Thus, it is very interesting that in contrast to nonrelativistic case the relativistic excluded volume \\(\\frac{4\\pi R_{\\rm o}^{2}}{3\\,\\gamma^{*}}\\) which enters into Eq. (32) is only 33 % smaller than the excluded volume \\(\\frac{2\\pi R_{0}^{3}}{\\gamma^{*}}\\) of ultrarelativistic particle at high pressures. Also it is remarkable that the low density EOS extrapolated to very high values of the chemical potential, at which it is not supposed to be valid at all, gives a reasonable estimate for the pressure at high densities. Another interesting conclusion that follows from this limit is that for the relativistic VdW systems existing in the nonrectangular volumes the relativistic analog of the dense packing may be unstable. The analysis of the limit \\(T/\\mu\\gg 1\\) and finite \\(\\mu\\) also starts from Eqs. (21)-(24). The function F from (23) again has the maximum at \\(E^{*}\\equiv E(k^{*})=k^{*}\\) defined by the right hand side Eq. (24). Now the second derivative of function \\(F\\) becomes \\[\\frac{\\partial^{2}F}{\\partial E^{2}}\\bigg{|}_{E=E^{*}}\\approx-\\frac{2}{(E^{*} )^{2}}-2\\,A\\frac{m}{(E^{*})^{3}}=-\\frac{2\\,\\sqrt{1+\\frac{A\\,m}{T}}}{(E^{*})^{ 2}}\\,. \\tag{33}\\] This result allows one to perform the gaussian integration for momenta in (21) for this limit and get \\[\\int\\!\\!\\frac{d\\,kk^{2}}{(2\\pi)^{2}}\\ e^{-\\frac{E(k)}{T}-\\frac{2\\,v_{\\rm o}\\,m }{TE(k)}\\,p}\\approx\\frac{(E^{*})^{3}\\,e^{-2\\,\\left(1+\\frac{A\\,m}{T}\\right)^{ \\frac{1}{2}}}}{(2\\pi)^{2}\\,\\left(1+\\frac{A\\,m}{T}\\right)^{\\frac{1}{4}}}\\,I_{ \\xi}\\left(1+\\frac{A\\,m}{T}\\right)\\,, \\tag{34}\\] where the auxiliary integral \\(I_{\\xi}\\) is defined as follows \\[I_{\\xi}(x)\\equiv\\int\\limits_{-x^{\\frac{1}{4}}}^{+\\infty}d\\xi\\ e^{-\\xi^{2}}\\,. \\tag{35}\\] The expression (34) can be also used to find the thermal density \\(\\rho_{t}(T)\\) in the limit \\(T\\to\\infty\\) by the substitution \\(A=0\\). Using (34), one can rewrite the equation for pressure (21) as the equation for the unknown variable \\(z\\equiv A\\,m/T\\equiv\\frac{2\\,v_{\\rm o}\\,m\\,p}{T^{2}}\\) \\[z^{3}\\approx\\,e^{\\frac{\\mu}{T}}\\phi(z)\\,,\\quad\\phi(z)\\equiv\\frac{2\\,g\\,v_{\\rm o }\\,m^{3}\\,I_{\\xi}^{2}(1+z)\\left(1+(1+z)^{\\frac{1}{2}}\\right)^{6}}{\\left(3\\, \\pi\\,e^{2\\cdot\\sqrt{1+z}-1}\\right)^{2}I_{\\xi}(1)(1+z)^{\\frac{1}{2}}}\\,. \\tag{36}\\] Before continuing our analysis further on, it is necessary to make two comments concerning Eq. (36). First, rewriting the left hand side Eq. 
(36) in terms of pressure, one can see that the value of chemical potential is formally reduced exactly in three times. In other words, it looks like that in the limit of high temperature and finite \\(\\mu\\) the pressure of the relativistic VdW gas is created by the particles with the charge being equal to the one third of their original charge. Second, due to the nonmonotonic dependence of \\(\\phi(z)\\) in the right hand side Eq. (36) it is possible that the left hand side Eq. (36) can have several solutions for some values of parameters. Leaving aside the discussion of this possibility, we will further consider only such a solution of (36) which corresponds to the largest value of the pressure (21). Since the function \\(\\phi(z)\\) does not have any explicit dependence on \\(T\\) or \\(\\mu\\), one can establish a very convenient relation \\[\\frac{\\partial z}{\\partial T}=-\\frac{\\mu}{T}\\,\\frac{\\partial z}{\\partial\\mu} \\tag{37}\\]between the partial derivatives of \\(z\\) given by the left hand side Eq. (36). Using (37), one can calculate all the thermodynamic functions from the pressure \\(p=\\beta\\,T^{2}z\\) (with \\(\\beta\\equiv(3\\,m\\,v_{\\rm o})^{-1}\\)) as follows: \\[n \\approx \\beta\\,T^{2}\\,\\frac{\\partial z}{\\partial\\mu}\\,, \\tag{38}\\] \\[s \\approx \\beta\\,\\left[2\\,T\\,z+T^{2}\\frac{\\partial z}{\\partial T}\\right]= \\frac{2\\,p-\\mu n}{T}\\,,\\] (39) \\[\\varepsilon \\equiv Ts+\\mu n-p\\approx p\\,. \\tag{40}\\] The last result leads to the causality condition (31) for the limit \\(T/\\mu\\gg 1\\) and finite \\(\\mu\\). In fact, the above result can be extended to any \\(\\mu>-\\infty\\) and any value of \\(T\\) satisfying the inequality \\[E^{*}\\approx T\\left(\\sqrt{1+z}+1\\right)\\gg m\\,, \\tag{41}\\] which is sufficient to derive Eq. (36). To show this, it is sufficient to see that for \\(z=0\\) there holds the inequality \\(z^{3}<e^{\\frac{\\mu}{T}}\\phi(z)\\), which changes to the opposite inequality \\(z^{3}>e^{\\frac{\\mu}{T}}\\phi(z)\\) for \\(z=\\infty\\). Consequently, for any value of \\(\\mu\\) and \\(T\\) satisfying (41) the left hand side Eq. (36) has at least one solution \\(z^{*}>0\\) for which one can establish Eqs. (37)-(40) and prove the validity of the causality condition (31). The model (10) along with the analysis of high pressure limit can be straightforwardly generalized to include several particle species. For the pressure \\(p(T,\\{\\mu_{i}\\})\\) of the mixture of \\(N\\)-species with masses \\(m_{i}\\) (\\(i=\\{1,2,..,N\\}\\)), degeneracy \\(g_{i}\\), hard core radius \\(R_{i}\\) and chemical potentials \\(\\mu_{i}\\) is defined as a solution of the following equation \\[p(T,\\{\\mu_{i}\\})=\\int\\frac{d^{3}{\\bf k}_{1}}{(2\\pi)^{3}}\\frac{d^{3}{\\bf k}_{2 }}{(2\\pi)^{3}}\\sum_{i,j=1}^{N}\\frac{T\\,g_{i}\\,g_{j}}{\\rho_{tot}(T,\\{\\mu_{l}\\}) }\\,e^{\\frac{\\mu_{i}+\\mu_{j}-v_{ij}({\\bf k}_{1},{\\bf k}_{2})\\,p\\,-\\,E_{i}(k_{1 })\\,-\\,E_{j}(k_{2})}{T}}\\,, \\tag{42}\\] where the relativistic excluded volume per particle of species \\(i\\) (with the momentum \\({\\bf k}_{1}\\)) and \\(j\\) (with the momentum \\({\\bf k}_{2}\\)) is denoted as \\(v_{ij}({\\bf k}_{1},{\\bf k}_{2})\\), \\(E_{i}(k_{1})\\equiv\\sqrt{k_{1}^{2}+m_{i}^{2}}\\) and \\(E_{j}(k_{2})\\equiv\\sqrt{k_{2}^{2}+m_{j}^{2}}\\) are the corresponding energies, and the total thermal density is given by the expression \\[\\rho_{tot}(T,\\{\\mu_{i}\\})=\\int\\frac{d^{3}{\\bf k}}{(2\\pi)^{3}}\\sum_{i=1}^{N}\\, \\,g_{i}\\,e^{\\frac{\\mu_{i}-E_{i}(k)}{T}}\\,. 
\\tag{43}\\] The excluded volume \\(v_{ij}({\\bf k}_{1},{\\bf k}_{2})\\) can be accurately approximated by \\(\\alpha\\,v_{12}^{Urel}(R_{i},R_{j})/2\\) defined by Eqs. (50) and (51). The multicomponent generalization (42) is obtained in the same sequence of steps as the one-component expression (10). The only difference is in the definition of the total thermal density (43) which now includes the chemical potentials. Note also that the expression (42) by construction recovers the virial expansion up to the second order at low particle densities, but it cannot be reduced to any of two extrapolations which are suggested in [22] and [23] for the multicomponent mixtures and carefully analyzed in Ref. [4]. Thus, the expression (42) removes the non-uniqueness of the VdW extrapolations to high densities, if one requires a causal behavior in this limit. **4. Concluding Remarks** In this work we proposed a relativistic analog of the VdW EOS which reproduces the virial expansion for the gas of the Lorentz contracted rigid spheres at low particle densities and is causal at high densities. As one can see from the expression for particle density (13) and from the corresponding relation for effective temperature (22) the one-particle momentum distribution function has a more complicated energy dependence than the usual Boltzmann distribution function, which would be interesting to check experimentally. Such a task involves considerable technical difficulties since the particle spectra measured in high energy nuclear collisions involve a strong collective flow which can easily hide or smear the additional energy dependence. However, it is possible that such a complicated energy dependence of the momentum spectra and excluded volumes of lightest hardons, i.e. pions and kaons, can be verified for highly accurate measurements, if the collective flow is correctly taken into account. The latter adjustment is tremendously complex because it is related to the freeze-out problem in relativistic hydrodynamics [24] or hydro-cascade approach [25]. Another possibility to study the effect of Lorentz contraction on the EOS properties is to incorporate them into transport models. The first steps in this direction have been made already in [26], but the approximation used in [26] is too crude. It might be more realistic to incorporate the developed approach into effective models of nuclear/hadronic matter [6, 7, 8, 9] and check the obtained EOS on a huge amount of data collected by the nuclear physics of intermediate energies. Since the suggested relativization of the VdW EOS makes it softer at high densities, one can hope to improve the description of the nuclear/hadronic matter properties (compressibility constant, elliptic flow, effective nucleon masses e.t.c.) at low temperatures and high baryonic densities [27]. Also it is possible that the momentum spectra of this type can help to extend the hydrodynamic description into the region of large transversal momenta of hadrons (\\(p_{T}>1.5-2\\) GeV) which are usually thought to be too large to follow the hydrodynamic regime [28]. Another possibility to validate the suggested model is to study angular correlations of the hard core particles emitted from the neighboring regions and/or the enhancement of the particle yield of those hadrons occurring due to coalescence of the constituents with the short range repulsion. As shown above (also see Fig. 
2), the present model predicts that the probability to find the neighboring particles with collinear velocities is higher than the one with non-collinear velocities. Due to this reason, the coalescence of particles with the parallel velocities should be enhanced. This effect amplifies if pressure is high and if particles are relativistic in the local rest frame. Therefore, it would be interesting to study the coalescence of any relativistic constituents with hard core repulsion (quarks or hadrons) at high pressures in a spirit of the recombination model of Ref. [29] and extend its results to lower transversal momenta of light hadrons. Perhaps, the inclusion of such an effect into consideration may essentially improve not only our understanding of the quark coalescence process, but also the formation of deuterons and other nuclear fragments in relativistic nuclear collisions. This subject is, however, outside the scope of the present work. As a typical VdW EOS, the present model should be valid for the low particle densities. Moreover, our analysis of the limit \\(\\mu/T\\gg 1\\) for fixed \\(T\\) leads to a surprisingly clear kinetic expression for the system's pressure (32). Therefore, it is possible that this low density result may provide a correct hint to study the relativistic analog of the dense packing problem. Thus, it would be interesting to verify, whether the above approach remains valid for relativistic quantum treatment because there are several unsolved problems for the systems of relativistic bosons and/or fermions which, on one hand, are related to the problems discussed here and, on the other hand, may potentially be important for relativistic nuclear collisions and for nuclear astrophysics. **Acknowledgments.** The author thanks D. H. Rischke for the fruitful and stimulating discussions, and A. L. Blokhin for the important comments on the obtained results. The research made in this work was supported in part by the Program \"Fundamental Properties of Physical Systems under Extreme Conditions\" of the Bureau of the Section of Physics and Astronomy of the National Academy of Science of Ukraine. The partial support by the Alexander von Humboldt Foundation is greatly acknowledged. **Appendix A: Relativistic Excluded Volume** In order to study the high pressure limit, it is necessary to estimate the excluded volume of two ellipsoids, obtained by the Lorentz contraction of the spheres. In general, this is quite an involved problem. Fortunately, our analysis requires only the ultrarelativistic limit when the mean energy per particle is high compared to the mass of the particle. The problem can be simplified further since it is sufficient to find an analytical expression for the relativistic excluded volume with the collinear particle velocities because the configurations with the noncollinear velocities have larger excluded volume and, hence, are suppressed. Therefore, one can safely consider the excluded volume produced by two contracted cylinders (disks) having the same proper volumes as the ellipsoids. For this purpose the cylinder's height in the local rest frame is fixed to be \\(\\frac{4}{3}\\) of a sphere radius. Let us introduce the different radii \\(R_{1}\\) and \\(R_{2}\\) for the cylinders, and consider for the moment a zero height for the second cylinder \\(h_{2}=0\\) and non-zero height \\(h_{1}\\) for the first one. 
Suppose that the center of the coordinate system coincides with the geometrical center of the first cylinder and the \\(OZ\\)-axis is perpendicular to the cylinder's base. Then the angle \\(\\Theta_{v}\\) between the particle velocities is also the angle between the bases of the two cylinders. To simplify the expression for the pressure, the Lorentz frame is chosen to be the rest frame of the whole system. In order to estimate the excluded volume we fix the particle velocities and transfer the second cylinder around the first cylinder while keeping the angle \\(\\Theta_{v}\\) fixed. The desired excluded volume is obtained as the volume occupied by the center of the second cylinder under these transformations. Considering the projection on the \\(XOY\\) plane (see Fig. 3.a), one should transfer the ellipse with the semiaxes \\(R_{x}=R_{2}\\cos\\left(\\Theta_{v}\\right)\\) and \\(R_{y}=R_{2}\\) around the circle of radius \\(R_{1}\\). We approximate it by the circle of the averaged radius \\(\\langle R_{XOY}\\rangle=R_{1}+(R_{x}+R_{y})/2=R_{1}+R_{2}(1+\\cos\\left(\\Theta_{v}\\right))/2\\). Then the first contribution to the excluded volume is the volume of the cylinder of the radius \\(\\langle R_{XOY}\\rangle\\) and the height \\(h_{1}=CC_{1}\\) of the cylinder \\(OABC\\) in Figs. 3.a and 3.b, i.e., \\[v_{I}(h_{1})=\\pi\\left(R_{1}+\\frac{R_{2}(1+\\cos\\left(\\Theta_{v}\\right))}{2}\\right)^{2}h_{1}. \\tag{44}\\] Projecting the picture onto the \\(XOZ\\) plane as it is shown in Fig. 3.b, one finds that the translations of a zero width disk over the upper and lower bases of the first cylinder (the distance between the center of the disk and the base \\(CA\\) is, evidently, \\(CD_{1}=R_{2}|\\sin\\left(\\Theta_{v}\\right)|\\)) generate the second contribution to the excluded volume \\[v_{II}(h_{1})=\\pi R_{1}^{2}\\,2\\,R_{2}|\\sin\\left(\\Theta_{v}\\right)|. \\tag{45}\\] The third contribution follows from the translation of the disk from the cylinder's base to the cylinder's side as it is shown for the \\(YOZ\\) plane in Fig. 3.c. The area \\(BB_{1}F\\) is the part of the ellipse segment whose magnitude depends on the x coordinate. However, one can approximate it as a quarter of the disk area projected onto the \\(YOZ\\) plane and get the simple answer \\(\\pi R_{2}^{2}|\\sin\\left(\\Theta_{v}\\right)|/4\\). Since there are four such transformations, and they apply for all x coordinates of the first cylinder (the length is \\(2\\,R_{1}\\)), the third contribution is \\[v_{III}(h_{1})=\\pi R_{2}^{2}\\,2\\,R_{1}|\\sin\\left(\\Theta_{v}\\right)|. \\tag{46}\\] Collecting all the contributions, one obtains an estimate for the excluded volume of a cylinder and a disk \\[v_{2c}(h_{1})=\\pi\\left(R_{1}+R_{2}\\cos^{2}\\left(\\frac{\\Theta_{v}}{2}\\right)\\right)^{2}h_{1}+2\\,\\pi R_{1}R_{2}(R_{1}+R_{2})|\\sin\\left(\\Theta_{v}\\right)|\\,. \\tag{47}\\]

**Fig. 3.** Relativistic excluded volume derivation for relativistic cylinder \\(OABC\\) and ultrarelativistic cylinder (disk) \\(DC\\) with radii \\(R_{1}\\) and \\(R_{2}\\), respectively. \\(\\Theta\\) is the angle between their velocities. Pictures a - c show the projections onto different planes. The transfer of the cylinder \\(DC\\) around the side of the cylinder \\(OABC\\) is depicted in Fig. 3.a. The solid curve \\(DEF\\) corresponds to the exact result, whereas the dashed curve corresponds to the average radius approximation \\(\\langle R_{XOY}\\rangle=OA+(DC+BF)/2=R_{1}+R_{2}(1+\\cos{(\\Theta)})/2\\).
The transfer of the cylinder \\(DC=DC_{1}=AD_{2}\\) along the upper base of the cylinder \\(OABC=ACC_{1}A_{1}\\) is shown in panel b. Its contribution to the excluded volume is a volume of the cylinder with the base \\(AC=2R_{1}\\) and the height \\(CD_{1}\\sin{(\\Theta)}=R_{2}\\sin{(\\Theta)}\\). A similar contribution corresponds to the disk transfer along the lower base of the cylinder \\(A_{1}C_{1}\\). The third contribution to the relativistic excluded volume arises from the transformation of the cylinder \\(DC=BB_{1}=FB_{1}\\) from the upper base of the cylinder \\(OABC=AB_{1}O\\) to its side, and it is schematically shown in Fig. 3.c. The area \\(BB_{1}F\\approx\\pi/4R_{2}^{2}\\sin{(\\Theta)}\\) is approximated as the one quarter of the area of the ellipse \\(BB_{1}\\). The corresponding \\(\\gamma_{q}\\)-factors (\\(\\gamma_{q}\\equiv E({\\bf k}_{q})/m_{q}\\), \\(q=\\{1,2\\}\\)) are defined in the local rest frame of the whole system for particles of mass \\(m_{q}\\). The last result is valid for \\(0\\leq\\Theta_{v}\\leq\\frac{\\pi}{2}\\), to use it for \\(\\frac{\\pi}{2}\\leq\\Theta_{v}\\leq\\pi\\) one has to replace \\(\\Theta_{v}\\longrightarrow\\pi-\\Theta_{v}\\) in (50). It is necessary to stress that the above formula gives a surprisingly good approximation even in nonrelativistic limit for the excluded volume of two spheres. For \\(R_{2}=R_{1}\\equiv R\\) one finds that the maximal excluded volume corresponds to the angle \\(\\Theta_{v}=\\frac{\\pi}{4}\\) and its value is \\(\\max\\{v_{12}^{Urel}(R,R)\\}\\approx\\frac{36}{3}\\pi R^{3}\\), whereas the exact result for nonrelativistic spheres is \\(v_{2s}=\\frac{32}{3}\\pi R^{3}\\), i.e., the ultrarelativistic formula (50) describes a nonrelativistic situation with the maximal deviation of about 10% (see the left panel in Fig. 4). Eq. (50) also describes the excluded volume \\(v_{sd}=\\frac{10+3\\pi}{3}R^{3}\\) for a nonrelativistic sphere and ultrarelativistic ellipsoid with the maximal deviation from the exact result of about 15% (see the right panel in Fig. 4). In order to improve the accuracy of (50) for nonrelativistic case, we introduce a factor \\(\\alpha\\) to normalize the integral of the excluded volume (50) over the whole solid angle to the volume of two spheres \\[v^{Nrel}(R_{1},R_{2})=\\alpha\\ v_{12}^{Urel}(R_{1},R_{2})\\,;\\qquad\\alpha=\\frac{ 4\\pi\\left(R_{1}+R_{2}\\right)^{3}}{3\\int\\limits_{0}^{\\pi}d\\Theta_{v}\\,\\sin \\left(\\Theta_{v}\\right)\\,v_{12}^{Urel}(R_{1},R_{2})\\Big{|}_{\\gamma_{1}=\\gamma_{ 2}=1}}. \\tag{51}\\] **Fig. 4.** Comparison of the relativistic excluded volume obtained by the approximative ultrarelativistic formula with the exact results. The left panel shows the quality of the approximation \\(V_{EXCL}\\equiv v_{12}^{Urel}(R,R)\\) (50) to describe the excluded volume of two nonrelativistic spheres \\(V_{2SP}\\) of the same radius \\(R\\) as a function of the spherical angle \\(\\Theta\\). The right panel depicts the approximation to the excluded volume of the nonrelativistic sphere and disk. In both panels the solid curve corresponds to the exact result and the long dashed one corresponds to the ultrarelativistic approximation by two cylinders. The averaged ultrarelativistic excluded volume in the left panel is \\(\\frac{\\left(V_{EXCL}\\right)_{\\Theta}}{V_{2SP}}\\approx 1.065\\). 
The corresponding averaged value for the right panel is \\(\\frac{\\left(V_{EXCL}\\right)_{\\Theta}}{V_{2SP}}\\approx 0.655\\), which should be compared with the exact value \\(\\frac{\\left(V_{EXCL}\\right)_{\\Theta}}{V_{2SP}}\\approx 0.607\\). For the equal values of hard core radii and equal masses of particles the normalization factor reduces to the following value \\(\\alpha\\approx\\frac{1}{1.0654}\\), i.e., it compensates most of the deviations discussed above. With such a correction the excluded volume (51) can be safely used for the nonrelativistic domain because in this case the VdW excluded volume effect is itself a correction to the ideal gas and, therefore, the remaining deviation from the exact result is of a higher order. It is useful to have the relativistic excluded volume expressed in terms of 3-momenta \\[v_{12}^{Urel}(R_{1},R_{2}) = \\frac{v_{01}}{\\gamma_{1}}\\left(1+R_{2}\\frac{|{\\bf k}_{1}||{\\bf k}_{2}|+|{\\bf k}_{1}\\cdot{\\bf k}_{2}|}{2\\,R_{1}\\,|{\\bf k}_{1}||{\\bf k}_{2}|}\\right)^{2}+\\frac{v_{02}}{\\gamma_{1}}\\left(1+R_{1}\\frac{|{\\bf k}_{1}||{\\bf k}_{2}|+|{\\bf k}_{1}\\cdot{\\bf k}_{2}|}{2\\,R_{2}\\,|{\\bf k}_{1}||{\\bf k}_{2}|}\\right)^{2}+2\\,\\pi R_{1}R_{2}(R_{1}+R_{2})\\frac{|{\\bf k}_{1}\\times{\\bf k}_{2}|}{|{\\bf k}_{1}||{\\bf k}_{2}|}\\,, \\tag{52}\\] where \\(v_{0q}\\) denote the corresponding proper volumes \\(v_{0q}=\\frac{4}{3}\\pi R_{q}^{3}\\,,\\quad q=\\{1,2\\}\\).
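For readers who wish to evaluate the momentum representation (52) directly, the following Python sketch is one possible implementation. It rewrites the momentum-dependent factors in terms of the angle \\(\\Theta\\) between the 3-momenta, writes the two contracted terms symmetrically in \\(\\gamma_{1}\\) and \\(\\gamma_{2}\\) (an assumption on our part; Eq. (52) as printed carries \\(\\gamma_{1}\\) in both), and checks that the orientation average of the \\(\\gamma=1\\) limit reproduces the normalization \\(\\alpha\\approx 1/1.0654\\) quoted above for equal radii. The hard-core radius is an arbitrary illustrative value.

```python
import numpy as np
from scipy.integrate import quad

# Ultrarelativistic excluded volume in the spirit of Eq. (52), expressed through the angle
# Theta between the 3-momenta:  (|k1||k2| + |k1.k2|)/(|k1||k2|) = 1 + |cos(Theta)| and
# |k1 x k2|/(|k1||k2|) = |sin(Theta)|.  gamma1, gamma2 are the Lorentz factors of the particles.
def v_excl_urel(theta, gamma1, gamma2, R1, R2):
    v01 = 4.0 * np.pi * R1**3 / 3.0
    v02 = 4.0 * np.pi * R2**3 / 3.0
    c, s = abs(np.cos(theta)), abs(np.sin(theta))
    term1 = (v01 / gamma1) * (1.0 + R2 * (1.0 + c) / (2.0 * R1))**2
    term2 = (v02 / gamma2) * (1.0 + R1 * (1.0 + c) / (2.0 * R2))**2   # gamma2 assumed by symmetry
    term3 = 2.0 * np.pi * R1 * R2 * (R1 + R2) * s
    return term1 + term2 + term3

# Normalization in the spirit of Eq. (51): rescale the orientation average of the
# gamma = 1 limit to the Van der Waals excluded volume of two spheres.
def alpha_norm(R1, R2):
    avg, _ = quad(lambda t: 0.5 * np.sin(t) * v_excl_urel(t, 1.0, 1.0, R1, R2), 0.0, np.pi)
    return (4.0 * np.pi * (R1 + R2)**3 / 3.0) / avg

R = 0.4  # hard-core radius (fm); illustrative
print(f"alpha = {alpha_norm(R, R):.4f}   (text quotes 1/1.0654 = {1 / 1.0654:.4f})")
# A relativistic configuration: gamma-factors 5 and 2, momenta at 60 degrees.
print(f"v_excl = {v_excl_urel(np.pi / 3, 5.0, 2.0, R, R):.4f} fm^3")
```

With equal radii the first printed value comes out close to 0.939, which is the 1/1.0654 correction factor discussed in the text.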
For the practical calculations it is necessary to express the relativistic excluded volume in terms of the three 4-vectors - the two 4-momenta of the particles, \\(k_{q\\,\\mu}\\), and the collective 4-velocity \\(u^{\\mu}=\\frac{1}{\\sqrt{1-{\\bf v}^{2}}}(1,{\\bf v})\\). For this purpose one should reexpress the gamma-factors and at least one of the trigonometric functions in (50) in a covariant form \\[\\gamma_{q}=\\frac{\\sqrt{m^{2}+{\\bf k}_{q}^{2}}}{m}=\\frac{k_{q}^{\\mu}\\,u_{\\mu}}{m},\\quad\\cos\\left(\\Theta_{v}\\right)=\\frac{k_{1}^{\\mu}\\,u_{\\mu}\\;k_{2}^{\\nu}\\,u_{\\nu}-k_{1}^{\\mu}\\,k_{2\\,\\mu}}{\\sqrt{\\left((k_{1}^{\\mu}\\,u_{\\mu})^{2}-m^{2}\\right)\\left((k_{2}^{\\mu}\\,u_{\\mu})^{2}-m^{2}\\right)}}. \\tag{53}\\] Using Eq. (53), one can express any trigonometric function of \\(\\Theta_{v}\\) in a covariant form.

## References

* [1] G. D. Yen and M. I. Gorenstein, Phys. Rev. **C 59** (1999) 2788.
* [2] P. Braun-Munzinger, I. Heppe and J. Stachel, Phys. Lett. **B 465** (1999) 15.
* [3] K. A. Bugaev, M. I. Gorenstein, H. Stocker and W. Greiner, Phys. Lett. **485** (2000) 121.
* [4] G. Zeeb, K. A. Bugaev, P. T. Reuter and H. Stocker, nucl-th/0209011.
* [5] R. Venugopalan and M. Prakash, Nucl. Phys. **A546** (1992) 718.
* [6] D. H. Rischke, M. I. Gorenstein, H. Stocker and W. Greiner, Z. Phys. **C51** (1991) 485.
* [7] Y. Nambu and G. Jona-Lasinio, Phys. Rev. **122** (1961) 345; Phys. Rev. **124** (1961) 246.
* [8] P. A. M. Guichon, Phys. Lett. **B 200** (1988) 235.
* [9] P. Papazoglou _et al._, Phys. Rev. **C 57** (1998) 2576.
* [10] S. Datta, F. Karsch, P. Petreczky and I. Wetzorke, hep-lat/0208012.
* [11] M. Asakawa and T. Hatsuda, Nucl. Phys. **A 715** (2003) 863c.
* [12] see also discussions and references in E. V. Shuryak, Prog. Part. Nucl. Phys. **53** (2004) 273; E. V. Shuryak and I. Zahed, Phys. Rev. **C 70** (2004) 021901; E. V. Shuryak, Nucl. Phys. A **774** (2006) 387; hep-ph/0510123.
* [13] E. V. Shuryak and I. Zahed, Phys. Rev. **D 70** (2004) 054507; hep-ph/0403127.
* [14] M. Mannarelli and R. Rapp, hep-ph/0505080.
* [15] K. A. Bugaev, Phys. Rev. **C 76** (2007) 014903.
* [16] Q. R. Zhang, Z. Phys. **A 353** (1995) 345.
* [17] J. E. Mayer and M. Goeppert-Mayer, "Statistical Mechanics" (1977).
* [18] K. A. Bugaev, J. Phys. **G 28** (2002) 1981.
* [19] M. I. Gorenstein, K. A. Bugaev and M. Gazdzicki, Phys. Rev. Lett. **88** (2002) 132301; K. A. Bugaev, M. Gazdzicki and M. I. Gorenstein, Phys. Lett. **B 544** (2002) 127.
* [20] see, for instance, A. Munster: "Statistical Thermodynamics" Vol. II, Springer-Verlag, Heidelberg 1974.
* [21] for instance, see M. Gyulassy, Prog. Part. Nucl. Phys. **15** (1985) 403 and references therein.
* [22] J. D. van der Waals, Z. Phys. Chem. **5** (1889) 133.
* [23] M. I. Gorenstein, A. P. Kostyuk and Y. D. Krivenko, J. Phys. **G 25** (1999) L75.
* [24] K. A. Bugaev, Nucl. Phys. **A 606** (1996) 559; K. A. Bugaev and M. I. Gorenstein, nucl-th/9903072 (1999) 70 p.
* [25] K. A. Bugaev, Phys. Rev. Lett. **90** (2003) 252301; Phys. Rev. **C70** (2004) 034903 and references therein.
* [26] A. B. Larionov, O. Buss, K. Gallmeister and U. Mosel, arXiv:0704.1785 [nucl-th].
* [27] P. Danielewicz, nucl-th/0512009 and references therein.
* [28] U. W. Heinz, nucl-th/0504011 and references therein.
* [29] R. J. Fries, B. Muller, C. Nonaka and S. A. Bass, Phys. Rev. C **68** (2003) 044902 [nucl-th/0306027].
**The Van-der-Waals Gas EOS for the Lorentz Contracted Rigid Spheres**

**Kyrill A. Bugaev**

Bogolyubov Institute for Theoretical Physics, 03680 - Kiev, Ukraine

**Abstract.** The relativistic equation of state (EOS) of the Van-der-Waals gas is suggested and analyzed. In contrast to the usual case, the Lorentz contraction of the sphere's volume is taken into account. It is proven that the suggested EOS obeys causality in the limit of high densities, i.e., the sound velocity of such a medium is subluminal. The pressure obtained for high values of the chemical potential has an interesting kinetic interpretation. The suggested EOS shows that for high densities the most probable configuration corresponds to the smallest value of the relativistic excluded volume. In other words, for high densities the configurations with collinear velocities of the neighboring hard core particles are the most probable ones. This, perhaps, may shed light on the coalescence process of any relativistic hard core constituents.

**Key words:** Equation of state, hard spheres, relativistic Van-der-Waals model
**A higher-order active contour model of a 'gas of circles' and its application to tree crown extraction**

Peter Horvath -- Ian H. Jermyn -- Zoltan Kato -- Josiane Zerubia

Theme COG -- Systemes cognitifs -- Projet Ariana

Rapport de recherche n\\({}^{\\circ}\\)?? -- November 2006 -- 33 pages

**Abstract:** Many image processing problems require the identification of the region in the image domain that corresponds to a given entity in the scene. The automatic solution of these problems requires models that incorporate significant prior knowledge about the shape of the region. Many methods for including such knowledge run into difficulties when the topology is unknown a priori, for example when the entity is composed of an unknown number of similar objects. Higher-order active contours (HOACs) are a method for modelling non-trivial prior knowledge about shape without necessarily constraining the topology of the region, via the inclusion of non-local interactions between region boundary points in the energy defining the model. The case of an unknown number of circular objects arises in a number of domains, _e.g._ medical, biological, nanotechnological, and remote sensing imagery. Regions composed of an a priori unknown number of circles may be referred to as a 'gas of circles'. In this report, we present a HOAC model of a 'gas of circles'. In order to guarantee stable circles, we conduct a stability analysis via a functional Taylor expansion of the HOAC energy around a circular shape. This analysis fixes one of the model parameters in terms of the others and constrains the rest. In conjunction with a suitable likelihood energy, we apply the model to the extraction of tree crowns from aerial imagery, and show that the new model outperforms other techniques.

**Key-words:** tree, tree crown, extraction, aerial image, higher order, active contour, gas of circles, prior, shape

This work was partially supported by EU project IMAVIS (FP5 IHP-MCHT99), EU project MUSCLE (FP6-507752), Egide PAI Balaton (09020X), OTKA T-046805, and a Janos Bolyai Research Fellowship of HAS. The authors would like to thank the French National Forest Inventory (IFN) for the aerial images.
###### Contents

* 1 Introduction
* 1.1 Higher order active contours
* 2 The 'gas of circles' model
* 2.1 Stability analysis
* 2.1.1 Parameter constraints
* 2.2 Geometric experiments
* 3 Data terms and experiments
* 3.1 Previous work
* 3.2 Data terms and gradient descent
* 3.3 Tree crown extraction from aerial images
* 3.4 Noisy synthetic images
* 3.5 Circle separation: comparison to classical active contours
* 4 Conclusion
* A Details of stability computations
* A.1 Length
* A.2 Area
* A.3 Quadratic energy
* A.3.1 Inner product of tangent vectors
* A.3.2 Distance between two points
* A.3.3 Interaction function
* A.3.4 Combining terms

## 1 Introduction

Forestry is a domain in which image processing and computer vision techniques can have a significant impact. Resource management and conservation, whether commercial or in the public domain, require information about the current state of a forest or plantation. Much of this information can be summarized in statistics related to the size and placement of individual tree crowns (_e.g._ mean crown area and diameter, density of the trees). Currently, this information is gathered using expensive field surveys and time-consuming semi-automatic procedures, with the result that partial information from a number of chosen sites frequently has to be extrapolated. An image processing method capable of automatically extracting tree crowns from high resolution aerial or satellite images (an example is shown in figure 1) and computing statistics based on the results would greatly aid this domain.

Figure 1: Real image with planted forest ©IFN.

The tree crown extraction problem can be viewed as a special case of a general image understanding problem: the identification of the region \\(R\\) in the image domain \\(\\Omega\\) corresponding to some entity or entities in the scene. In order to solve this problem in any particular case, we have to construct, even if only implicitly, a probability distribution on the space of regions \\(\\mbox{P}(R|I,K)\\). This distribution depends on the current image data \\(I\\) and on any prior knowledge \\(K\\) we may have about the region or about its relation to the image data, as encoded in the likelihood \\(\\mbox{P}(I|R,K)\\) and the prior \\(\\mbox{P}(R|K)\\) appearing in the Bayes' decomposition of \\(\\mbox{P}(R|I,K)\\) (or equivalently in their energies
So, for example, the use of classical active contour energies to find entities in images usually requires the initialization of the contour close to the entity to be found, which represents a large injection of prior knowledge by the user. The simplest prior information concerns the smoothness properties of the boundary of the region. For example, the Ising model and many active contour models [3, 4, 7, 8, 18] use the length of the region boundary and its interior area as their prior energies. This type of prior information can be augmented using more demanding measures of smoothness, for example the boundary curvature [14, 18]. These models are all limited, though, by the fact that they are integrals over the region boundary of some function of various derivatives of the boundary. In consequence, they capture local differential geometric information, corresponding to local interactions between boundary points, but can say nothing more global about the shape of the region. To go further, it is therefore clear that one must introduce longer range interactions. There are two principal ways to do this: one is to introduce hidden variables, given which the original variables of interest are (more or less) independent. Marginalizing over the hidden variables then introduces interactions between the original variables. Another is to include explicit long-range interactions between the original variables (these interactions may also have an interpretation in terms of marginalization over some hidden variables, which are left implicit). The first approach has been much investigated, in the form of template shapes and their deformations [5, 9, 10, 11, 12, 13, 17, 21, 22, 23, 24, 26]. Here a probability distribution or an energy is defined based on a distance measure of some kind between regions. One region, the template, is fixed, while the other is the variable \\(R\\). This type of model constrains \\(R\\) to be close to the template region in the space of regions. Template regions may be learned from examples or fixed by hand; similarly the distance function maybe based, for example, on the learned covariance of a Gaussian distribution, or chosen _a priori_. The most sophisticated methods use the kernel trick to define the distance as a pullback from a high-dimensional space, thereby allowing more complex behaviours. Multiple templates may also be used, corresponding to a mixture model. It is clear that these methods implicitly introduce long-range interactions: if you know that one half of a given region aligns well with the template, this tells you something about the likely position and shape of the other half. The above methods assign high probability to regions that lie 'close' to certain points in the space of regions. As such, it is difficult to construct models of this type that favour regions for which the topology, and in particular the number of connected components, is unknown _a priori_. There are many problems, however, for which this is the case, for example, the extraction of networks, or the extraction of an unknown number of objects of a particular type from astronomical, biological,medical, or remote sensing images. For this type of prior knowledge, a different type of model is needed. Higher-order active contours (HOACs) are one such category of models. HOACs, first described by Rochery et al. [29] (see also [30, 31] for fuller descriptions), take the second approach mentioned above. 
They introduce explicit long-range interactions between region boundary points via energies that contain multiple integrals over the boundary, thus avoiding the use of template shapes. HOAC energies can be made intrinsically Euclidean invariant, and, as required by the above analysis, incorporate sophisticated prior information about region shape without necessarily constraining the topology of the region. As with other methods incorporating significant prior knowledge, it is not necessary to introduce extra knowledge via an initialization close to the target region: a generic initialization suffices, thus rendering the method quasi-automatic. Rochery et al. [29] applied the method to road extraction from satellite and aerial images using a prior which favours network-like objects. In this report, we describe a HOAC model of a 'gas of circles': the model favours regions composed of an _a priori_ unknown number of circles of a certain radius. For such a model to work, the circles must be stable to small perturbations of their boundaries, _i.e._ they must be local minima of the HOAC energy, for otherwise a circle would tend to 'decay' into other shapes. This is a non-trivial requirement. We impose it by performing a functional Taylor expansion of the HOAC energy around a circle, and then demanding that the first order term be zero for all perturbations, and that the second order term be positive semi-definite. These conditions allow us to fix one of the model parameters in terms of the others, and constrain the rest. Experiments using the HOAC energy demonstrate empirically the coherence between these theoretical considerations and the gradient descent algorithm used in practice to minimize the energy. The model has many potential applications, to medical, biological, physical, and remote sensing imagery in which the entities to be identified are circular. We choose to apply it to the tree crown extraction problem from aerial imagery, using the 'gas of circles' model as a prior energy, and an appropriate likelihood. We will see that the extra prior knowledge included in the 'gas of circles' model permits the separation of trees that cannot be separated by simpler methods, such as maximum likelihood or classical active contours. In the rest of this section, we present a brief introduction to HOACs. In section 2, we describe the 'gas of circles' HOAC model. The key to this model is the analysis of the stability of a circle as a function of the model parameters. To demonstrate the prior knowledge contained in the model, and the empirical correctness of the stability analysis, we present experimental results using the new energy. In section 3, we apply the new model to tree crown extraction. We describe a likelihood energy for trees based on the image intensity and gradient, and then present experimental results on synthetic data and on aerial images. We conclude in section 4, and discuss some open issues with the model. ### Higher order active contours As described in section 1, HOACs introduce long-range interactions between boundary points not via the intermediary of a template region or regions to which \\(R\\) is compared, but directly, by using energy terms that involve multiple integrals over the boundary. The integrands of such integrals thus depend on two or more, perhaps widely separated, boundary points simultaneously, and can thereby impose relations between tuples of points. 
Euclidean invariance of such energies can be imposed directly on the energy, without the necessity to estimate a transformation between the boundary sought and the template. Perhaps more importantly, because there is no template, the topology of the region need not be constrained, a factor that is critical when the topology is not known _a priori_. As with all active contour models, a region \\(R\\) is represented by its boundary, \\(\\partial R\\). There are various ways to think of the boundary of a region. If the region has only one connected component, which is also simply-connected, then a boundary is an equivalence class of embeddings of the circle \\(S^{1}\\) under the action of orientation-preserving diffeomorphisms of \\(S^{1}\\). When more, possibly multiply-connected components are included, however, things get complicated. First, the number of embeddings of \\(S^{1}\\) that are required depends on the topology, and second, there are constraints on the orientations of different components if they are to represent regions with handles. An alternative is to view \\(\\partial R\\) as a closed \\(1\\)-chain \\(\\gamma\\) in the image domain \\(\\Omega\\) ([6] is a useful reference for the following discussion). Although region boundaries correspond to a special subset of closed \\(1\\)-chains known as domains of integration, active contour energies themselves are defined for general \\(1\\)-chains. It is convenient to use this more general context to distinguish HOAC energies from classical active contours, because it allows for notions of linearity to be used to characterize the complexity of energy functionals. Using this representation, HOAC energies can be defined as follows [30, 31]. Let \\(\\gamma\\) be a \\(1\\)-chain in \\(\\Omega\\), and \\(\\Box\\gamma\\) be its domain. Then \\(\\gamma^{n}:(\\Box\\gamma)^{n}\\to\\Omega^{n}\\) is an \\(n\\)-chain in \\(\\Omega^{n}\\). We define a class of \\((n-p)\\)-forms on \\(\\Omega^{n}\\) that are \\(1\\)-forms with respect to \\((n-p)\\) factors and \\(0\\)-forms with respect to the remaining \\(p\\) factors (by symmetry, it does not matter which \\(p\\) factors). These forms can be pulled back to \\((\\Box\\gamma)^{n}\\) by \\(\\gamma^{n}\\). The Hodge duals of the \\(p\\)\\(0\\)-form factors with respect to the induced metric on \\(\\Box\\gamma\\) can then be taken independently on each such factor, thus converting them to \\(1\\)-forms, and rendering the whole form an \\(n\\)-form on \\((\\Box\\gamma)^{n}\\). This \\(n\\)-form can then be integrated on \\((\\Box\\gamma)^{n}\\). In the \\((n,p)=(n,0)\\) cases, we are simply integrating a general \\(n\\)-form on the image of \\(\\gamma^{n}\\) in \\(\\Omega^{n}\\), thus defining a linear functional on the space of \\(n\\)-chains in \\(\\Omega^{n}\\), and hence an \\(n\\)th-order monomial on the space of \\(1\\)-chains in \\(\\Omega\\). Taking arbitrary linear combinations of such monomials then gives the space of polynomial functionals on the space of \\(1\\)-chains. By analogy we refer to the general \\((n,p)\\) cases as 'generalized \\(n\\)th-order monomials' on the space of \\(1\\)-chains in \\(\\Omega\\), and to arbitrary linear combinations of the latter as 'generalized polynomial functionals' on the space of \\(1\\)-chains in \\(\\Omega\\). HOAC energies are generalized polynomial functionals. Standard active contour energies are generalized _linear_ functionals on \\(1\\)-chains in this sense, hence the term 'higher-order'. 
The \\((1,1)\\) case is simply the boundary length in some metric. An interesting application of the \\((2,2)\\) case to topology preservation is described by Sundaramoorthi and Yezzi [32]. The \\((1,0)\\) case gives the region area in some metric. To be more concrete, we specialize to the \\((n,0)\\) case. Let \\(F\\) be an \\(n\\)-form on \\(\\Omega^{n}\\). We pull \\(F\\) back to the domain of \\(\\gamma^{n}\\) and integrate it: \\[E(\\gamma)=\\int_{(\\partial R)^{n}}F=\\int_{(\\Box\\gamma)^{n}}(\\gamma^{n})^{*}F. \\tag{1.1}\\]Specializing again to the case \\(n=2\\), and using the antisymmetry of \\(F\\) together with the symmetry of \\(\\gamma^{2}\\), we can rewrite the energy functional in this case as \\[E(\\gamma)=\\int_{(\\partial R)^{2}}F=\\int_{(\\square\\gamma)^{2}}(\\gamma\\times\\gamma) ^{*}F=\\iint_{(\\square\\gamma)^{2}}dt\\;dt^{\\prime}\\;\\tau(t)\\cdot F(\\gamma(t),\\gamma (t^{\\prime}))\\cdot\\tau(t^{\\prime})\\;, \\tag{1.2}\\] where \\(F(x,x^{\\prime})\\), for each \\((x,x^{\\prime})\\in\\Omega^{2}\\), is a \\(2\\times 2\\) matrix, \\(t\\) is a coordinate on \\(\\square\\gamma\\), and \\(\\tau=\\dot{\\gamma}\\) is the tangent vector to \\(\\gamma\\). By imposing Euclidean invariance on this term, and adding linear terms, Rochery et al. [29] defined the following higher-order active contour prior: \\[E_{\\text{g}}(\\gamma)=\\lambda_{C}L(\\gamma)+\\alpha_{C}A(\\gamma)-\\frac{\\beta_{C}} {2}\\iint dt\\;dt^{\\prime}\\;\\tau(t^{\\prime})\\cdot\\tau(t)\\;\\Phi(R(t,t^{\\prime}))\\;, \\tag{1.3}\\] where \\(L\\) is the boundary length functional, \\(A\\) is the interior area functional and \\(R(t,t^{\\prime})=|\\gamma(t)-\\gamma(t^{\\prime})|\\) is the Euclidean distance between \\(\\gamma(t)\\) and \\(\\gamma(t^{\\prime})\\). Rochery et al. [29] used the following interaction function \\(\\Phi\\): \\[\\Phi(z)=\\begin{cases}1&z<d-\\epsilon\\;,\\\\ \\frac{1}{2}\\big{(}1-\\frac{z-d}{\\epsilon}-\\frac{1}{\\pi}\\sin\\frac{\\pi(z-d)}{ \\epsilon}\\big{)}&d-\\epsilon\\leq z<d+\\epsilon\\;,\\\\ 0&z\\geq d+\\epsilon\\;.\\end{cases} \\tag{1.4}\\] In this paper, we use this same interaction function with \\(d=\\epsilon\\), but other monotonically decreasing functions lead to qualitatively similar results. ## 2 The 'gas of circles' model For certain ranges of the parameters involved, the energy (1.3) favours regions in the form of networks, consisting of long narrow arms with approximately parallel sides, joined together at junctions, as described by Rochery et al. [29, 30, 31]. It thus provides a good prior for network extraction from images. This behaviour does not persist for all parameter values, however, and we will exploit this parameter dependence to create a model for a 'gas of circles', an energy that favours regions composed of an _a priori_ unknown number of circles of a certain radius. For this to work, a circle of the given radius, hereafter denoted \\(r_{0}\\), must be stable, that is, it must be a local minimum of the energy. In section 2.1, we conduct a stability analysis of a circle, and discover that stable circles are indeed possible provided certain constraints are placed on the parameters. More specifically, we expand the energy \\(E_{\\text{g}}\\) in a functional Taylor series to second order around a circle of radius \\(r_{0}\\). The constraint that the circle be an energy extremum then requires that the first order term be zero, while the constraint that it be a minimum requires that the operator in the second order term be positive semi-definite. 
These requirements constrain the parameter values. In subsection 2.2, we present numerical experiments using \\(E_{\\text{g}}\\) that confirm the results of this analysis. ### Stability analysis We want to expand the energy \\(E_{\\rm g}\\) around a circle of radius \\(r_{0}\\). We denote a member of the equivalence class of maps representing the \\(1\\)-chain defining the circle by \\(\\gamma_{0}\\). The energy \\(E_{\\rm g}\\) is invariant to diffeomorphisms of \\(\\Box\\gamma_{0}\\), and thus is well-defined on \\(1\\)-chains. To second order, \\[E_{\\rm g}(\\gamma)=E_{\\rm g}(\\gamma_{0}+\\delta\\gamma)=E_{\\rm g}(\\gamma_{0})+ \\langle\\delta\\gamma|\\frac{\\delta E_{\\rm g}}{\\delta\\gamma}\\rangle_{\\gamma_{0}} +\\frac{1}{2}\\langle\\delta\\gamma|\\frac{\\delta^{2}E_{\\rm g}}{\\delta\\gamma^{2}}| \\delta\\gamma\\rangle_{\\gamma_{0}}. \\tag{1}\\] where \\(\\langle\\cdot|\\cdot\\rangle\\) is a metric on the space of \\(1\\)-chains. Since \\(\\gamma_{0}\\) represents a circle, it is easiest to express it in terms of polar coordinates \\(r,\\theta\\) on \\(\\Omega\\). For a suitable choice of coordinate on \\(S^{1}\\), a circle of radius \\(r_{0}\\) centred on the origin is then given by \\(\\gamma_{0}(t)=(r_{0}(t),\\theta_{0}(t))\\), where \\(r_{0}(t)=r_{0}\\), \\(\\theta(t)=t\\), and \\(t\\in[-\\pi,\\pi)\\). We are interested in the behaviour of small perturbations \\(\\delta\\gamma=(\\delta r,\\delta\\theta)\\). The first thing to notice is that because the energy \\(E_{\\rm g}\\) is defined on \\(1\\)-chains, tangential changes in \\(\\gamma\\) do not affect its value. We can therefore set \\(\\delta\\theta=0\\), and concentrate on \\(\\delta r\\). On the circle, using the arc length parameterization \\(t\\), the integrands of the different terms in \\(E_{\\rm g}\\) are functions of \\(t-t^{\\prime}\\) only; they are invariant to translations around the circle. In consequence, the second derivative \\(\\delta^{2}E_{\\rm g}/\\delta\\gamma(t)\\delta\\gamma(t^{\\prime})\\) is also translation invariant, and this implies that it can be diagonalized in the Fourier basis of the tangent space at \\(\\gamma_{0}\\). It thus turns out to be easiest to perform the calculation by expressing \\(\\delta r\\) in terms of this basis: \\[\\delta r(t)=\\sum_{k}a_{k}e^{ir_{0}kt}\\, \\tag{2}\\] where \\(k\\in\\{m/r_{0}:\\;m\\in\\mathbb{Z}\\}\\). Below, we simply state the resulting expansions to second order in the \\(a_{k}\\) for the three terms appearing in equation (3). Details can be found in appendix A. The boundary length and interior area of the region are given to second order by \\[L(\\gamma) =\\int_{-\\pi}^{\\pi}dt\\;|\\tau(t)|=2\\pi r_{0}\\left\\{1+\\frac{a_{0}}{ r_{0}}+\\frac{1}{2}\\sum_{k}k^{2}|a_{k}|^{2}\\right\\} \\tag{3}\\] \\[A(\\gamma) =\\int_{-\\pi}^{\\pi}d\\theta\\int_{0}^{r(\\theta)}dr^{\\prime}\\,r^{ \\prime}=\\pi r_{0}^{2}+2\\pi r_{0}a_{0}+\\pi\\sum_{k}\\!|a_{k}|^{2}. \\tag{4}\\] Note the \\(k^{2}\\) in the second order term for \\(L\\). This is the same frequency dependence as the Laplacian, and shows that the length term plays a similar smoothing role for boundary perturbations as the Laplacian does for functions. In the area term, by contrast, the Fourier perturbations are 'white noise'. It is also worth noting that there are no stable solutions using these terms alone. For the circle to be an extremum, we require \\(\\lambda_{C}2\\pi+\\alpha_{C}2\\pi r_{0}=0\\), which tells us that \\(\\alpha_{C}=-\\lambda_{C}/r_{0}\\). 
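These expansions, and the extremum condition just derived, are easy to check numerically. The short Python sketch below compares the exact length and area of a Fourier-perturbed circle with the second-order expressions (2.3) and (2.4), and verifies that with \\(\\alpha_{C}=-\\lambda_{C}/r_{0}\\) the first-order variation of \\(\\lambda_{C}L+\\alpha_{C}A\\) under a change of radius vanishes (up to rounding); the perturbation amplitudes are arbitrary small numbers chosen for illustration.

```python
import numpy as np

r0 = 10.0
# Small perturbation delta_r(t) = a0 + 2*Re(a3*exp(3it)): Fourier modes m = 0, +3, -3 of Eq. (2.2).
a0, a3 = 0.05, 0.02 + 0.01j

N = 4096
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
dt = 2.0 * np.pi / N
dr = a0 + 2.0 * np.real(a3 * np.exp(3j * t))
ddr = 2.0 * np.real(3j * a3 * np.exp(3j * t))      # d(delta_r)/dt, computed analytically
r = r0 + dr

L_exact = np.sum(np.sqrt(r**2 + ddr**2)) * dt      # boundary length of the perturbed circle
A_exact = 0.5 * np.sum(r**2) * dt                  # enclosed area

# Second-order predictions, Eqs. (2.3)-(2.4); frequencies are k = m/r0, modes m = 0, +3, -3.
coeffs = {0: a0, 3: a3, -3: np.conj(a3)}
L_pred = 2.0 * np.pi * r0 * (1.0 + a0 / r0
                             + 0.5 * sum((m / r0)**2 * abs(a)**2 for m, a in coeffs.items()))
A_pred = (np.pi * r0**2 + 2.0 * np.pi * r0 * a0
          + np.pi * sum(abs(a)**2 for a in coeffs.values()))
print(f"L: exact {L_exact:.6f}   second order {L_pred:.6f}")
print(f"A: exact {A_exact:.6f}   second order {A_pred:.6f}")

# With alpha_C = -lambda_C/r0, the first-order change of lambda_C*L + alpha_C*A
# under a pure radius change a0 is zero.
lam = 1.0
alpha = -lam / r0
print(f"first-order variation: {lam * 2*np.pi*a0 + alpha * 2*np.pi*r0*a0:.2e}")
```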
The criterion for a minimum is, for each \\(k\\), \\(\\lambda_{C}r_{0}k^{2}+\\alpha_{C}\\geq 0\\). Note that we must have \\(\\lambda_{C}>0\\) for stability at high frequencies. Substituting for \\(\\alpha_{C}\\), the condition becomes \\(\\lambda_{C}(r_{0}k^{2}-r_{0}^{-1})\\geq 0\\). Substituting \\(k=m/r_{0}\\), gives the condition \\(m^{2}-1\\geq 0\\). Two points are worth noting. The first is the one wehave already made: the zero frequency perturbation is not stable. The second is that the \\(m=1\\) perturbation is marginally stable to second order, that is, such changes require no energy to this order. To fully analyse them, we must therefore go to higher order in the Taylor series. This feature will appear also in the analysis of the full energy \\(E_{\\rm g}\\). The quadratic term can be expressed to second order as \\[\\iint_{-\\pi}^{\\pi}dt\\;dt^{\\prime}\\;\\tau(t^{\\prime}) \\cdot\\tau(t)\\;\\Phi(R(t,t^{\\prime}))=2\\pi\\int_{-\\pi}^{\\pi}dp\\;F_{0 0}(p)+4\\pi a_{0}\\int_{-\\pi}^{\\pi}dp\\,F_{10}(p)\\] \\[+\\sum_{k}2\\pi|a_{k}|^{2}\\bigg{\\{}\\Big{[}2\\int_{-\\pi}^{\\pi}dp\\,F_{ 20}(p)+\\int_{-\\pi}^{\\pi}dp\\,e^{-ir_{0}kp}F_{21}(p)\\Big{]}\\] \\[\\quad-\\Big{[}2ir_{0}k\\int_{-\\pi}^{\\pi}dp\\,e^{-ir_{0}kp}F_{23}(p) \\Big{]}+\\Big{[}r_{0}^{2}k^{2}\\int_{-\\pi}^{\\pi}dp\\,e^{-ir_{0}kp}F_{24}(p)\\Big{]} \\bigg{\\}}\\, \\tag{2.5}\\] The \\(F_{ij}\\) are functionals of \\(\\Phi\\) (hence functions of \\(d\\) and \\(\\epsilon\\) for \\(\\Phi\\) given by equation (1.4)), and functions of \\(r_{0}\\), as well as functions of the dummy variable \\(p\\). Combining equations (2.3), (2.4), and (2.5), we find the energy functional (1.3) up to the second order: \\[E_{\\rm g}(\\gamma_{0}+\\delta\\gamma) =e_{0}(r_{0})+a_{0}e_{1}(r_{0})+\\frac{1}{2}\\sum_{k}|a_{k}|^{2}e_{2 }(k,r_{0})\\] \\[=\\Big{\\{}2\\pi\\lambda_{C}r_{0}+\\pi\\alpha_{C}r_{0}^{2}-\\pi\\beta_{C} G_{00}(r_{0})\\Big{\\}}+a_{0}\\Big{\\{}2\\pi\\lambda_{C}+2\\pi\\alpha_{C}r_{0}-2\\pi \\beta_{C}G_{10}(r_{0})\\Big{\\}}\\] \\[+\\frac{1}{2}\\sum_{k}|a_{k}|^{2}\\Big{\\{}2\\pi\\lambda_{C}r_{0}k^{2}+ 2\\pi\\alpha_{C}\\] \\[-2\\pi\\beta_{C}\\big{[}2G_{20}(r_{0})+G_{21}(k,r_{0})-2ir_{0}kG_{23 }(k,r_{0})+r_{0}^{2}k^{2}G_{24}(k,r_{0})\\big{]}\\Big{\\}}\\, \\tag{2.6}\\] where \\(G_{ij}=\\int_{-\\pi}^{\\pi}dp\\,e^{-ir_{0}(1-\\delta(j))kp}F_{ij}(p)\\). Note that as anticipated, there are no off-diagonal terms linking \\(a_{k}\\) and \\(a_{k^{\\prime}}\\) for \\(k\ eq k^{\\prime}\\): the Fourier basis diagonalizes the second order term. #### 2.1.1 Parameter constraints Note that a circle of any radius is always an extremum for non-zero frequency perturbations (\\(a_{k}\\) for \\(k\ eq 0\\)), as these Fourier coefficients do not appear in the first order term (this is also a consequence of invariance to translations around the circle). The condition that a circle be an extremum for \\(a_{0}\\) as well (\\(e_{1}=0\\)) gives rise to a relation between the parameters: \\[\\beta_{C}(\\lambda_{C},\\alpha_{C},\\hat{r}_{0})=\\frac{\\lambda_{C}+\\alpha_{C} \\hat{r}_{0}}{G_{10}(\\hat{r}_{0})}\\, \\tag{2.7}\\] where we have introduced \\(\\hat{r}_{0}\\) to indicate the radius at which there is an extremum, to distinguish it from \\(r_{0}\\), the radius of the circle about which we are calculating the expansion (2.1). 
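Although the functions \\(G_{ij}\\) are not written out explicitly here, the extremum condition (2.7) can be reproduced numerically straight from the definitions (1.3) and (1.4), since for a circle the quadratic term reduces to a single integral. The Python sketch below fixes \\(\\beta_{C}\\) so that a chosen radius \\(\\hat{r}_{0}\\) is an extremum of \\(e_{0}\\) and then tabulates \\(e_{0}(r_{0})\\). The values of \\(\\lambda_{C}\\), \\(\\alpha_{C}\\), \\(d\\), \\(\\epsilon\\) and \\(\\hat{r}_{0}\\) are illustrative; whether the extremum obtained in this way is the minimum or the maximum of the fold discussed in what follows depends on these choices, and the sign of the printed second derivative distinguishes the two cases.

```python
import numpy as np
from scipy.integrate import quad

lam, alpha = 1.0, 0.8          # lambda_C and alpha_C (illustrative)
d = eps = 1.0                  # interaction range of Eq. (1.4), with d = epsilon
r_hat = 1.0                    # desired stable radius

def phi(z):
    # Interaction function of Eq. (1.4).
    if z < d - eps:
        return 1.0
    if z < d + eps:
        x = (z - d) / eps
        return 0.5 * (1.0 - x - np.sin(np.pi * x) / np.pi)
    return 0.0

def quadratic(r0):
    # For a circle, the double integral of tau.tau' Phi(|gamma - gamma'|) reduces to
    # 2*pi*r0^2 * integral over p of cos(p) * Phi(2 r0 |sin(p/2)|).
    val, _ = quad(lambda p: np.cos(p) * phi(2.0 * r0 * abs(np.sin(p / 2.0))), -np.pi, np.pi)
    return 2.0 * np.pi * r0**2 * val

def base(r0):
    return 2.0 * np.pi * lam * r0 + np.pi * alpha * r0**2   # length and area terms

# Fix beta_C so that r_hat is an extremum of e0: the numerical counterpart of Eq. (2.7).
h = 1e-3
dbase = (base(r_hat + h) - base(r_hat - h)) / (2.0 * h)
dquad = (quadratic(r_hat + h) - quadratic(r_hat - h)) / (2.0 * h)
beta = 2.0 * dbase / dquad

e0 = lambda r0: base(r0) - 0.5 * beta * quadratic(r0)
radii = np.linspace(0.2, 2.0, 10)
print("e0(r0):", np.round([e0(r) for r in radii], 3))
print(f"beta_C = {beta:.4f};  e0''(r_hat) = {(e0(r_hat+h) - 2*e0(r_hat) + e0(r_hat-h)) / h**2:.3f}")
```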
The left hand side of figure 2 shows a typical plot of the energy \\(e_{0}\\) of a circle versus its radius \\(r_{0}\\), with the \\(\\beta_{C}\\) parameter fixed using the equation (7) with \\(\\lambda_{C}=1.0\\), \\(\\alpha=0.8\\), and \\(\\hat{r}_{0}=1.0\\). The energy has a minimum at \\(r_{0}=\\hat{r}_{0}\\) as desired. The relationship between \\(\\hat{r}_{0}\\) and \\(\\beta_{C}\\) is not quite as straightforward as it might seem though. As can be seen, the energy also has a maximum at some radius. It is not _a priori_ clear whether it will be the maximum or the minimum that appears at \\(\\hat{r}_{0}\\). If we graph the positions of the extrema of the energy of a circle against \\(\\beta_{C}\\) for fixed \\(\\alpha_{C}\\), we find a curve qualitatively similar to that shown in figure 3 (this is an example of a fold catastrophe). The solid curve represents the minimum, the dashed the maximum. Note that there is indeed a unique \\(\\beta_{C}\\) for a given choice of \\(\\hat{r}_{0}\\). Denote the point at the bottom of the curve by \\((\\beta_{C}^{(0)},\\hat{r}_{0}^{(0)})\\). Note that at \\(\\beta_{C}=\\beta_{C}^{(0)}\\), the extrema merge and for \\(\\beta_{C}<\\beta_{C}^{(0)}\\), there are no extrema: the energy curve is monotonic because the quadratic term is not strong enough to overcome the shrinking effect of the length and area terms. Note also that the minimum cannot move below \\(r_{0}=r_{0}^{(0)}\\). This behaviour is easily understood qualitatively in terms of the interaction function in equation (4). If \\(2r_{0}<d-\\epsilon\\), the quadratic term will be constant, and no force will exist to stabilize the circle. In order to use equation (7) then, we have to ensure that we are on the upper branch of figure 3. Equation (7) gives the value of \\(\\beta_{C}\\) that provides an extremum of \\(e_{0}\\) with respect to changes of radius \\(a_{0}\\) at a given \\(\\hat{r}_{0}\\) (\\(e_{1}(\\hat{r}_{0})=0\\)), but we still need to check that the circle of radius \\(\\hat{r}_{0}\\) is indeed stable to perturbations with non-zero frequency, _i.e._ that \\(e_{2}(k,\\hat{r}_{0})\\) is non-negative for all \\(k\\). Scaling arguments mean that in fact the sign of \\(e_{2}\\) depends only on the combinations \\(\\tilde{r}_{0}=r_{0}/d\\) and \\(\\tilde{\\alpha}_{C}=(d/\\lambda_{C})\\alpha_{C}\\). The equation for \\(e_{2}\\) can then be used to obtain bounds on \\(\\tilde{\\alpha}_{C}\\) in terms of \\(\\tilde{r}_{0}\\). (Details of these calculations and bounds will be given elsewhere.) The right hand side of figure 2shows a plot of \\(e_{2}(k,\\hat{r}_{0})\\) against \\(\\hat{r}_{0}k\\) for the same parameter values used for the right hand side, showing that it is non-negative for all \\(\\hat{r}_{0}k\\). We call the resulting model, the energy \\(E_{\\text{g}}\\) with parameters chosen according to the above criteria, the 'gas of circles' model. ### Geometric experiments In order to illustrate the behaviour of the prior energy \\(E_{\\text{g}}\\) with parameter values fixed according to the above analysis, in this section we show the results of some experiments using this energy (there are no image terms). Figure 4 shows the result of gradient descent using \\(E_{\\text{g}}\\) starting from various different initial regions. (For details of the implementation of gradient descent for higher-order active contour energies using level set methods, see [30, 31].) In the first column, four different initial regions are shown. 
The other three columns show the final regions, at convergence, for three different sets of parameters. In particular, the three columns have \\(\\hat{r}_{0}=15.0\\), \\(10.0\\), and \\(5.0\\) respectively. In the first row, the initial shape is a circle of radius \\(32\\) pixels. The stable states, which can be seen in the other three columns, are circles with the desired radii in every case. In the second row, the initial region is composed of four circles of different radii. Depending on the value of \\(\\hat{r}_{0}\\), some of these circles shrink and disappear. This behaviour can be explained by looking at figure 2. As already noted, the energy of a circle \\(e_{0}\\) has a maximum at some radius \\(r_{\\text{max}}\\). If an initial circle has a radius less than \\(r_{\\text{max}}\\), it will'slide down the energy slope' towards \\(r_{0}=0\\), and disappear. If its radius is larger than \\(r_{\\text{max}}\\), it will finish in the minimum, with radius \\(\\hat{r}_{0}\\). This is precisely what is observed in this second experiment. In the third row, the initial condition is composed of four squares. The squares evolve to circles of the appropriate radii. The fourth row has an initial condition composed of four differing shapes. The nature of the stable states depends on the relation between the stable Figure 3: Schematic plot of the positions of the extrema of the energy of a circle versus \\(\\beta_{C}\\). radius, \\(\\hat{r}_{0}\\), and the size of the initial shapes. If \\(\\hat{r}_{0}\\) is much smaller than an initial shape, this shape will 'decay' into several circles of radius \\(\\hat{r}_{0}\\). ## 3 Data terms and experiments In this section, we apply the 'gas of circles' model developed in section 2 to the extraction of trees from aerial images. This is just one of many possible applications, corresponding to the mission of the Ariana research group. In the next section, we give a brief state of the art for tree crown extraction, and then present the data terms we use in section 3.2. In section 3.3, we describe tree crown extraction experiments on aerial images and compare the results to those found using a classical active contour model. In section 3.4, we examine the robustness of the method to noise using synthetic images. This illuminates the principal failure modes of the model, which will be further discussed in section 4, and which point the way for future work. In section 3.5, we illustrate the importance Figure 4: Experimental results using the geometric term: the first column shows the initial conditions; the other columns show the stable states for various choices of the radius. of prior information via tree crown separation experiments on synthetic images, and compare the results to those obtained using a classical active contour model. ### Previous work The problem of locating, counting, or delineating individual trees in high resolution aerial images has been studied in a number of papers. Gougeon [15, 16] observes that trees are brighter than the areas separating them. Local minima of the image are found using a \\(3\\times 3\\) filter, and the 'valleys' connecting them are then found using a \\(5\\times 5\\) filter. The tree crowns are subsequently delineated using a five-level rule-based method designed to find circular shapes, but with some small variations permitted. Larsen [19, 20] concentrates on spruce tree detection using a template matching method. The main difference between these two papers is the use of multiple templates in the second. 
The 3D shape of the tree is modelled using a generalized ellipsoid, while illumination is modelled using the position of the sun and a clear-sky model. Reflectance is modelled using single reflections, with the branches and needles acting as scatterers, while the ground is treated as a Lambertian surface. Template matching is used to calculate a correlation measure between the tree image predicted by the model and the image data. The local maxima of this measure are treated as tree candidates, and various strategies are then used to eliminate false positives. Brandtberg and Walter [2] decompose an image into multiple scales, and then define tree crown boundary candidates at each scale as zero crossings with convex greyscale curvature. Edge segment centres of curvature are then used to construct a candidate tree crown region at each scale. These are then combined over different scales and a final tree crown region is grown. Andersen et al. [1] use a morphological approach combined with a top-hat transformation for the segmentation of individual trees. All of these methods use multiple steps rather than a unified model. Closer in spirit to the present work is that of Perrin et al. [27, 28], who model the collection of tree crowns by a marked point process, where the marks are circles or ellipses. An energy is defined that penalizes, for example, overlapping shapes, and controls the parameters of the individual shapes. Reversible Jump MCMC and simulated annealing are used to estimate the tree crown configuration. Compared to the work described in this paper, the method has the advantage that overlapping trees can be represented as two separate objects, but the disadvantage that the tree crowns are not precisely delineated due to the small number of degrees of freedom for each mark. ### Data terms and gradient descent In order to couple the region model \\(E_{\\text{g}}\\) to image data, we need a likelihood, \\(\\text{P}(I|R,K)\\). The images we use for the experiments are coloured infrared (CIR) images. Originally they are composed of three bands, corresponding roughly to green, red, and near infrared (NIR). Analysis of the one-point statistics of the image in the region corresponding to trees and the image in the background, shows that the 'colour' information does not add a great deal of discriminating power compared to a 'greyscale' combination of the three bands, or indeed the NIR band on its own. We therefore model the latter. The images have a resolution \\(\\sim 0.5\\)m/pixel, and tree crowns have diameters of the order of ten pixels. Very little if any dependence remains between the pixels at this resolution, which means, when combined with the paucity of statistics within each tree crown, that pixel dependencies (_i.e._ texture) are very hard to use for modelling purposes. We therefore model the interior of tree crowns using a Gaussian distribution with mean \\(\\mu\\) and covariance \\(\\sigma^{2}\\delta_{R}\\), where \\(\\delta_{A}\\) is the identity operator on images on \\(A\\subset\\Omega\\). The background is very varied, and thus hard to model in a precise way. We use a Gaussian distribution with mean \\(\\bar{\\mu}\\) and variance \\(\\bar{\\sigma}^{2}\\delta_{\\bar{R}}\\). In general, \\(\\mu>\\bar{\\mu}\\), and \\(\\sigma<\\bar{\\sigma}\\); trees are brighter and more constant in intensity than the background. 
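The two Gaussian factors just described translate directly into data energies that are straightforward to estimate and evaluate; the short Python sketch below illustrates this on a toy image. The pixel-sum discretisation, the function names and the synthetic test image are our own assumptions rather than the authors' implementation, and the boundary gradient term introduced in the next paragraph is omitted here.

```python
import numpy as np

def fit_gaussian_terms(I, tree_mask):
    """Estimate (mu, sigma) over labelled tree pixels and (mu_b, sigma_b) over
    the background, as assumed by the two Gaussian factors of the likelihood."""
    fg = I[tree_mask]
    bg = I[~tree_mask]
    return fg.mean(), fg.std(), bg.mean(), bg.std()

def gaussian_data_energy(I, region_mask, mu, sigma, mu_b, sigma_b):
    """-ln g_R(I_R) - ln g_Rbar(I_Rbar), with the integrals approximated by
    sums over pixels (additive constants dropped)."""
    R = region_mask
    e_in = np.sum((I[R] - mu)**2) / (2.0 * sigma**2)
    e_out = np.sum((I[~R] - mu_b)**2) / (2.0 * sigma_b**2)
    return e_in + e_out

# toy usage on a synthetic NIR-like image: bright discs on a darker background
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]
truth = ((xx - 20)**2 + (yy - 20)**2 < 5**2) | ((xx - 44)**2 + (yy - 40)**2 < 5**2)
img = np.where(truth, 0.7, 0.2) + 0.03 * rng.standard_normal((64, 64))
mu, sigma, mu_b, sigma_b = fit_gaussian_terms(img, truth)
print(gaussian_data_energy(img, truth, mu, sigma, mu_b, sigma_b))
```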
The boundary of each tree crown has significant inward-pointing image gradient, and although the Gaussian models should in principle take care of this, we have found in practice that it is useful to add a gradient term to the likelihood energy. Our likelihood thus has three factors: \\[\\mbox{P}(I|R,K)=Z^{-1}\\;g_{R}(I_{R})\\;g_{\\bar{R}}(I_{\\bar{R}})\\;f_{\\partial R }(I_{\\partial R})\\.\\] where \\(I_{R}\\) and \\(I_{\\bar{R}}\\) are the images restricted to \\(R\\) and \\(\\bar{R}\\) respectively, and \\(g_{R}\\) and \\(g_{\\bar{R}}\\) are proportional to the Gaussian distributions already described, _i.e._ \\[-\\ln g_{R}(I_{R})=\\int_{R}d^{2}x\\;\\frac{1}{2\\sigma^{2}}(I_{R}(x)-\\mu)^{2} \\tag{1}\\] and similarly for \\(g_{\\bar{R}}\\). The function \\(f_{\\partial R}\\) depends on the gradient of the image \\(\\partial I\\) on the boundary \\(\\partial R\\): \\[-\\ln f_{\\partial R}(I_{\\partial R})=\\lambda_{i}\\int_{\\square_{\\gamma}}dt\\;n(t )\\cdot\\partial I(t) \\tag{2}\\] where \\(n\\) is the unnormalized outward normal to \\(\\gamma\\). The normalization constant \\(Z\\) is thus a function of \\(\\mu\\), \\(\\sigma\\), \\(\\bar{\\mu}\\), \\(\\bar{\\sigma}\\), and \\(\\lambda_{i}\\). \\(Z\\) is also a functional of the region \\(R\\). To a first approximation, it is a linear combination of \\(L(\\partial R)\\) and \\(A(R)\\). It thus has the effect of changing the parameters \\(\\lambda_{C}\\) and \\(\\alpha_{C}\\) in \\(E_{\\rm g}\\). However, since these parameters are essentially fixed by hand (the criteria described in section 2.1.1 only allow us to fix \\(\\beta_{C}\\) and constrain \\(\\alpha_{C}\\)), knowledge of the normalization constant does not change their values, and we ignore it once the likelihood parameters have been learnt. The full model is then given by \\(E(R)=E_{\\rm i}(I,R)+E_{\\rm g}(R)\\), where \\[E_{\\rm i}(I,R)=-\\ln\\mbox{P}(I|R,K)+\\ln Z=-\\ln g_{R}(I_{R})-\\ln g_{\\bar{R}}(I_ {\\bar{R}})-\\ln f_{\\partial R}(I_{\\partial R})\\.\\] The energy is minimized by gradient descent. The gradient descent equation for \\(E\\) is \\[\\hat{n}\\cdot\\frac{\\partial\\gamma}{\\partial s}(t)=-\\partial^{2}I( \\gamma(t))-\\frac{(I(\\gamma(t))-\\mu)^{2}}{2\\sigma^{2}}+\\frac{(I(\\gamma(t))-\\bar {\\mu})^{2}}{2\\bar{\\sigma}^{2}}\\\\ -\\lambda_{C}\\kappa(t)-\\alpha_{C}+\\beta_{C}\\int_{\\square_{\\gamma}} dt^{\\prime}\\;\\hat{R}(t,t^{\\prime})\\cdot n(t^{\\prime})\\hat{\\Phi}(R(t,t^{\\prime}))\\, \\tag{3}\\] where \\(s\\) is the descent time parameter, \\(\\hat{R}(t,t^{\\prime})=(\\gamma(t)-\\gamma(t^{\\prime}))/|\\gamma(t)-\\gamma(t^{ \\prime})|\\) and \\(\\kappa\\) is the signed boundary curvature. As already mentioned, to evolve the region we use the level set framework of Osher and Sethian [25] extended to the demands of nonlocal forces such as equation (3) [30, 31]. ### Tree crown extraction from aerial images In this section, we present the results of the application of the above model to \\(50\\) cm/pixel colour infrared aerial images of poplar stands located in the 'Saone et Loire' region in France. The images were provided by the French National Forest Inventory (IFN). As stated in section 3.2, we model only the NIR band of these images, as adding the other two bands does not increase discriminating power. The tree crowns in the images are \\(\\sim 8\\)-\\(10\\) pixels in diameter, _i.e._\\(\\sim 4\\)-\\(5\\)m. In the experiments, we compare our model to a classical active contour model (\\(\\beta_{C}=0\\)). 
The parameters \\(\\mu\\), \\(\\sigma\\), \\(\\bar{\\mu}\\), and \\(\\bar{\\sigma}\\) were the same for both models, and were learned from hand-labelled examples in advance. The classical active contour prior model thus has three free parameters (\\(\\lambda_{i}\\), \\(\\lambda_{C}\\) and \\(\\alpha_{C}\\)), while the HOAC 'gas of circles' model has six (\\(\\lambda_{i}\\), \\(\\lambda_{C}\\), \\(\\alpha_{C}\\), \\(\\beta_{C}\\), \\(d\\) and \\(r_{0}\\)). We fixed \\(r_{0}\\) based on our prior knowledge of tree crown size in the images, and \\(d\\) was then set equal to \\(r_{0}\\). Once \\(\\alpha_{C}\\) and \\(\\lambda_{C}\\) have been fixed, \\(\\beta_{C}\\) is determined by equation (2.7). There are thus three effective parameters for the HOAC model. In the absence of any method to learn \\(\\lambda_{i}\\), \\(\\alpha_{C}\\) and \\(\\lambda_{C}\\), they were fixed by hand to give the best results, as with most applications of active contour models. The values of \\(\\lambda_{i}\\), \\(\\lambda_{C}\\) and \\(\\alpha_{C}\\) were not the same for the classical active contour and HOAC models; they were chosen to give the best possible result for each model separately. The initial region in all real experiments was a rounded rectangle slightly bigger than the image domain. The image values in the region exterior to the image domain were set to \\(\\bar{\\mu}\\) to ensure that the region would shrink inwards. Figure 5 illustrates the first experiment. On the left is the data, showing a regularly planted poplar stand. The result is shown on the right. We have applied the algorithm only in the central part of the image, for reasons that will be explained in section 4. Figure 6 illustrates a second experiment. On the left is the data. The image shows a small piece of an irregularly planted poplar forest. The image is difficult because the intensities of the crowns are varied and the gradients are blurred. In the middle is the best result we could obtain using a classical active contour. On the right is the result we obtain with the HOAC 'gas of circles' model.1 Note that in the classical active contour result several trees that are in reality separate are merged into single connected components, and the shapes of trees are often rather distorted, whereas the prior geometric knowledge included when \\(\\beta\ eq 0\\) allows the separation of almost all the trees and the regularization of their shapes. Footnote 1: Unless otherwise specified, in the figure captions the values of the parameters learned from the image are shown when the data is mentioned, in the form \\((\\mu,\\sigma,\\bar{\\mu},\\bar{\\sigma})\\). The other parameter values are shown when each result is mentioned, in the form \\((\\lambda_{i},\\lambda_{C},\\alpha_{C},\\beta_{C},d,r_{0})\\), truncated if the parameters are not present. All parameter values are truncated to two significant figures. Unless otherwise specified, images were scaled to take values in \\([0,1]\\). The region boundary is shown in white. Figure 7 illustrates a third experiment. Again the data is on the left, the best result obtained with a classical active contour model is in the middle, and the result with the HOAC 'gas of circles' model is on the right. The trees are closer together than in the previous experiment. Using the classical active contour, the result is that the tree crown boundaries touch in the majority of cases, despite their separation in the image. Many of the connected components are malformed due to background features. 
The HOAC model produces more clearly delineated tree crowns, but there are still some joined trees. We will discuss this further in section 4.

Figure 8 shows a fourth experiment. The data is on the left, the best result obtained with a classical active contour model is in the middle, and the result with the HOAC 'gas of circles' model is on the right. Again, the 'gas of circles' model better delineates the tree crowns and separates more trees, but some joined trees remain also. The HOAC model selects only objects of the size chosen, so that false positives involving small objects do not occur.

Table 1 shows the percentages of correct tree detections, false positives and false negatives (two joined trees count as one false negative), obtained with the classical active contour model and the 'gas of circles' model in the experiments shown in figures 6, 7, and 8. The 'gas of circles' model outperforms the classical active contour in all measures, except in the number of false negatives in the experiment in figure 7. Once the segmentation result has been obtained, it is a relatively simple matter to compute statistics of interest to the forestry industry: number of trees, total area, number and area density, and so on.

Figure 5: Left: real image with a planted forest ©IFN (0.3, 0.06, 0.05, 0.05). Right: the result obtained using the 'gas of circles' model \((529,5.88,5.88,5.64,4,4)\).

Figure 6: From left to right: image of poplars ©IFN (0.73, 0.11, 0.23, 0.094); the best result with a classical active contour \((880,13,73)\); result with the 'gas of circles' model \((100,6.7,39,31,4.2,4.2)\).

Figure 7: From left to right: image of poplars ©IFN (0.71, 0.075, 0.18, 0.075); the best result with a classical active contour \((24000,100,500)\); result with the 'gas of circles' model \((1500,25,130,100,3.5,3.5)\).

Figure 8: From left to right: image of poplars ©IFN (0.71, 0.075, 0.18, 0.075); the best result with a classical active contour \((35000,100,500)\); result with the 'gas of circles' model \((1200,20,100,82,3.5,3.5)\).

### Noisy synthetic images

In this section, we present the results of tests of the sensitivity of the model to noise in the image. Fifty synthetic images were created, each with ten circles with radius \(8\) pixels and ten circles with radius \(3.5\) pixels, placed at random but with overlaps rejected. Six different levels of white Gaussian noise, with image variance to noise power ratios from \(-5\) dB to \(20\) dB, were then added to the images to generate \(300\) noisy images. Six of these, corresponding to noisy versions of the same original image, were used to learn \(\mu\), \(\sigma\), \(\bar{\mu}\), and \(\bar{\sigma}\). The model used was the same as that used for the aerial images, except that \(\lambda_{i}\) was set equal to zero. The parameters were adjusted to give a stable radius of \(8\) pixels. The results obtained on the noisy versions of one of the fifty images are shown in figure 9. Table 2 shows the proportion of false negative and false positive circle detections with respect to the total number of potentially correctly detectable circles (\(500=50\times 10\)), as well as the proportion of 'joined circles', when two circles are grouped together (an example can be seen in the bottom right image of figure 9). Detections of one of the smaller circles (which only occurred a few times even at the highest noise level) were counted as false positives.
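To make the experimental protocol concrete, the following Python sketch generates test images of this kind: non-overlapping discs of the two radii on a uniform background, with white Gaussian noise added at a prescribed image-variance-to-noise-power ratio in dB. The image size, grey levels, and the interpretation of the dB ratio as \(10\log_{10}(\mathrm{var}(I)/P_{\mathrm{noise}})\) are assumptions consistent with the description above, not the authors' exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_circles(shape=(128, 128), radii=(8.0, 3.5), n_per_radius=10,
                      fg=1.0, bg=0.0, max_tries=10000):
    """Binary image with n_per_radius non-overlapping discs of each radius."""
    img = np.full(shape, bg, dtype=float)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    placed = []  # (cx, cy, r) of accepted discs
    for r in radii:
        count, tries = 0, 0
        while count < n_per_radius and tries < max_tries:
            tries += 1
            cx = rng.uniform(r, shape[1] - r)
            cy = rng.uniform(r, shape[0] - r)
            if all((cx - px)**2 + (cy - py)**2 >= (r + pr)**2 for px, py, pr in placed):
                img[(xx - cx)**2 + (yy - cy)**2 < r**2] = fg
                placed.append((cx, cy, r))
                count += 1
    return img

def add_noise_db(img, ratio_db):
    """Add white Gaussian noise so that 10*log10(var(img)/noise_power) = ratio_db."""
    noise_power = np.var(img) / 10.0**(ratio_db / 10.0)
    return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)

clean = synthetic_circles()
noisy_images = {db: add_noise_db(clean, db) for db in (20, 15, 10, 5, 0, -5)}
```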
The method is very robust with respect to all but the highest levels of noise. The first errors occur at \(5\) dB, where there is a \(2\%\) false positive rate. At \(0\) dB, the error rate is \(\sim 10\%\), _i.e._ one of the ten circles in each image was misidentified on average. At \(-5\) dB, the total error rate increases to \(\sim 30\%\), rendering the method not very useful. Note that the principal error modes of the model are false positives and joined circles. There are good reasons why these two types of error dominate. We will discuss them further in section 4.

\begin{table} \begin{tabular}{|c||c|c|c||c|c|c|} \hline Model & & CAC & & & HOAC & \\ \hline Figure & CD \% & FP \% & FN \% & CD \% & FP \% & FN \% \\ \hline Figure 6 & 85 & 0 & 15 & 97 & 0 & 3 \\ Figure 7 & 96.2 & 2.8 & 1.9 & 97.7 & 0 & 2.3 \\ Figure 8 & 89.4 & 5 & 5.6 & 95.5 & 0.6 & 3.9 \\ \hline \end{tabular} \end{table} Table 1: Results on real images using a classical active contour model (CAC) and the 'gas of circles' model (HOAC). CD: correct detections; FP: false positives; FN: false negatives (two joined trees count as one false negative).

Figure 9: One of the synthesized images, with six different levels of added white Gaussian noise. Reading from left to right, top to bottom, the image variance to noise power ratios are \(20\), \(15\), \(10\), \(5\), \(0\), \(-5\) dB. Parameter values in the form \((\mu,\sigma,\bar{\mu},\bar{\sigma},\lambda_{C},\alpha_{C},\beta_{C})\) are shown under the six images. The parameters \(d\) and \(r_{0}\) were fixed to \(8\) throughout.

### Circle separation: comparison to classical active contours

In a final experiment, we simulated one of the most important causes of error in tree crown extraction, and examined the response of classical active contour and HOAC models to this situation. The errors, which involve joined circles similar to those found in the previous experiment, are caused by the fact that in many cases nearby tree crowns in an image are connected by regions of significant intensity with significant gradient with respect to the background, thus forming a dumbbell shape. Calling the bulbous extremities the 'bells' and the join between them the 'bar', the situation arises when the bells are brighter than the bar, while the bar is in turn brighter than the background, and most importantly, the gradient between the background and the bar is greater than that between the bar and the bells. The first row of figure 10 shows a sequence of bells connected by bars. The intensity of the bar varies along the sequence, resulting in different gradient values. We applied the classical active contour and HOAC 'gas of circles' models to these images. The middle row of figure 10 shows the best results obtained using the classical active contour model. The model was either unable to separate the individual circles, or the region completely vanished. The intuition is that if there is insufficient gradient to stop the region at the sides of the bar, then there will also be insufficient gradient to stop the region at the boundary between the bar and the bells, so that the region will vanish.
On the other hand, if there is sufficient gradient between the bar and the background to stop the region, the circles will not be separated, and a 'bridge' will remain between the two circles.2 Footnote 2: 'Bar' and 'bell' refer to image properties; we use 'bridge' and 'circle' to refer to the corresponding pieces of a dumbbell-shaped region.

The corresponding results using the HOAC 'gas of circles' model can be seen in the bottom row of figure 10. All the circles were segmented correctly, independent of the gray level of the junction. Encouraging as this is, it is not the whole story, as we indicated in section 3.4. We make a further comment on this issue in section 4, which now follows.

\begin{table} \begin{tabular}{|c||c|c|c|} \hline noise (dB) & FP (\%) & FN (\%) & J (\%) \\ \hline 20 & 0 & 0 & 0 \\ \hline 15 & 0 & 0 & 0 \\ \hline 10 & 0 & 0 & 0 \\ \hline 5 & 2 & 0 & 0 \\ \hline 0 & 6.4 & 4 & 0 \\ \hline -5 & 27.6 & 3.6 & 23 \\ \hline \end{tabular} \end{table} Table 2: Results on synthetic noisy images. FP, FN, J: percentages of false positive, false negative, and joined circle detections respectively, with respect to the potential total number of correct detections.

## 4 Conclusion

Higher-order active contours allow the inclusion of sophisticated prior information in active contour models. This information can concern the relation between a region and the data, _i.e._ the likelihood \(\text{P}(I|R,K)\), but more often it concerns the prior probability \(\text{P}(R|K)\) of a region, or in other words, its 'shape'. HOACs are particularly well adapted to including shape information about regions for which the topology is unknown _a priori_. In this paper, we have shown via a stability analysis that a HOAC energy can be constructed that describes a 'gas of circles', that is, it favours regions composed of an _a priori_ unknown number of circles of a certain radius, with short-range interactions amongst them. The requirement that circles be stable, _i.e._ local minima of the energy, fixes one of the prior parameters and constrains another.

The 'gas of circles' model has many potential uses in computer vision and image processing. Combined with an appropriate likelihood, we have applied it to the extraction of tree crowns from aerial images. It performs better than simpler techniques, such as maximum likelihood and standard active contours. In particular, it is better able to separate trees that appear joined in the data than a classical active contour model.

The model is not without its issues, however. The two most significant are related to the principal error modes found in the noise experiments of section 3.4: circles are found where the data does not ostensibly support them (false positives, a.k.a. 'phantom' circles), and two circles may be joined into a dumbbell shape and never separated. We discuss these in turn.

The first issue is that of 'phantom' circles. Circles of radius \(\hat{r}_{0}\) are local minima of the prior energy. It is the effect of the data that converts such configurations into global minima. Were we able to find the global minimum of the energy, this would be fine. However, the fact that gradient descent finds only a local minimum can create problems in areas where the data does not support the existence of circles. This is because a circle, once formed during gradient descent, cannot disappear unless there is an image force acting on it.
We thus find that circles can appear and remain even though there is no data to support them. Adding a large level of noise exacerbates this problem, because random fluctuations may encourage the appearance of circles as intermediate states during gradient descent.

The second issue is that of joined circles, discussed in section 3.5. Although the current HOAC model is better able to separate circles than a classical active contour, it still fails to do so in a number of cases, leaving a bridge between the circles. The issue here is a delicate balance between the parameters, which must be adjusted so that the sides of the bridge attract one another, thus breaking the bridge, and so that nearby circles repel one another at close range, so that the bridge does not re-form. Again, this is at least in part an algorithmic issue. Even if the two separated circles have a lower energy than the joined circles, separation may never be achieved due to a local minimum caused by the bridge. Again, high levels of noise encourage this behaviour by producing by chance image configurations that weakly support the existence of a bridge. We are currently working on solving both these problems through a more detailed theoretical analysis of the energy, and in particular the dependence of local minima on the parameters.

Figure 10: Results on circle separation comparing the HOAC 'gas of circles' model to the classical active contour model. Top: the original images. The intensity of the bar takes values equally spaced between \(48\) and \(128\) from left to right; the background is \(255\); the bells are \(0\). In the middle: the best results obtained using the classical active contour model \((8,1,1)\). Either the circles are not separated or the region vanishes. Bottom: the results using the HOAC 'gas of circles' model \((2,1,5,4.0,8,8)\). All the circles are segmented correctly.

## Appendix A Details of stability computations

In this appendix, starting from the equation for the circle and the expression for the radial perturbation in terms of its Fourier coefficients, \[\gamma(t)=\gamma_{0}(t)+\delta\gamma(t)=(r(t),\theta(t))=(r_{0}(t)+\delta r(t ),\theta_{0}(t))\] (A.1a) where \[\gamma_{0}(t)=(r_{0}(t),\theta_{0}(t))=(r_{0},t)\] (A.1b) and \[\delta r(t)=\sum_{k}a_{k}e^{ir_{0}kt}\,\] (A.1c) with \(k\in\{m/r_{0}:\,m\in\mathbb{Z}\}\), we give most of the steps involved in reaching the expression, equation (2.6), for the expansion to second order of \(E_{\text{g}}\) around a circle. The derivative of \(\gamma\) is given by \[\dot{\theta}(t) =1\] (A.2a) \[\dot{r}(t) =\dot{\delta r}(t)=\sum_{k}a_{k}ir_{0}ke^{ir_{0}kt}\.\] (A.2b) The tangent vector field is given by \[\tau(t)=\dot{r}(t)\partial_{r}+\dot{\theta}(t)\partial_{\theta}\.\] (A.3) We need the magnitude of this vector to second order. The metric in polar coordinates is given by \(ds^{2}=dr^{2}+r^{2}d\theta^{2}\), so we have that \(|\tau(t)|^{2}=\dot{r}(t)^{2}+r(t)^{2}\) by equation (A.2a).
Substituting from equations (A.1) and (A.2b) gives \\[|\\tau(t)|^{2}=r_{0}^{2}+2r_{0}\\sum_{k}a_{k}e^{ir_{0}kt}+\\sum_{k,k^{ \\prime}}a_{k}a_{k^{\\prime}}e^{ir_{0}(k+k^{\\prime})t}(1-r_{0}^{2}kk^{\\prime})\\.\\] (A.4) Taking the square root, expanding it as \\(\\sqrt{1+x}\\approx 1+\\frac{1}{2}x-\\frac{1}{8}x^{2}\\), and keeping terms to second order in the \\(a_{k}\\) then gives \\[|\\tau(t)|=r_{0}\\left\\{1+\\sum_{k}\\frac{a_{k}}{r_{0}}e^{ir_{0}kt}- \\frac{1}{2}\\sum_{k,k^{\\prime}}a_{k}a_{k^{\\prime}}kk^{\\prime}e^{ir_{0}(k+k^{ \\prime})t}\\right\\}\\.\\] (A.5) ### Length Using equation (A.5), the boundary length is then given to second order by \\[L(\\gamma)=\\int_{-\\pi}^{\\pi}dt\\ |\\tau(t)|=2\\pi r_{0}\\left\\{1+ \\frac{a_{0}}{r_{0}}+\\frac{1}{2}\\sum_{k}k^{2}|a_{k}|^{2}\\right\\}\\,\\] where we have used the fact that \\[\\int_{-\\pi}^{\\pi}dt\\ e^{ir_{0}kt}=2\\pi\\delta(k)\\,\\] (A.6) and that \\(a_{-k}=a_{k}^{*}\\), where \\(*\\) indicates complex conjugation, because \\(\\delta r\\) is real. ### Area We can write the interior area of the region as \\[A(\\gamma)=\\int_{-\\pi}^{\\pi}d\\theta\\ \\int_{0}^{r(\\theta)}dr^{ \\prime}\\ r^{\\prime}=\\int_{-\\pi}^{\\pi}d\\theta\\ \\frac{1}{2}r^{2}(\\theta)\\.\\] Thus, using equations (A.1), and again using equation (A.6) to integrate Fourier basis elements, we have that \\[A(\\gamma)=\\pi r_{0}^{2}+2\\pi r_{0}a_{0}+\\pi\\sum_{k}|a_{k}|^{2}\\.\\] (A.7) ### Quadratic energy To compute the expansion of the quadratic term in equation (1.3) for \\(E_{\\text{g}}\\), we need the expansions of \\(\\tau(t)\\cdot\\tau(t^{\\prime})\\) and \\(\\Phi(R(t,t^{\\prime}))\\). #### a.3.1 Inner product of tangent vectors The tangent vector is given by equation (A.3), but we must take care as \\(\\tau(t)\\) and \\(\\tau(t^{\\prime})\\) live in different tangent spaces, at \\(\\gamma(t)\\) and \\(\\gamma(t^{\\prime})\\) respectively. Since parallel transport does not preserve the coordinate basis vectors \\(\\partial_{r}\\) and \\(\\partial_{\\theta}\\), it will change the components of \\(\\tau(t^{\\prime})\\), say, when we parallel transport it to the tangent space at \\(\\gamma(t)\\). It is easiest to convert the tangent vectors to the Euclidean coordinate basis, \\[\\partial_{r} =\\cos(\\theta)\\partial_{x}+\\sin(\\theta)\\partial_{y}\\] \\[\\partial_{\\theta} =-r\\sin(\\theta)\\partial_{x}+r\\cos(\\theta)\\partial_{y}\\,\\] as these basis vectors are preserved by parallel transport. Doing so, and then taking the inner product gives \\[\\tau\\cdot\\tau^{\\prime}=\\cos(\\theta^{\\prime}-\\theta)[r_{0}^{2}+r_{ 0}\\delta r+r_{0}\\delta r^{\\prime}+\\delta r\\delta r^{\\prime}+\\dot{\\delta r}\\dot {\\delta r}^{\\prime}]\\\\ +\\sin(\\theta^{\\prime}-\\theta)[r_{0}\\dot{\\delta r}^{\\prime}-r_{0} \\dot{\\delta r}+\\delta r\\dot{\\delta r}^{\\prime}-\\dot{\\delta r}\\delta r^{\\prime} ]\\.\\] where unprimed quantities are evaluated at \\(t\\) and primed quantities at \\(t^{\\prime}\\). Note that when \\(t=t^{\\prime}\\), the expression reduces to equation (A.4). 
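As a quick sanity check on the expansions obtained so far, the following Python fragment compares the exact boundary length and interior area of a perturbed circle, computed numerically, with the second-order formulas above (the area expression (A.7) is in fact exact in the \(a_{k}\)). The perturbation amplitudes and mode numbers are arbitrary test values.

```python
import numpy as np

r0 = 1.0
t = np.linspace(-np.pi, np.pi, 20001)

# a small real radial perturbation built from a few Fourier modes, a_{-k} = a_k*
modes = {1: 0.01 + 0.005j, 3: -0.004 + 0.002j}
a0 = 0.003
dr = a0 + sum(2*np.real(a * np.exp(1j*r0*k*t)) for k, a in modes.items())
dr_dot = sum(2*np.real(1j*r0*k*a * np.exp(1j*r0*k*t)) for k, a in modes.items())

r = r0 + dr
L_exact = np.trapz(np.sqrt(dr_dot**2 + r**2), t)   # L = int |tau(t)| dt
A_exact = np.trapz(0.5 * r**2, t)                  # A = int (1/2) r^2 dtheta

# second-order formulas; the sums over k run over both +k and -k, hence the factor 2
sum_k2_ak2 = 2*sum((k**2) * abs(a)**2 for k, a in modes.items())
sum_ak2 = a0**2 + 2*sum(abs(a)**2 for a in modes.values())
L_approx = 2*np.pi*r0 * (1 + a0/r0 + 0.5*sum_k2_ak2)
A_approx = np.pi*r0**2 + 2*np.pi*r0*a0 + np.pi*sum_ak2

print(L_exact, L_approx)   # agree up to third-order terms in the a_k
print(A_exact, A_approx)   # equation (A.7): agree to quadrature accuracy
```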
#### a.3.2 Distance between two points The squared distance between \\(\\gamma(t^{\\prime})\\) and \\(\\gamma(t)\\) is given by \\[|\\gamma(t^{\\prime})-\\gamma(t)|^{2} =(x(t^{\\prime})-x(t))^{2}+(y(t^{\\prime})-y(t))^{2}\\] \\[=[(r_{0}+\\delta r^{\\prime})\\cos(\\theta^{\\prime})-(r_{0}+\\delta r )\\cos(\\theta)]^{2}+[(r_{0}+\\delta r^{\\prime})\\sin(\\theta^{\\prime})-(r_{0}+ \\delta r)\\sin(\\theta)]^{2}\\,\\] which after expansion gives \\[|\\gamma(t^{\\prime})-\\gamma(t)|^{2}=2r_{0}^{2}(1-\\cos(\\Delta t))\\bigg{\\{}1+ \\frac{1}{r_{0}}(\\delta r+\\delta r^{\\prime})+\\frac{\\delta r^{2}+\\delta r^{ \\prime 2}-2\\cos(\\Delta t)\\delta r\\delta r^{\\prime}}{2r_{0}^{2}(1-\\cos(\\Delta t ))}\\bigg{\\}}\\,\\] where \\(\\Delta t=\\theta^{\\prime}-\\theta=t^{\\prime}-t\\). Expanding \\(\\sqrt{1+x}\\approx 1+\\frac{1}{2}x-\\frac{1}{8}x^{2}\\) to second order and collecting terms, we then find \\[R(t,t^{\\prime})=|\\gamma(t^{\\prime})-\\gamma(t)|=2r_{0}|\\sin(\\Delta t/2)|+|\\sin (\\Delta t/2)|(\\delta r+\\delta r^{\\prime})+\\frac{A(\\Delta t)}{4r_{0}}(\\delta r -\\delta r^{\\prime})^{2}\\,\\] (A.9) where \\(A(z)=\\bigg{(}\\frac{\\cos^{2}\\big{(}\\frac{z}{2}\\big{)}}{|\\sin\\frac{z}{2}|}\\bigg{)}\\). #### a.3.3 Interaction function Expanding \\(\\Phi(z)\\) in a Taylor series to second order, and then substituting \\(R(t,t^{\\prime})\\) for \\(z\\) using the approximation in equation (A.9), and keeping only terms up to second order in \\(\\delta\\gamma\\) then gives \\[\\Phi(R(t,t^{\\prime}))=\\Phi(X_{0})+\\left|\\sin\\frac{\\Delta t}{2} \\right|\\Phi^{\\prime}(X_{0})(\\delta r+\\delta r^{\\prime})\\\\ +\\frac{1}{4r_{0}}A(\\Delta t)\\Phi^{\\prime}(X_{0})(\\delta r-\\delta r ^{\\prime})^{2}+\\frac{1}{2}\\sin^{2}\\Bigl{(}\\frac{\\Delta t}{2}\\Bigr{)}\\Phi^{ \\prime\\prime}(X_{0})(\\delta r+\\delta r^{\\prime})^{2}\\,\\] (A.10) where \\(X_{0}=2r_{0}|\\sin(\\Delta t/2)|\\). #### a.3.4 Combining terms Now let \\(G(t,t^{\\prime})=\\tau(t)\\cdot\\tau(t^{\\prime})\\Phi(R(t,t^{\\prime}))\\). 
Combining the expressions already derived, we have \[G(t,t^{\prime})=\\ \underbrace{r_{0}^{2}\cos(\Delta t)\Phi(X_{0})}_{F_{00},\,\text{even}}\\ +(\delta r+\delta r^{\prime})\underbrace{r_{0}\cos(\Delta t)\left\{\Phi(X_{0})+r_{0}\left|\sin\frac{\Delta t}{2}\right|\Phi^{\prime}(X_{0})\right\}}_{F_{10},\,\text{even}}\\ +(\dot{\delta r}^{\prime}-\dot{\delta r})\underbrace{r_{0}\sin(\Delta t)\Phi(X_{0})}_{F_{11},\,\text{odd}}\\ +(\delta r^{2}+\delta r^{\prime 2})\underbrace{r_{0}\cos(\Delta t)\left\{\frac{1}{4}A(\Delta t)\Phi^{\prime}(X_{0})+\frac{1}{2}r_{0}\sin^{2}\Bigl(\frac{\Delta t}{2}\Bigr)\Phi^{\prime\prime}(X_{0})+\left|\sin\frac{\Delta t}{2}\right|\Phi^{\prime}(X_{0})\right\}}_{F_{20},\,\text{even}}\\ +(\delta r\delta r^{\prime})\underbrace{\cos(\Delta t)\left\{\Phi(X_{0})+2r_{0}\left|\sin\frac{\Delta t}{2}\right|\Phi^{\prime}(X_{0})-\frac{1}{2}r_{0}A(\Delta t)\Phi^{\prime}(X_{0})+r_{0}^{2}\sin^{2}\Bigl(\frac{\Delta t}{2}\Bigr)\Phi^{\prime\prime}(X_{0})\right\}}_{F_{21},\,\text{even}}\\ +(\delta r^{\prime}\dot{\delta r}^{\prime}-\delta r\dot{\delta r})\underbrace{r_{0}\left|\sin\frac{\Delta t}{2}\right|\sin(\Delta t)\Phi^{\prime}(X_{0})}_{F_{22},\,\text{odd}}\\ +(\delta r\dot{\delta r}^{\prime}-\delta r^{\prime}\dot{\delta r})\underbrace{\sin(\Delta t)\left\{\Phi(X_{0})+r_{0}\left|\sin\frac{\Delta t}{2}\right|\Phi^{\prime}(X_{0})\right\}}_{F_{23},\,\text{odd}}\\ +(\dot{\delta r}\,\dot{\delta r}^{\prime})\underbrace{\cos(\Delta t)\Phi(X_{0})}_{F_{24},\,\text{even}}\.\] where we have introduced the notation \(F_{00}\ldots F_{24}\) for the functions appearing in the terms of \(G\), and 'odd' and 'even' refer to parity under exchange of \(t\) and \(t^{\prime}\). Note that the \(F\) are functionals of \(\Phi\), and functions of \(r_{0}\) and \(t^{\prime}-t\) (but not \(t\) and \(t^{\prime}\) separately). Note also that each line, and hence \(G\), is symmetric in \(t\) and \(t^{\prime}\). The integral in the quadratic energy term is now given by \(\iint_{-\pi}^{\pi}dt\ dt^{\prime}\ G(t,t^{\prime})\). We can now substitute the expressions for \(\delta r\) and \(\dot{\delta r}\) in terms of their Fourier coefficients, \(\delta r(t)=\sum_{k}a_{k}e^{ir_{0}kt}\) and \(\dot{\delta r}(t)=\sum_{k}a_{k}ir_{0}ke^{ir_{0}kt}\). Due to the dependence of the \(F\) on \(t-t^{\prime}\) only, the resulting integrals can be reduced, via a change of variables \(p=t^{\prime}-t\), to integrals over \(p\). We note that in the terms involving \(F_{10}\), \(F_{11}\), \(F_{20}\), \(F_{22}\), and \(F_{23}\), the presence of the symmetric or antisymmetric factors in \(\delta r\) and \(\delta r^{\prime}\) simply leads to a doubling of the value of the integral for one of the terms in these factors, due to the corresponding symmetry or antisymmetry of the \(F\) functions. For example, \[\iint_{-\pi}^{\pi}dt\ dt^{\prime}\ (\delta r\dot{\delta r}^{\prime}-\dot{\delta r}\delta r^{\prime})\ F_{23}(t^{\prime}-t)=2\iint_{-\pi}^{\pi}dt\ dt^{\prime}\ \delta r\dot{\delta r}^{\prime}\ F_{23}(t^{\prime}-t)\.\] We therefore only need to evaluate one of these integrals for the relevant terms.
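Before the individual integrals are listed, the following short Python check may be useful: it verifies numerically, for an arbitrary perturbation \(\delta r\) and an arbitrary smooth \(2\pi\)-periodic test function \(F\) (not one of the \(F_{2j}\) above), that a double integral of the \(F_{21}\) type reduces to \(2\pi\sum_{k}|a_{k}|^{2}\int dp\,e^{-ir_{0}kp}F(p)\), as used repeatedly below. The test function and amplitudes are arbitrary choices.

```python
import numpy as np

r0 = 1.0
N = 400
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
dt = t[1] - t[0]

# arbitrary real perturbation delta r with a_{-k} = a_k^*
modes = {1: 0.3 + 0.1j, 2: -0.2 + 0.05j}
dr = sum(2*np.real(a * np.exp(1j*r0*k*t)) for k, a in modes.items())

# arbitrary smooth, real, 2*pi-periodic test function F(p)
F = lambda p: np.exp(np.cos(p)) * np.cos(p)

# left-hand side: double integral of dr(t) dr(t') F(t'-t)
T, Tp = np.meshgrid(t, t, indexing='ij')
lhs = np.sum(dr[:, None] * dr[None, :] * F(Tp - T)) * dt * dt

# right-hand side: 2*pi * sum_k |a_k|^2 * int dp exp(-i r0 k p) F(p)
rhs = 0.0
for k, a in modes.items():
    Fk = np.sum(np.exp(-1j*r0*k*t) * F(t)) * dt        # int e^{-i r0 k p} F(p) dp
    rhs += 2 * abs(a)**2 * 2*np.pi * np.real(Fk)        # +k and -k contribute equally
print(lhs, rhs)   # agree to discretisation accuracy
```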
Below we list the calculations for all the \\(F\\) integrals for completeness: \\[\\iint_{-\\pi}^{\\pi}dt\\ dt^{\\prime}\\ \\ F_{00}(t^{\\prime}-t) =\\iint_{-\\pi}^{\\pi}dp\\ dt^{\\prime}\\ \\ F_{00}(p)\\] \\[=2\\pi\\int_{-\\pi}^{\\pi}dp\\ \\ F_{00}(p)\\,\\] which survives because \\(F_{00}\\) is symmetric; \\[\\iint_{-\\pi}^{\\pi}dt\\ dt^{\\prime}\\ \\delta r(t)\\ F_{10}(t^{\\prime}-t) =\\iint_{-\\pi}^{\\pi}dt\\ dt^{\\prime}\\ \\sum_{k}a_{k}e^{ir_{0}kt}\\ F_{10}(t^{\\prime}-t)\\] \\[=\\sum_{k}a_{k}\\iint_{-\\pi}^{\\pi}dp\\ dt^{\\prime}\\ e^{ir_{0}k(-p+t^ {\\prime})}\\ F_{10}(p)\\] \\[=\\sum_{k}a_{k}\\int_{-\\pi}^{\\pi}dt^{\\prime}\\ e^{ir_{0}kt^{\\prime} }\\int_{-\\pi}^{\\pi}dp\\ e^{-ir_{0}kp}\\ F_{10}(p)\\] \\[=\\sum_{k}a_{k}2\\pi\\delta(k)\\int_{-\\pi}^{\\pi}dp\\ e^{-ir_{0}kp}\\ F_{ 10}(p)\\] \\[=2\\pi a_{0}\\int_{-\\pi}^{\\pi}dp\\ F_{10}(p)\\,\\] which survives because \\(F_{10}\\) is symmetric;\\[\\iint_{-\\pi}^{\\pi}dt\\:dt^{\\prime}\\:\\dot{\\delta r}(t)\\:F_{11}(t^{ \\prime}-t) =\\iint_{-\\pi}^{\\pi}dt\\:dt^{\\prime}\\:\\sum_{k}a_{k}iro_{0}ke^{ir_{0} kt}\\:F_{11}(t^{\\prime}-t)\\] \\[=\\sum_{k}a_{k}ir_{0}k\\iint_{-\\pi}^{\\pi}dp\\:dt^{\\prime}\\:e^{ir_{0} k(-p+t^{\\prime})}\\:F_{11}(p)\\] \\[=\\sum_{k}a_{k}ir_{0}k\\int_{-\\pi}^{\\pi}dt^{\\prime}\\:e^{ir_{0}kt^{ \\prime}}\\int_{-\\pi}^{\\pi}dp\\:e^{-ir_{0}kp}\\:F_{11}(p)\\] \\[=\\sum_{k}a_{k}ir_{0}k2\\pi\\delta(k)\\int_{-\\pi}^{\\pi}dp\\:e^{-ir_{0} kp}\\:F_{11}(p)\\] \\[=0\\;;\\] \\[\\iint_{-\\pi}^{\\pi}dt\\:dt^{\\prime}\\:\\delta r^{2}(t)\\:F_{20}(t^{ \\prime}-t) =\\iint_{-\\pi}^{\\pi}dt\\:dt^{\\prime}\\:\\sum_{k}\\sum_{k^{\\prime}}a_{k }a_{k^{\\prime}}e^{ir_{0}(k+k^{\\prime})t}\\:F_{20}(t^{\\prime}-t)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}\\iint_{-\\pi}^{\\pi} dp\\:dt^{\\prime}\\:e^{ir_{0}(k+k^{\\prime})(-p+t^{\\prime})}\\:F_{20}(p)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}\\int_{-\\pi}^{\\pi} dt^{\\prime}\\:e^{ir_{0}(k+k^{\\prime})t^{\\prime}}\\int_{-\\pi}^{\\pi}dp\\:e^{-ir_{0}(k+k^{ \\prime})p}\\:F_{20}(p)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}2\\pi\\delta(k+k^{ \\prime})\\int_{-\\pi}^{\\pi}dp\\:e^{-ir_{0}(k+k^{\\prime})p}\\:F_{20}(p)\\] \\[=\\sum_{k}a_{k}a_{-k}2\\pi\\int_{-\\pi}^{\\pi}dp\\:F_{20}(p)\\] \\[=2\\pi\\sum_{k}|a_{k}|^{2}\\int_{-\\pi}^{\\pi}dp\\:F_{20}(p)\\,\\] which survives because \\(F_{20}\\) is symmetric;\\[\\iint_{-\\pi}^{\\pi}dt\\:dt^{\\prime}\\:\\delta r(t)\\delta r(t^{\\prime}) \\:F_{21}(t^{\\prime}-t) =\\iint_{-\\pi}^{\\pi}dt\\:dt^{\\prime}\\:\\sum_{k}\\sum_{k^{\\prime}}a_{k}a _{k^{\\prime}}e^{ir_{0}(kt+k^{\\prime}t^{\\prime})}\\:F_{21}(t^{\\prime}-t)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}\\iint_{-\\pi}^{\\pi}dp \\:dt^{\\prime}\\:e^{ir_{0}k(-p+t^{\\prime})}e^{ir_{0}k^{\\prime}t^{\\prime}}\\:F_{21 }(p)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}\\int_{-\\pi}^{\\pi}dt^ {\\prime}\\:e^{ir_{0}(k+k^{\\prime})t^{\\prime}}\\int_{-\\pi}^{\\pi}dp\\:e^{-ir_{0}kp} \\:F_{21}(p)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}2\\pi\\delta(k+k^{ \\prime})\\int_{-\\pi}^{\\pi}dp\\:e^{-ir_{0}kp}\\:F_{21}(p)\\] \\[=\\sum_{k}a_{k}a_{-k}2\\pi\\int_{-\\pi}^{\\pi}dp\\:e^{-ir_{0}kp}\\:F_{2 1}(p)\\] \\[=2\\pi\\sum_{k}\\lvert a_{k}\\rvert^{2}\\int_{-\\pi}^{\\pi}dp\\:e^{-ir_{ 0}kp}\\:F_{21}(p)\\;;\\] \\[\\iint_{-\\pi}^{\\pi}dt\\:dt^{\\prime}\\:\\delta r(t)\\dot{\\delta r}(t) \\:F_{22}(t^{\\prime}-t) =\\iint_{-\\pi}^{\\pi}dt\\:dt^{\\prime}\\:\\sum_{k}\\sum_{k^{\\prime}}a_{k} a_{k^{\\prime}}ir_{0}ke^{ir_{0}(k+k^{\\prime})t}\\:F_{22}(t^{\\prime}-t)\\] 
\\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}ir_{0}k\\iint_{-\\pi}^ {\\pi}dp\\:dt^{\\prime}\\:e^{ir_{0}(k+k^{\\prime})(-p+t^{\\prime})}\\:F_{22}(p)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}ir_{0}k\\int_{-\\pi}^ {\\pi}dt^{\\prime}\\:e^{ir_{0}(k+k^{\\prime})t^{\\prime}}\\int_{-\\pi}^{\\pi}dp\\:e^{- ir_{0}(k+k^{\\prime})p}\\:F_{22}(p)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}ir_{0}k2\\pi\\delta(k +k^{\\prime})\\int_{-\\pi}^{\\pi}dp\\:e^{-ir_{0}(k+k^{\\prime})p}\\:F_{22}(p)\\] \\[=0\\;,\\] because with \\(k+k^{\\prime}=0\\) from the delta function, the integral becomes one over \\(F_{22}\\) only, which vanishes due to the antisymmetry of \\(F_{22}\\);\\[\\iint\\!\\!\\!\\int_{-\\pi}^{\\pi}dt\\,dt^{\\prime}\\;\\delta r(t)\\dot{\\delta r }(t^{\\prime})\\;F_{23}(t^{\\prime}-t) =\\iint\\!\\!\\!\\int_{-\\pi}^{\\pi}dt\\,dt^{\\prime}\\;\\sum_{k}\\sum_{k^{ \\prime}}a_{k}a_{k^{\\prime}}ir_{0}k^{\\prime}e^{ir_{0}(kt+k^{\\prime}t^{\\prime})} \\;F_{23}(t^{\\prime}-t)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}ir_{0}k^{\\prime} \\iint\\!\\!\\!\\int_{-\\pi}^{\\pi}dp\\,dt^{\\prime}\\;e^{ir_{0}(k(-p+t^{\\prime})+k^{ \\prime}t^{\\prime})}\\;F_{23}(p)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}ir_{0}k^{\\prime} \\int_{-\\pi}^{\\pi}dt^{\\prime}\\;e^{ir_{0}(k+k^{\\prime})t^{\\prime}}\\int_{-\\pi}^{ \\pi}dp\\,e^{-ir_{0}kp}\\;F_{23}(p)\\] \\[=\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}ir_{0}k^{\\prime}2 \\pi\\delta(k+k^{\\prime})\\int_{-\\pi}^{\\pi}dp\\,e^{-ir_{0}kp}\\;F_{23}(p)\\] \\[=-2\\pi\\sum_{k}]a_{k}|^{2}ir_{0}k\\int_{-\\pi}^{\\pi}dp\\,e^{-ir_{0}kp} \\;F_{23}(p)\\;;\\] \\[\\iint\\!\\!\\!\\int_{-\\pi}^{\\pi}dt\\,dt^{\\prime}\\;\\dot{\\delta r}(t) \\dot{\\delta r}(t^{\\prime})\\;F_{24}(t^{\\prime}-t) =\\iint\\!\\!\\!\\int_{-\\pi}^{\\pi}dt\\,dt^{\\prime}\\;\\sum_{k}\\sum_{k^{ \\prime}}a_{k}a_{k^{\\prime}}i^{2}r_{0}^{2}kk^{\\prime}e^{ir_{0}(kt+k^{\\prime}t^ {\\prime})}\\;F_{24}(t^{\\prime}-t)\\] \\[=-\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}r_{0}^{2}kk^{\\prime }\\iint\\!\\!\\!\\int_{-\\pi}^{\\pi}dp\\,dt^{\\prime}\\;e^{ir_{0}(k(-p+t^{\\prime})+k^{ \\prime}t^{\\prime})}\\;F_{24}(p)\\] \\[=-\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}r_{0}^{2}kk^{\\prime }\\int_{-\\pi}^{\\pi}dt^{\\prime}\\;e^{ir_{0}(k+k^{\\prime})t^{\\prime}}\\int_{-\\pi}^ {\\pi}dp\\,e^{-ir_{0}kp}\\;F_{24}(p)\\] \\[=-\\sum_{k}\\sum_{k^{\\prime}}a_{k}a_{k^{\\prime}}r_{0}^{2}kk^{\\prime }2\\pi\\delta(k+k^{\\prime})\\int_{-\\pi}^{\\pi}dp\\,e^{-ir_{0}kp}\\;F_{24}(p)\\] \\[=2\\pi\\sum_{k}|a_{k}|^{2}r_{0}^{2}k^{2}\\int_{-\\pi}^{\\pi}dp\\;e^{-ir_ {0}kp}\\;F_{24}(p)\\;.\\] Using these results then gives equation (2.5), which in combination with equations (2.3) and (2.4), gives equation (2.6). ## References * [1] H.E. Andersen, S.E. Reutebuch, and G.F. Schreuder. Automated individual tree measurement through morphological analysis of a LIDAR-based canopy surface model. In _Proc. of the \\(1^{st}\\) International Precision Forestry Symposium_, pages 11-21, Seattle, Washington, USA, June 2001. * [2] T. Brandtberg and F. Walter. Automated delineation of individual tree crowns in high spatial resolution aerial images by multiple-scale analysis. _Machine Vision and Applications_, (2):64-73, 1998. * [3] V. Caselles, F. Catte, T. Coll, and F. Dibos. A geometric model for active contours. _Numerische Mathematik_, 66:1-31, 1993. * [4] V. Caselles, R. Kimmel, and G. Sapiro. Geodesic active contours. _International Journal of Computer Vision_, 22(1):61-79, 1997. * [5] Y. Chen, H.D. Tagare, S. 
Thiruvenkadam, F. Huang, D. Wilson, K.S. Gopinath, R.W. Briggs, and E.A. Geiser. Using prior shapes in geometric active contours in a variational framework. _International Journal of Computer Vision_, 50(3):315-328, 2002. * [6] Y. Choquet-Bruhat, C. DeWitt-Morette, and M. Dillard-Bleick. _Analysis, Manifolds and Physics_. Elsevier Science, Amsterdam, The Netherlands, 1996. * [7] L.D. Cohen. On active contours and balloons. _CVGIP: Image Understanding_, 53:211-218, 1991. * [8] L.D. Cohen and R. Kimmel. Global minimum for active contour models: A minimal path approach. _International Journal of Computer Vision_, 24(1):57-78, August 1997. * [9] D. Cremers and S. Soatto. A pseudo-distance for shape priors in level set segmentation. In _Proceedings of the 2nd IEEE Workshop on Variational, Geometric and Level Set Methods_, pages 169-176, Nice, France, 2003. * [10] D. Cremers, F. Tischhauser, J. Weickert, and C. Schnorr. Diffusion snakes: Introducing statistical shape knowledge into the Mumford-Shah functional. _International Journal of Computer Vision_, 50(3):295-313, 2002. * [11] D. Cremers, T. Kohlberger, and C. Schnorr. Shape statistics in kernel space for variational image segmentation. _Pattern Recognition_, 36(9):1929-1943, September 2003. * [12] D. Cremers, S. Osher, and S. Soatto. Kernel density estimation and intrinsic alignment for knowledge-driven segmentation: Teaching level sets to walk. In C. Rasmussen _et al._, editor, _Proc. Patt. Rec._, volume 3175 of _Lecture Notes in Computer Science_, pages 36-44, Tubingen, Germany, 2004. * [13] A. Foulonneau, P. Charbonnier, and F. Heitz. Geometric shape priors for region-based active contours. _Proc. IEEE International Conference on Image Processing (ICIP)_, 3:413-416, 2003. * [14] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 6:721-741, 1984. * [15] F. A. Gougeon. Automatic individual tree crown delineation using a valley-following algorithm and rule-based system. In D.A. Hill and D.G. Leckie, editors, _Proc. Int'l Forum on Automated Interpretation of High Spatial Resolution Digital Imagery for Forestry_, pages 11-23, Victoria, British Columbia, Canada, February 1998. * [16] F.A. Gougeon. A crown-following approach to the automatic delineation of individual tree crowns in high spatial resolution aerial images. _Canadian Journal of Remote Sensing, 21(3)_, pages 274-284, 1995. * [17] U. Grenander. _General Pattern Theory_. Oxford University Press, Oxford, UK, 1993. * [18] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. _International Journal of Computer Vision_, 1(4):321-331, 1988. * [19] M. Larsen. Finding an optimal match window for Spruce top detection based on an optical tree model. In D.A. Hill and D.G. Leckie, editors, _Proc. of the International Forum on Automated Interpretation of High Spatial Resolution Digital Imagery for Forestry_, pages 55-66, Victoria, British Columbia, Canada, February 1998. * [20] M. Larsen. Individual Tree Top Position Estimation by Template Voting. In _Proc. of the Fourth International Airborne Remote Sensing Conference and Exhibition / \\(21^{st}\\) Canadian Symposium on Remote Sensing_, volume 2, pages 83-90, Ottawa, Ontario, June 1999. * [21] M.E. Leventon, W.E.L. Grimson, and O. Faugeras. Statistical shape influence in geodesic active contours. In _Proc. 
IEEE Computer Vision and Pattern Recognition (CVPR)_, volume 1, pages 316-322, Hilton Head Island, South Carolina, USA, 2000. * [22] D.N. Metaxas. _Physics-based Deformable Models: Applications to Computer Vision, Graphics and Medical Imaging_. Kluwer, 1997. * [23] M. I. Miller and L. Younes. Group actions, homeomorphisms, and matching: A general framework. _International Journal of Computer Vision_, 41:61-84, 2002. * [24] M. I. Miller, U. Grenander, J. A. O'Sullivan, and D. L. Snyder. Automatic target recognition organized via jump-diffusion algorithms. _IEEE Transactions on Image Processing_, 6(1):157-174, January 1997. * [25] S. Osher and J. A. Sethian. Fronts propagating with curvature dependent speed: Algorithms based on Hamilton-Jacobi formulations. _Journal of Computational Physics_, 79(1):12-49, 1988. * [26] N. Paragios and M. Rousson. Shape priors for level set representations. In _Proc. European Conference on Computer Vision (ECCV)_, pages 78-92, Copenhagen, Denmark, 2002. * [27] G. Perrin, X. Descombes, and J. Zerubia. Tree crown extraction using marked point processes. In _Proc. European Signal Processing Conference (EUSIPCO)_, Vienna, Austria, September 2004. * [28] G. Perrin, X. Descombes, and J. Zerubia. A marked point process model for tree crown extraction in plantations. In _Proc. IEEE International Conference on Image Processing (ICIP)_, Genova, Italy, September 2005. * [29] M. Rochery, I. H. Jermyn, and J. Zerubia. Higher order active contours and their application to the detection of line networks in satellite imagery. In _Proc. IEEE Workshop Variational, Geometric and Level Set Methods in Computer Vision_, at ICCV, Nice, France, October 2003. * [30] M. Rochery, I. H. Jermyn, and J. Zerubia. Higher order active contours. Research Report 5656, INRIA, France, August 2005. * [31] M. Rochery, I. H. Jermyn, and J. Zerubia. Higher-order active contours. _International Journal of Computer Vision_, 69(1):27-42, 2006. URL [http://dx.doi.org/10.1007/s11263-006-6851-y](http://dx.doi.org/10.1007/s11263-006-6851-y). * [32] G. Sundaramoorthi and A. Yezzi. More-than-topology-preserving flows for active contours and polygons. In _Proc. IEEE International Conference on Computer Vision (ICCV)_, pages 1276-1283, Beijing, China, 2005.
# Self-imaging silicon Raman amplifier Varun Raghunathan1, Hagen Renner2, Robert R. Rice3 and Bahram Jalali1 ###### 1 Electrical Engineering Department, UCLA, 420 Westwood Plaza, Los Angeles, CA 90095-1594, USA; 2Technische Universitat Hamburg-Harburg, 21071 Hamburg, Germany 3Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278, USA [email protected] (190.5650) Raman effect, (190.4380) Nonlinear optics, (130.3120) Integrated optics devices, (230.7370) Waveguides. ## References * [1] G.P. Agrawal, \"Non linear fiber optics\", Academic Press San Diego (2001). ISBN: 0-12-045143-3. * [2] K. Suto, T. Kimura, T. Saito, J. Nishizawa; \"Raman amplification in GaP-Al\\({}_{x}\\)Ga\\({}_{1.x}\\)P waveguides for light frequency discrimination,\" IEE Proc.-Optoelectron. 145, 105-108 (1998). * [3] R. Claps, D. Dimitropoulos, B. Jalali, \"Stimulated Raman scattering in silicon Waveguides,\" IEE Electron. Lett. 38, 1352-1354 (2002). * [4] H. M. Pask, \"The design and operation of solid-state Raman lasers,\" Progress in Quantum Electronics, 27, pp. 3-56, (2003). * [5] N. Bloembergen, \"Multimode effects in stimulated Raman emission,\" Phys. Rev. Lett. 13, 720-724 (1964). * [6] P. Lallemand and N. Bloembergen, \"Multimode effects in the gain of Raman amplifiers and oscillators I. Oscillators,\" Appl. Phys. Lett. 6, 210-212 (1965). * [7] P. Lallemand and N. Bloembergen, \"Multimode effects in the gain of Raman amplifiers and oscillators II. Amplifiers,\" Appl. Phys. Lett. 6, 212-2123 (1965). * [8] Baek, S. H., and Roh, W. B., \"Single-mode Raman fiber laser based on a multimode fiber,\" Opt. Lett. 29, 153-155 (2004). * [9] L.B. Soldana and E.C.M. Pennings, \"Optical multi-mode interference devices based on self-imaging: Principles and Applications,\" IEEE Journ. of light. Tech. 13, 615-627 (1995). * [10] Baker, H. J., Lee, J. R. and Hall, D. R., \"Self-imaging and high-beam-quality operation in multi-mode planar waveguide optical amplifiers,\" Opt. Express 10, 297-302 (2002). * [11] I.T. McKinnie, J.E. Koroshetz, W.S. Pelouch, D.D. Smith, J.R. Unternahrer, and S.W. Henderson, \"Self-imaging waveguide Nd:YAG laser with 58% slope efficiency,\" Conference on Lasers and Electro-Optics (CLEO), CTuP2, (2002). * [12] M.S. Salisbury, P.F. McManamon, B.D. Duncan, \"Optical-fiber preamplifiers for ladar detection and associated measurement for improving the signal-to-noise ratio,\" Opt. Eng. V. 33, 4023-4032 (1994). * [* [13] L.K. Calmes, J.T. Murray, W.L. Austin, R.C. Powell, \"Solid state Raman image amplifier,\" Proc. Of SPIE. v. 3382, 57-67 (1998). * [14] A. Kier (Ed.), \"Mid infrared semiconductor optoelectronics,\" Springer series in optoelectronics (2006). * [15] V. Raghunathan, R. Shori, O.M. Stafsudd, B. Jalali, \"Nonlinear absorption in silicon and the prospects of mid-infrared Silicon Raman laser,\" Physica status solidi (a), 203, R38-R40 (2006). * [16] S.J. Garth and R.A. Sammut, \"Theory of stimulated Raman scattering in two-mode optical fibers,\" J. Opt. Soc. Am. B. v. 10, 2040-2047 (1983). * [17] B. Jalali, S. Yegnanarayanan, T. Yoon, T. Yoshimoto, I. Redina, F. Coppinger, \"Advances in silicon-on-insulator optoelectronics,\" IEEE J Sel Top Quant 4, 938-947 (1998). * [18] W.C. Hurlburt, K.L. Vodopyanov, P.S. Kuo, M.M. Fejer, Y.S. Lee, \"Multiphoton absorption and nonlinear refraction of GaAs in the mid infrared,\" Conference on Lasers and Electro-Optics (CLEO/QELS), QThM3 (2006). * [19] A. Zajac, M. Skorczakowski, Jacek Swiderski, P. 
Nyga, \"Electrooptically Q-switched mid-infrared Er:YAG laser for medical applications,\" Opt. Express, v. 12, 5125-5130 (2004). * [20] A.E. Siegman, \"How to (maybe) measure laser beam quality,\" Tutorial OSA annual meeting (1997). * [21] A. Brignon, G. Feugnet, J.P. Huignard and J.P. Pocholle, \"Large-field-of-view, high-gain, compact diode-pumped Nd:YAG amplifier,\" Opt. Lett. v. 22, 1421-1423 (1997). * [22] L. Raddatz, I. H. White, D. G. Cunningham, and M. C. Norwell, \"Influence of restricted mode excitation on bandwidth of multimode fiber links,\" IEEE Photon. Technol. Lett., 10, 534-536 (1998).

## 1 Introduction

The stimulated Raman scattering effect has been used successfully to realize amplifiers and lasers in various solid state media including optical fibers [1], semiconductors such as GaP [2], silicon [3], and more exotic solid state media such as Ba(NO\({}_{3}\))\({}_{2}\), LiIO\({}_{3}\), and KGd(WO\({}_{4}\))\({}_{2}\) [4]. The Raman process in general can achieve high gains by using suitable interaction lengths and modal area without the need for complicated phase-matching schemes as in the case of the Optical Parametric Oscillator (OPO) and Optical Parametric Amplifier (OPA). However, for high power applications, direct power-scaling of Raman amplifiers by increasing the pump power, keeping the modal area constant, is challenged by: (i) the lack of high power pump sources with good beam quality, and (ii) the possibility of damaging the medium at high optical intensities. Problems also exist in bulk media, where increasing the pump power results in beam distortion due to thermal lensing and self focusing. These problems can be addressed by concomitant scaling of the modal area with the increase in pump power. The scaling of pump power / modal area invariably results in multiple spatial modes in waveguide structures. The Raman interactions in the presence of multiple spatial modes were first studied by Bloembergen et al. [5-7], who described interesting effects such as increased Raman gain in an amplifier due to interaction of higher order pump and Stokes modes, and the so-called Raman beam clean-up effect. More recently the beam clean-up effect has been observed in bulk Raman crystals [4] and in multimode optical fibers [8]. Certain multimode waveguide structures also exhibit the Talbot self-imaging effect, on account of constructive interference among the various waveguide modes at periodic intervals along the length [9]. This phenomenon also takes place when the multimode waveguide is comprised of an optically amplifying medium. In such multimode waveguides with an active gain medium, the input electric field distribution is amplified and replicated at the focal points, which correspond to full Talbot planes. These amplifier configurations offer the benefits of good beam quality at the focal points along with better light-gain medium interaction and also reduction of deleterious effects such as self-focusing and thermal lensing. So far, the use of amplifiers with Talbot imaging has been restricted to rare-earth-doped solid state media, with top or side pumping using diodes [10,11]. In this paper we propose a multimode silicon waveguide Raman amplifier that consists of collinearly propagating mid infrared pump and Stokes beams. The waveguide amplifies and images the spatial profile of an input beam.
We describe a coupled-mode analysis that includes the conventional Raman amplification and the four-wave mixing that occurs between spatial pump and Stokes modes due to the Raman nonlinearity. We find that the conventional Raman amplification term and the Raman spatial FWM (RS-FWM) start to distort the amplified image beyond a certain waveguide length or pump power. The prospects of using this device as an image preamplifier in MWIR Laser Radar (LADAR) are discussed. Image amplifiers in the near-infrared region (\(\sim\)1-1.5um) have been implemented in the past using rare earth doped fibers or other solid state media [12, 13]. The recent progress made in MWIR laser sources and the richness of the spectral signatures of molecules at these wavelengths have opened up new applications of MWIR sources in LADAR and standoff chemical detection [14]. The range and sensitivity of these remote sensing applications can be improved by the use of an optical preamplifier before signal detection. The requirements imposed on a gain medium for the image amplifier include low linear and nonlinear optical loss at both the pump and Stokes wavelengths, high optical damage threshold, high Raman gain coefficient, high thermal conductivity and the availability of large samples with high crystal quality. A material that meets all these criteria in the MWIR is single crystal silicon. In all respects, silicon is an excellent choice for high-power MWIR applications, especially since the problems of two-photon excitation and free-carrier absorption are eliminated when the pump wavelength is longer than the two-photon absorption edge of 2.3 um [15]. Such a technology will also benefit from the mature silicon fabrication processes. Moreover, the device offers large field-of-view image amplification due to the high numerical aperture of the multimode silicon waveguide.

## 2 Modeling of a multimode Raman amplifier

Multimode interference in passive waveguides has applications in optical splitters and couplers, a detailed account of which can be found in ref [9]. The multimode silicon waveguides analyzed in this work consist of a silicon thin film structure as shown in figure 1. The waveguide width is \(a=80\mu\)m and its thickness is \(b=50\mu\)m. Under symmetric (on-axis) launch with a 40um beam, the optical power is coupled mainly (\(\sim\)98%) into the fundamental mode in the Y-direction, and the multimode nature of the waveguide is considered in the X-direction. This simplifies the numerical solutions and helps to elucidate the key features of the device, without loss of generality. The analysis can be naturally extended to other waveguide dimensions. The orthonormal eigen-modes of the 2-D planar waveguide are taken to be of sinusoidal form: \[\phi_{mn}=\sqrt{\frac{4Z}{ab}}\,\sin\left(\frac{m\pi x}{a}\right)\sin\left(\frac{n\pi y}{b}\right),\hskip 28.452756pt0<x<a\;\text{and}\;0<y<b \tag{1}\] with mode index (\(m\),\(n\)) being integers \(\geq 1\) in general. \(Z\) is the impedance of the medium. This mode profile assumes that there is no evanescent tail of the mode present in the cladding region, which is approximately true for large core (multimode) high-index contrast silicon waveguides.
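As a concrete illustration of this mode basis, the short Python sketch below discretises the modes of equation (1) (dropping the impedance factor \(Z\), which cancels in normalised overlaps), verifies their orthonormality, and projects an on-axis Gaussian launch field onto the lowest symmetric X-modes with \(n=1\). The launch width convention (a 40 um 1/e\(^{2}\) intensity diameter, i.e. \(w=10\) um in the notation of equation (2) below) and the grid are assumptions, so the printed modal power fractions are only indicative.

```python
import numpy as np

# geometry from the text (units: um); Z dropped since it cancels in fractions
a, b = 80.0, 50.0
Nx, Ny = 801, 501
x = np.linspace(0.0, a, Nx)
y = np.linspace(0.0, b, Ny)
X, Y = np.meshgrid(x, y, indexing='ij')
dx, dy = x[1] - x[0], y[1] - y[0]

def phi(m, n):
    # equation (1) without the impedance factor
    return np.sqrt(4.0/(a*b)) * np.sin(m*np.pi*X/a) * np.sin(n*np.pi*Y/b)

# orthonormality check: <phi_11, phi_11> ~ 1 and <phi_11, phi_31> ~ 0
print(np.sum(phi(1, 1)**2)*dx*dy, np.sum(phi(1, 1)*phi(3, 1))*dx*dy)

# on-axis Gaussian launch; w = 10 um gives a 4w = 40 um 1/e^2 intensity diameter
w = 10.0
psi = np.exp(-((X - a/2)**2 + (Y - b/2)**2) / (4.0*w**2))
P_in = np.sum(psi**2) * dx * dy

# modal power fractions |A_mn|^2 / P_in for the lowest symmetric X-modes, n = 1
for m in (1, 3, 5):
    A = np.sum(phi(m, 1) * psi) * dx * dy
    print(f"m={m}, n=1: {A**2 / P_in:.4f}")
```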
For ease of analysis, input pump and Stokes beams are assumed to be plane-waves with transverse Gaussian profile as shown below: \[\psi_{in}=\frac{\sqrt{P\,Z}}{w\sqrt{2\pi}}\,\exp\left(\frac{-(x-a/2)^{2}}{4w^{2}}\right)\exp\left(\frac{-(y-b/2)^{2}}{4w^{2}}\right)\exp(j\theta) \tag{2}\] \(P\) refers to the laser power; the beams are assumed to be launched with a 1/e\({}^{2}\) beam radius of \(2w\), centered with respect to the X and Y axes (at \(a/2\) and \(b/2\) respectively). The input phase factor is taken as \(\theta\). The evolution of the optical field along the waveguide is written as: \[\psi(z)=\sum_{m}\sum_{n}A_{mn}(z)\,\phi_{mn}\,e^{j\beta_{mn}z} \tag{3}\] where \(A_{mn}\) and \(\beta_{mn}\) are the mode coefficient and propagation constant for the mode (\(m\),\(n\)). The multimode imaging phenomenon arises due to the periodic constructive interference of the modes in the waveguide at the _focal points_, where the input profile is essentially reproduced. The focal points occur periodically at intervals [9]: \[L_{image}\approx p\,\frac{4\,n_{0}a^{2}}{\lambda_{0}} \tag{4}\] where even and odd values of \(p\) lead to original and reversed images of the input beam respectively. The self-imaging points which reproduce the original image are referred to as focal points in this paper. For the passive device (i.e. no gain), the mode coefficients \(A_{mn}\) are essentially constant along the waveguide. However, in the case of the active device (i.e. for a Raman amplifier) to be considered next, the evolution of the mode coefficients along the waveguide must also be accounted for. Table 1 lists the parameters used in the simulations. It was found that for the launch condition considered, 11 modes along the X-direction and 1 mode along the Y-direction accounted for \(\sim\) 98% of the coupled optical power and were sufficient for the purpose of this simulation. Based on the parameters listed in Table 1, the imaging lengths at the pump and Stokes wavelengths computed using Eq. (4) are \(\sim\) 6cm and 5.07cm respectively. The imaging lengths are slightly different due to chromatic dispersion.

\begin{table} \begin{tabular}{c c} \hline **Parameters** & **Value** \\ \hline Input beam profile & 1/e\({}^{2}\) diameter of the Gaussian beam = 40\(\mu\)m \\ Waveguide dimensions & a = 80\(\mu\)m, b = 50\(\mu\)m \\ Simulation grid size & \(\Delta\)x = 1\(\mu\)m, \(\Delta\)y = 2\(\mu\)m, \(\Delta\)z = 100\(\mu\)m \\ Waveguide modes & X-direction: 11 \\ & Y-direction: 1 \\ n – (Sellmeier model) & \\ (\(\lambda_{\mathrm{g}}\) bandgap in \(\mu\)m) & \(n^{2}\) = 11.6858 + \(\frac{0.939816}{\lambda^{2}}+\frac{8.1\times 10^{-3}\,\lambda_{\mathrm{g}}^{2}}{\lambda^{2}-\lambda_{\mathrm{g}}^{2}}\) \\ Wavelengths & Pump: 2.936\(\mu\)m \\ & Stokes: 3.466\(\mu\)m \\ Raman susceptibility & 1.6x10\({}^{-18}\) m\({}^{2}\)/V\({}^{2}\) \\ Electronic susceptibility & 0.5x10\({}^{-18}\) m\({}^{2}\)/V\({}^{2}\) \\ \hline \end{tabular} \end{table} Table 1: List of parameters used throughout this paper and the values of these parameters used in these simulations.
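The quoted imaging lengths can be reproduced directly from equation (4) and the Sellmeier model of Table 1, as in the short Python check below. The table does not give a numerical value for the bandgap wavelength, so the standard silicon value \(\lambda_{g}=1.1071\) um is assumed here; with \(p=2\) (the first non-inverted image) the calculation returns approximately 6.0 cm at the pump and 5.07 cm at the Stokes wavelength, consistent with the text.

```python
import numpy as np

a = 80e-6          # waveguide width, m
lam_g = 1.1071     # bandgap wavelength in um (assumed standard silicon value)

def n_silicon(lam_um):
    # Sellmeier model of Table 1, wavelengths in um
    n2 = 11.6858 + 0.939816/lam_um**2 + 8.1e-3*lam_g**2/(lam_um**2 - lam_g**2)
    return np.sqrt(n2)

for label, lam_um in (("pump", 2.936), ("Stokes", 3.466)):
    n0 = n_silicon(lam_um)
    lam0 = lam_um * 1e-6
    L_image = 2 * 4 * n0 * a**2 / lam0     # equation (4) with p = 2
    print(f"{label}: n0 = {n0:.4f}, L_image = {100*L_image:.2f} cm")
```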
Both lead to a trade-off between the achievable gain and the reproducibility and quality of the Stokes image at focal points. First, with increasing length and pump power the Stokes modes which have highest overlap with the pump modes experience higher gains compared to the other Stokes modes. This leads to preferential amplification of certain Stokes modes and hence distortion of the image. Secondly, the RS-FWM effect changes the phases of Stokes mode coefficients and further affects the image. The RS-FWM effect has another implication as can be seen in the 2\\({}^{\\mathrm{nd}}\\) term on the right side of Eq. (5.1), when multiple free running pump lasers are used to provide high pump powers, the transfer of random pump phases to the amplified Stokes modes can also degrade the image quality. For MWIR applications considered here, the pump and Stokes photons are below the two-photon bandedge, hence two-photon absorption and the concomitant free-carrier losses are nonexistent [15]. Silicon being indirect bandgap is expected to have insignificant three-photon absorption when compared to other shorter bandgap semiconductors such as GaAs [18]. The coupled mode equations presented above were solved numerically using a finite difference algorithm. The simulator grid size in the x, y and z directions are taken to be 1\\(\\upmu\\)m, 2\\(\\upmu\\)m and 100\\(\\upmu\\)m respectively. For the purpose of studying the Raman amplification process, the input pump was considered to be 1KW peak power or 1mW average power at the wavelength of 2.936\\(\\upmu\\)m. This may be achievable in solid-state laser systems under quasi-CW conditions at energy levels of \\(\\sim\\)100\\(\\upmu\\)J over 100nsec pulse [19]. The input peak intensity of the pump is \\(\\sim\\)25MW/cm\\({}^{2}\\) and is comparable to the intensities used in the near infrared to study Raman scattering in silicon. The multimode Raman amplifier presented in this work is power scalable and the Raman amplification process can be made to work at lower or higher power levels through appropriate waveguide dimension scaling. This would also change the imaging length as given by Eq. (4). The input Stokes beam is assumed to be at 1\\(\\upmu\\)W at 3.466\\(\\upmu\\)m wavelength. Figure 2 shows the electric-field contour (X-Z profile) of the pump and Stokes as they propagate along the waveguide. The amplification of the Stokes is clearly noticeable along with the self-imaging effect. The pump is not significantly depleted due to the weak input Stokes beam. It is found that the pump and Stokes fields self-image much more frequently than the focal length as calculated using eq. (4). This is due to the fact that we have considered a symmetrical input field in this example. In this case, the image will reproduce itself 8 times within the focal length of a general input field, L\\({}_{\\mathrm{image}}\\), described by Eq. (4) [9]. Figure 3 shows the small signal Raman gain that can be achieved along the waveguide for varying waveguide propagation losses. The vertical dashed lines denote the location of the 1\\({}^{\\mathrm{st}}\\) and 2\\({}^{\\mathrm{nd}}\\) focal points. For the case of zero loss, Raman gain of 10dB is reached at the first focal point for peak pump power levels of \\(\\sim\\)1KW. For such large area silicon waveguide, losses are expected to be on the order of 0.1dB/cm [17]. 
As seen in this figure, for the propagation losses considered here, 0.1dB/cm, 0.2dB/cm and 0.5dB/cm, the gain at the first imaging length are 9dB, 8dB, and 5dB, respectively. The impact of linear losses need to be explained. If the linear losses increase with the mode number then a spatial low-pass filtering will occur as the amplitudes of the higher modes are diminished, while their phases remain unaffected. This would be the case if losses occur primarily on the waveguide surfaces. In addition, propagation losses also impact the image quality by diminishing the contribution of the RS-FWM effect as the image propagates along the waveguide. Figure 3: Evolution of small-signal Raman gain along the length of the multimode silicon Raman amplifier. The input pump and Stokes power are taken to be 1KW peak (1mW average) and 1\\(\\upmu\\)W respectively. Figure 2: Contour profile of the electric field amplitude (X-Z profile) showing the self-imaging Raman amplifier with the evolution of the pump and Stokes along the length of the multimode silicon waveguide. A single pump and Stokes Gaussian beam is launched into the waveguide. Pump power coupled into the waveguide is 1KW peak (1mW average) and Stokes power is 1\\(\\upmu\\)W. To assess the impact of Raman interaction on the self imaging phenomenon, we have computed the well known \\(M^{2}\\) parameter for the beam profile, and its evolution along the waveguide. Following the known procedure [20], the \\(M^{2}\\) parameter was computed by taking the Fourier transform of the transversal beam profile. The M\\({}^{2}\\) parameter indicates deviations from a Gaussian profile with \\(M^{2}\\)=1 corresponding to an ideal Gaussian beam, such as the input profile considered in the present case. Figure 4 shows the \\(M^{2}\\) parameter for the Stokes beams for the following cases: (a) Stokes beam propagating in a passive waveguide without Raman amplification (no pump) and (b) in the presence of Raman amplification with input pump intensities of 25 and 50 MW/cm\\({}^{2}\\). The input beam is found to reproduce itself periodically along the waveguide with \\(M^{2}\\) reaching it minima at focal points. In the absence of Raman interactions (Figure 4(a)) self imaging is perfect with \\(M^{2}\\)=1 reproduced at all focal points. In the presence of Raman amplification however, the image begins to be distorted while it is amplified along the waveguide (figure 4(b)). This distortion in image quality is found to be more significant with increase in pump power and waveguide length. The slight increase in \\(M^{2}\\) at focal points (leading to loss of image quality) and the overall reduction in its average value are due to the preferential amplification of the fundamental mode. The periodicity of M\\({}^{2}\\) along the waveguide length is also modulated by ripples due to the RS-FWM terms in Eqs. (5.1) and (5.2) by contributing addition phase factors to the mode coefficients. Thus, it is clear that there exists a tradeoff between the amount of gain that can be achieved and the image distortion. From figures 3 and 4 it is found that gains close to 10dB can be achieved in waveguide lengths of ~5cm with minimal image distortion. ## 3 Image Amplification The range and sensitivity of LADAR (Laser Detection And Ranging) and remote sensing systems can be improved by the use of optical preamplifiers before detection. 
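As an aside on the beam-quality metric used above, the sketch below shows one way to estimate M² for a one-dimensional field profile from second moments in space and in spatial frequency (M² = 2·σx·σk, which equals 1 for an ideal Gaussian). This second-moment recipe is an assumption made here for illustration; the exact procedure of Ref. [20] followed in the paper is not reproduced in the text.

```python
import numpy as np

def m_squared_1d(E, dx):
    """Second-moment M^2 estimate of a complex 1-D field sampled on a uniform grid."""
    xs = dx * np.arange(E.size)
    I = np.abs(E) ** 2
    x0 = np.sum(xs * I) / np.sum(I)
    sig_x = np.sqrt(np.sum((xs - x0) ** 2 * I) / np.sum(I))

    Ek = np.fft.fftshift(np.fft.fft(E))
    k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(E.size, d=dx))
    Ik = np.abs(Ek) ** 2
    k0 = np.sum(k * Ik) / np.sum(Ik)
    sig_k = np.sqrt(np.sum((k - k0) ** 2 * Ik) / np.sum(Ik))
    return 2.0 * sig_x * sig_k        # 1.0 for an ideal Gaussian beam

# Gaussian test field in the Eq. (2) convention: 1/e^2 intensity radius 2w, i.e. w = 10 um
# for the 40 um diameter launch of Table 1. Grid spacing and window are arbitrary choices.
dx = 0.25e-6
xs = dx * (np.arange(4096) - 2048)
w = 10e-6
E = np.exp(-xs ** 2 / (4.0 * w ** 2))
print("M^2 of the Gaussian test beam:", round(m_squared_1d(E, dx), 3))   # close to 1.0
```

Applied to simulated Stokes profiles along the propagation direction, an estimator of this kind yields beam-quality curves of the sort plotted in Figure 4.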
So far, efforts to implement optical image preamplifiers have been mainly focused on near infrared LADAR systems using doped fiber, YAG-based amplifiers and Barium Nitrate Raman medium [12, 21, 13]. There is increasing interest in LADAR systems operating in the MWIR due to strong vibrational resonance of molecules in this range and the recent availability of efficient MWIR laser sources [14]. Moreover, the solid-state Raman media used previously for image amplification consisted of a bulk crystal with no waveguiding [13]. The lack of waveguiding, and hence self imaging, limits the interaction length before image quality is deteriorated. The waveguide implementation introduced here increases the length over which image quality can be maintained and also reduces the threshold pump power through confinement of the pump mode. The use of silicon as the active medium also provides many advantages such as a high Raman coefficient, high thermal conductivity, high optical damage threshold, and mature fabrication technology. To assess the performance of the device as an image amplifier, simple test images were launched into the device input and its evolution through the waveguide was computed using the methodology described above. Fig. 5 shows the cross-sectional amplitude profiles at the input and output at the first imaging length. For the purpose of this simulation we have considered 20 waveguide modes along the X-direction and 1 mode along the Y-direction. We emphasize the transversal profile is shown here as opposed to the longitudinal view shown in Fig. 2. A pump beam with 1KW peak (1mW average) power is assumed to be launched centered with respect to the waveguide. The periodic patterns considered here are 50, 100 and 200 lines per mm ruling. The center portion of the image appears brighter than the edges due to the preferential amplification of the fundamental Stokes modes. The deterioration is more severe when the image contains higher spatial frequencies. Nonetheless the rulings are clearly resolvable at amplification level (~10dB gain). The preferential amplification of the fundamental Stokes mode leading to loss of image quality can be eliminated using selective pump mode excitation as described below. Figure 4: The beam quality (M\\({}^{2}\\)) of the Stokes beam calculated along the waveguide. The image periodically repeats itself due to the self-imaging property of the multimode waveguide. (a) Stokes beam propagating through a passive waveguide with no pump launched and (b) Stokes beam propagating through an active waveguide Raman amplifier with input pump intensities of 25MW/cm\\({}^{2}\\) and 50MW/cm\\({}^{2}\\). Pump powers are 1KW peak (1mW average) and 2KW peak (2mW average). The M\\({}^{2}\\) parameter is a measure of the image quality at the focal points with an ideal value of unity. The image deteriorates (\\(M^{2}\\) increases) with increase in pump power due to preferential amplification of fundamental Stokes mode in comparison with higher order modes, and the presence of the phase-sensitive Raman four wave mixing. Figure 5: The electric-field amplitude profile of a test image (left) and the amplified image (right). These figures describe the cross sectional (X-Y) profile as opposed to Figures 2 and 3 which show the propagation (X-Z) along the waveguide length. The spatial frequency of the test image is (a) 50 lines per mm, (b) 100 lines per mm, and (c) 250 lines per mm. Pump power of 1KW peak (1 mW average) is launched. 
The Stokes image experiences \\(\\sim\\)10dB gain over a length of \\(\\sim\\)5cm (ie. the first focal length). ## 4 Distortion-free image amplification using a single, high-order pump mode excitation From the analysis of Raman amplification in a multimode waveguide with Gaussian-like transverse pump profile presented in sections 2 and 3 it is clear that there is a trade-off between the amount of image amplification and the distortion experienced by the image. However, there exists a special case in which distortion-free image amplification is possible. Since the pump is at a lower wavelength than the Stokes, the pump field supports more waveguide modes than the Stokes field. By propagating the pump only in a higher order mode which the Stokes image does not support, it is possible to eliminate the two sources of image distortion discussed in section 2. The RS-FWM terms vanish because the pump being in a single mode prevents the pump mode mixing effect (2\\({}^{nd}\\) term on RHS of Eqs. 5.1 and 5.2). Let \\(M\\) be the index for the excited pump mode and \\(m\\)\\(<\\)\\(M\\) be the Stokes mode index. With \\(k\\)=\\(M\\), the self-coupling coefficient simplifies to: \\[\\kappa_{mn-kl}=\\kappa_{m1-M1}=\\omega_{S}\\varepsilon_{o}\\mathcal{X}_{Raman}^{(3 )}\\int\\limits_{0}^{b}\\int\\limits_{0}^{a}\\left|\\phi_{P-M1}\\right|^{2}\\left|\\phi_ {S-m1}\\right|^{2}dxdy=\\omega_{S}\\varepsilon_{o}\\mathcal{X}_{Raman}^{(3)}. \\frac{3Z_{p}Z_{S}}{2ab} \\tag{6}\\] which is independent of \\(m\\), and hence is a constant for all Stokes modes. In this case the coupled mode equation for the Stokes mode simplifies to: \\[\\frac{dA_{S-m1}}{dz}=A_{S-m1}\\kappa_{m1-M1}\\left|A_{P-M1}\\right|^{2},\\,for\\, all\\,m\\textless M \\tag{7}\\] With a simple solution: \\[A_{S-m1}\\left(z\\right)=A_{S-m1}(0).\\exp\\left(K_{m1-M1}\\int\\limits_{0}^{z} \\left|A_{P-M1}\\left(z^{\\prime}\\right)\\right|^{2}dz^{\\prime}\\right) \\tag{8}\\] The exponential term represents the Raman gain experienced by the mode and is uniform for all the Stokes modes. Thus distortion free amplification is possible without trading off the gain. Selective mode excitation in multimode fibers is typically used for reducing the impact of modal dispersion in data communication applications [22]. Such techniques will prove powerful in utilizing the full potential of the silicon image amplifier. ## 5 Conclusions In this paper we have proposed and analyzed a novel Raman amplifier in a multimode silicon waveguide. This amplifier consists of collinearly propagating pump and Stokes beams which are periodically self-imaged along the waveguide length. This ensures that the Stokes beam gets amplified and also reconstructs its profile at the focal points. We have also analyzed an application of this amplifier as an image pre-amplifier for MWIR remote sensing applications. The use of a multimode silicon waveguide as the active medium, with mature processing technology, large field-of-view and excellent thermal, damage and transmission properties in the MWIR lends itself to diverse image amplification applications. We have performed coupled-mode analysis of multimode waveguides including the conventional Raman terms and the phase-sensitive Raman four-wave mixing terms. 
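As a side check of the distortion-free argument above, the short sketch below evaluates the overlap integral of Eq. (6) numerically for a single high-order pump mode and several lower-order Stokes modes, and compares it with the closed-form value 3 Z_p Z_s / (2ab). The mode shapes are those of Eq. (1); the pump index M = 15 and the unit impedances are arbitrary illustrative choices, not values taken from the paper.

```python
import numpy as np

a, b = 80e-6, 50e-6            # waveguide cross-section from Table 1 (metres)
Zp = Zs = 1.0                  # illustrative impedance factors
x = np.linspace(0.0, a, 2001)
y = np.linspace(0.0, b, 1201)
X, Y = np.meshgrid(x, y, indexing="ij")

def mode(m, Z):
    """phi_{m1} of Eq. (1): m-th order in x, lowest order in y."""
    return np.sqrt(4.0 * Z / (a * b)) * np.sin(m * np.pi * X / a) * np.sin(np.pi * Y / b)

M = 15                                         # single excited high-order pump mode
pump_sq = np.abs(mode(M, Zp)) ** 2
closed_form = 3.0 * Zp * Zs / (2.0 * a * b)

for m in (1, 2, 5, 10):                        # Stokes modes with m < M
    overlap = np.trapz(np.trapz(pump_sq * np.abs(mode(m, Zs)) ** 2, y, axis=1), x)
    print(f"m = {m:2d}: overlap = {overlap:.4e}   vs  3*Zp*Zs/(2ab) = {closed_form:.4e}")
```

Since the exponent in Eq. (8) depends on the Stokes index only through this constant coefficient, every Stokes mode sees the same gain, which is the distortion-free amplification argued for in this section.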
We find that there are two different contributions to image distortion: (i) the conventional Raman terms can lead to preferential amplification of the modes with the highest coupling coefficients at the expense of the other modes, and (ii) the RS-FWM terms introduce phase distortions of the Stokes modes and hence degrade the self-imaging property. Together these effects introduce a trade-off between the gain and the image quality. One possible solution that overcomes this trade-off is to restrict the pump to a single, high-order spatial mode. **Acknowledgements:** This work was supported by DARPA and Northrop Grumman Corporation.
We propose a new type of waveguide optical amplifier. The device consists of collinearly propagating pump and amplified Stokes beams with periodic imaging of the Stokes beam due to the Talbot effect. The application of this device as an image preamplifier for Mid-Wave Infrared (MWIR) remote sensing is discussed and its performance is described. Silicon is the preferred material for this application in the MWIR due to its excellent transmission properties, high thermal conductivity, high damage threshold and mature fabrication technology. In these devices, the Raman amplification process also includes four-wave mixing between various spatial modes of the pump and Stokes signals. This phenomenon is unique to nonlinear interactions in multimode waveguides and places a limit on the maximum achievable gain, beyond which the image begins to distort. Another source of image distortion is the preferential amplification of Stokes modes that have the highest overlap with the pump. These effects introduce a trade-off between the gain and image quality. We show that a possible solution to this trade-off is to restrict the pump to a single higher-order waveguide mode.
Provide a brief summary of the text.
arxiv-format/0611595v1.md
# Compact star constraints on the high-density EoS H. Grigorian 1Institut fur Physik, Universitat Rostock, 18051 Rostock, Germany 12Department of Physics, Yerevan State University, 375047 Yerevan, Armenia 23Laboratory for Information Technologies, JINR Dubna, 141980 Dubna, Russia 3 D. Blaschke 1Institut fur Physik, Universitat Rostock, 18051 Rostock, Germany 14Bogoliubov Laboratory for Theoretical Physics, JINR Dubna, 141980 Dubna, Russia 45Instytut Fizyki Teoretycznej, Uniwersytet Wroclawski, 50-204 Wroclaw, Poland 5 T. Klahn 1Institut fur Physik, Universitat Rostock, 18051 Rostock, Germany 16 Gesellschaft fur Schwerionenforschung mbH (GSI), 64291 Darmstadt, Germany 6 ## 1 Introduction Recently, new observational limits for the mass and the mass-radius relationship of CSs have been obtained which provide stringent constraints on the equation of state of strongly interacting matter at high densities, see Klahn et al. (2006) and references therein. In this latter work several modern nuclear EsoS have been tested regarding their compatibility with phenomenology. It turned out that none of these nuclear EsoS meets all constraints whereas every constraint could have been fulfilled by some EsoS. As we will point out in this contribution, a phase transition to quark matter in the interior of CSs might resolve this problem. In the following we will apply an exemplary EoS for NM obtained from the ab-initio relativistic Dirac-Brueckner-Hartree-Fock (DBHF) approach using the Bonn A potential (van Dalen et al. (2005)). There is not yet an ab-initio approach to the high-density EoS formulated in quark and gluon degrees of freedom, since it would require an essentially nonperturbative treatment of QCD at finite chemical potentials. For some promising steps in the direction of a unified QM-NM description on the quark level, we refer to the nonrelativistic potential model approach by Ropke et al. (1986) and the NJL model one by Lawley et al. (2006). Simulations of QCD on the Lattice meet serious problems in the low-temperature finite-density domain of the QCD phase diagram relevant for CS studies. However, there are modern effective approaches to high-density QM which, albeit still simplified, focus on specific nonperturbative aspects of QCD. They differ from the traditional bag model approach and allow for CS configurations with sufficiently large masses, see Alford et al. (2006). For our QM description we employ a three-flavor chiral quark model of the NJL type with selfconsistent mean fields in the scalar meson (coupling \\(G_{S}\\)) and scalar diquark (coupling \\(G_{D}=\\eta_{D}\\ G_{S}\\)) channels (Blaschke et al. (2005)), generalized by including a vector meson mean field (coupling \\(G_{V}=\\eta_{V}\\ G_{S}\\)), see Klahn et al. (2006a). We show that the presence of a QM core in the interior of CSs does not contradict any of the discussed constraints. Moreover, CSs with a QM interior would be assigned to the fast coolers in the CS temperature-age diagram. Another interesting outcome of our investigations is the prediction of a small latent heat for the deconfinement phase transition in both, symmetric and asymmetric NM. Such a behavior leads to hybrid stars that \"masquerade\" as neutron stars and has been discussed earlier by Alford et al. (2005) for a different EoS. This finding is of relevance for future heavy-ion collision programs at FAIR Darmstadt. ## 2 The flow constraint from HICs The behaviour of elliptic flow in heavy-ion collisions is related to the EoS of isospin symmetric matter. 
The upper and lower limits for the stiffness deduced from such analyses (Danielewicz et al. (2002)) are indicated in Fig. 1 as a shaded region. The nuclear DBHF EoS is soft at moderate densities with a compressibility \\(K=230\\) MeV (van Dalen et al. (2004), Gross-Boelting et al. (1999)), but tends to violate the flow constraint for densities above 2-3 times nuclear saturation. As a possible solution to this problem we adopt a phase transition to QM with an EoS fixed to sketch the upper boundary of the flow constraint. In order to obtain an EoS as stiff as possible we use a vector coupling of \\(\\eta_{V}=0.50\\) and a diquark coupling of \\(\\eta_{D}=1.03\\). Herewith the EoS is completely fixed. ## 3 Constraints from astrophysics ### Maximum mass and mass-radius constraints These most severe constraints come in particular from the mass measurement for PSR J0751+1807 (Nice et al. (2005)) giving a lower limit for the maximum mass \\(\\approx 1.9\\)\\(M_{\\odot}\\) at \\(1\\sigma\\) level, and from the thermal emission of RX J1856-3754 (Trumper et al. (2004)) providing a lower limit in the mass-radius plane with minimal radii \\(R>12\\) km. These constraints can only be fulfilled by a rather stiff EoS. The most stiff quark matter contribution to the EoS which still fulfills the flow constraint in symmetric matter corresponds to \\(\\eta_{V}=0.5\\) with a maximum mass for hybrid stars \\(\\approx 2.1\\)\\(M_{\\odot}\\), rather independent of the choice of \\(\\eta_{D}\\) which fixes the critical mass for the onset of deconfinement, see Figs. 2, 3. For a more detailed discussion, see Klahn et al. (2006), Klahn et al. (2006). ### Cooling constraints _Direct Urca (DU) processes_ are flavor-changing processes with the prototype being \\(n\\to p+e^{-}+\\bar{\ u}_{e}\\) (Gamow and Schoenberg (1941)), providing the most effective cooling mechanism in the hadronic layer of compact stars. It acts if the proton fraction \\(x\\) exceeds the DU threshold \\(x_{DU}\\), \\(x=n_{p}/(n_{n}+n_{p})\\geq x_{DU}\\). The threshold is given by \\(x_{DU}=0.11\\) (Lattimer et al. (1991)) and rises up to \\(x_{DU}=0.14\\) upon inclusion of muons. Although the onset of the DU process entails a sensible dependence of cooling curves on the star masses, hadronic cooling with realistic pairing gaps is not sufficient to explain young, nearby X-ray dim objects, like Vela, with typical CS masses, not exceeding 1.5 \\(M_{\\odot}\\) (Blaschke et al. (2004), Grigorian et al. (2005)). The point on the stability curve in Fig. 2 marks the DU threshold density for the DBHF EoS. Quark matter Figure 1: Constraint on the high-density behavior of the EoS from simulations of flow data from heavy-ion collision experiments (shaded area from Danielewicz et al. (2002)) compared to the nuclear matter and hybrid EsoS discussed in the text. Figure 3: Mass-radius relations for CSs with possible phase transition to deconfined quark matter, see Kläahn et al. (2006). Figure 2: Stable CS configurations for neutron stars (DBHF) and hybrid stars, characterized by the parameters \\(\\eta_{D}\\) and \\(\\eta_{V}\\) of the quark matter EoS. DU processes provide enhanced cooling, characterized by the diquark pairing gaps (Blaschke et al. (2000), Page et al. (2000)) and their density dependence (Grigorian et al. (2005), Popov et al. (2006a)). For a recent review, see Sedrakian (2007). 
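Mass-radius curves of the kind tested against these constraints follow from integrating the Tolman-Oppenheimer-Volkoff (TOV) equations for a chosen EoS. The short sketch below does this in geometrized units for a schematic polytrope P = K eps^Gamma; the values of K and Gamma are ad hoc stand-ins, not the DBHF or NJL parameters of the paper, so only the shape of the M(R) relation and the existence of a maximum mass should be read from it. The last line repeats the simple momentum-conservation estimate of the direct Urca threshold quoted above.

```python
import numpy as np

MSUN_KM = 1.4766                         # solar mass expressed in km (G = c = 1)

def tov_mass_radius(eps_c, K=120.0, Gamma=2.0, dr=1e-3):
    """Integrate the TOV equations outward for P = K*eps**Gamma (geometrized units, km)."""
    eps_of_P = lambda P: (P / K) ** (1.0 / Gamma)
    r, m, P = dr, 0.0, K * eps_c ** Gamma
    while P > 1e-12 * K * eps_c ** Gamma:
        e = eps_of_P(P)
        dm = 4.0 * np.pi * r ** 2 * e
        dP = -(e + P) * (m + 4.0 * np.pi * r ** 3 * P) / (r * (r - 2.0 * m))
        m, P, r = m + dm * dr, P + dP * dr, r + dr
    return r, m / MSUN_KM                # circumferential radius [km], gravitational mass [Msun]

# Sweep central energy densities to trace out an M(R) curve and locate its maximum mass.
for eps_c in np.geomspace(2e-4, 3e-3, 8):          # km^-2, roughly nuclear density and above
    R, M = tov_mass_radius(eps_c)
    print(f"eps_c = {eps_c:.2e} km^-2  ->  R = {R:5.1f} km,  M = {M:4.2f} Msun")

# Direct Urca threshold without muons: p_Fn <= p_Fp + p_Fe with n_e = n_p implies
# n_n = 8 n_p, i.e. x_DU = n_p / (n_n + n_p) = 1/9, consistent with the 0.11 quoted above.
print("x_DU =", 1.0 / 9.0)
```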
To verify this rather heuristic approach we apply explicit calculations of the cooling of hybrid configurations which shall describe present data of the _temperature-age_ distribution of CSs. The main processes in nuclear matter that we accounted for are the direct Urca, the medium modified Urca and the pair breaking and formation processes. Furthermore we accounted for the \\(1S_{0}\\) neutron and proton gaps and the suppression of the \\(3P_{2}\\) neutron gap. For the calculation of the cooling of the quark core we incorporated the most efficient processes, namely the quark modified Urca process, the quark bremsstrahlung, the electron bremsstrahlung and the massive gluon-photon decay. In the 2-flavor superconducting phase one color of quarks remains unpaired. Here we assume a small residual pairing (\\(\\Delta_{X}\\)) of the hitherto unpaired quarks. For detailed discussions of cooling calculations and the required ingredients see Blaschke et al. (2004), Popov et al. (2006a), 1 and references therein. The resulting temperature-age relations for the introduced hybrid EoS are shown in Fig. 4. The critical density for the transition from nuclear to quark matter has been set to a corresponding CS mass of \\(M_{\\rm crit}=1.22~{}M_{\\odot}\\). All cooling data points are covered and correspond to CS configurations with reasonable masses. In this picture slow coolers correspond to light, pure neutron stars (\\(M<M_{\\rm crit}\\)), whereas fast coolers are rather massive CSs (\\(M>M_{\\rm crit}\\)) with a QM core. Another constraint on the temperature-age relation is given by the _maximum brightness_ of CSs, as discussed by Grigorian (2006). It is based on the fact that despite many observational efforts one has not observed very hot NSs (\\(\\rm logT>6.3-6.4~{}K\\)) with ages of \\(10^{3}\\) - \\(10^{4.5}\\) years. Since it would be very easy to find them - if they exist in the galaxy - one has to conclude that at least their fraction is very small. Therefore a realistic model should not predict CSs with typical masses at temperatures noticeable higher than the observed ones. The region of avoidance is the hatched trapezoidal region in Fig. 4. The final CS cooling constraint in our scheme is given by the _Log N-Log S_ distribution, where \\(N\\) is the number of sources with observed fluxes larger than \\(S\\). This integral distribution grows towards lower fluxes and is inferred, e.g., from the ROSAT all-sky survey (Neuhauser and Trumper (1999)). The observed _Log N-Log S_ distribution is compared with the ones calculated in the framework of a population synthesis approach in Fig. 5. A detailed discussion of merits and drawbacks can be found in Popov et al. (2006). Altogether, the hybrid star cooling behavior obtained for our EoS fits all of the sketched constraints under the assumption of the existence of a 2SC phase with X-gaps. ## 4 Outlook: The QCD phase-diagram Within the previous sections we exemplified how to apply the testing scheme introduced in Klahn et al. (2006) to the modeling of a reliable hybrid EoS with a NM-QM phase transition that fulfills a wide range of constraints from HICs and astrophysics. In a next step we extend the description to finite temperatures focusing on the behaviour at the transition line. For this purpose we apply a relativistic mean-field model with density-dependent masses and couplings (Typel (2005)) adapted such as to Figure 4: Cooling evolution for hybrid stars of different masses given in units of \\(M_{\\odot}\\). 
Note that Vela is described with a typical CS mass not exceeding 1.45 \\(M_{\\odot}\\). From Popov et al. (2006a). Figure 5: Comparison of observational data for the LogN-LogS distribution with results from population synthesis using hybrid star cooling according to Popov et al. (2006a). mimick the DBHF-EoS and generalize to finite temperatures (DD-F4). Fig. 6 shows the resulting phase diagram including the transition from nuclear to quark matter (\\(\\eta_{D}=1.030,\\eta_{V}=0.50\\)) which exhibits almost a crossover transition with a negligibly small coexistence region and a tiny density jump. At temperatures beyond \\(T\\sim 45\\) MeV our NM description is not reliable any more since contributions from mesons, hyperons and nuclear resonances are missing. This will be amended in future studies. ## 5 Conclusions We have presented a new scheme for testing nuclear matter equations of state at supernuclear densities using constraints from neutron star and HIC phenomenology. Modern constraints from the mass and mass-radius-relation measurements require stiff EsoS at high densities, whereas flow data from heavy-ion collisions seem to disfavor too stiff behavior of the EoS. As a compromise we have presented a hybrid EoS with a phase transition to color superconducting quark matter which, due to a vector meson meanfield, is stiff enough at high densities to allow compact stars with a mass of 2 \\(M_{\\odot}\\). Such a hybrid EoS could be superior to a purely hadronic one as it allows a faster cooling of objects within the typical CS mass region. This way, young nearby X-ray dim objects such as Vela could be explained with masses not exceeding 1.5 \\(M_{\\odot}\\). The present hybrid EoS predicts hybrid stars that \"masquerade\" as neutron stars, suggesting only a tiny density jump at the phase transition. This characteristics is also present for the symmetric matter case and persists at higher temperatures in the QCD phase diagram. It is suggested that the CBM experiment at FAIR might softly enter the quark matter domain without extraordinary hydrodynamical effects from the deconfinement transition. ###### Acknowledgements. We thank all our collaborators who have contributed to these results, in particular D. Aguilera, J. Berdermann, C. Fuchs, S. Popov, F. Sandin, S. Typel, and D.N. Voskresensky. The work is supported by DFG under grant 436 ARM 17/4/05 and by the Virtual Institute VH-VI-041 of the Helmholtz Association. We also gratefully acknowledge the support by J. E. Trumper and the organizers of the 363\\({}^{\\rm rd}\\) Heraeus seminar on \"Neutron Stars and Pulsars\". ## References * (1) Alford M., Braby M., Paris M. W., and Reddy S., 2005, ApJ 629, 969 * (2) Alford M., Blaschke D., Drago A., Klahn T., Pagliara G., and Schaffner-Bielich J., 2006, arXiv:astro-ph/0606524. * (3) Blaschke D., Klahn T. and Voskresensky D. N., 2000, ApJ 533, 406, * (4) Blaschke D., Grigorian H., and Voskresensky D.N., 2004, A&A 424, 979 * (5) Blaschke D., Fredriksson S., Grigorian H., Oztas A.M. and Sandin F., 2005, PRD 72, 065020 * (6) Blaschke D., 2006, PoS JHW2005, 003 * (7) Danielewicz P., Lacey R., and Lynch W. G., 2002, Science 298, 1592 * (8) Gamow G., and Schoenberg M., 1941, Phys. Rev 59, 539 * (9) Grigorian H., Blaschke D., and Aguilera D.N., 2004, PRC 69, 065802 * (10) Grigorian H., Blaschke D., and Voskresensky D.N., 2005, PRC 71, 045801 * (11) Grigorian H., 2006, PRC 74, 025801 * (12) Grigorian H., 2006a, Phys. Part. Nucl. Lett. 3, in press; arXiv:hep-ph/0602238. 
* (13) Gross-Boelting T., Fuchs C., and Faessler A., 1999, Nucl. Phys. A 648, 105 * (14) Klahn T. et al., 2006, PRC 74, 035802 * (15) Klahn T. et al., 2006a, arXiv:nucl-th/0609067 * (16) Lattimer J. M. et al., 1991, Phys. Rev. Lett. 66, 2701 * (17) Lawley S., Bentz W., and Thomas A. W., 2006, J. Phys. G 32, 667 * (18) Neuhauser R., and Trumper J., 1999, A&A 343, 151 * (19) Nice D.J., et al., 2005, AJ 634, 1242 * (20) Popov S., Grigorian H., Turolla R., and Blaschke D., 2006, A&A 448, 327 * (21) Popov S., Grigorian H. and Blaschke D., 2006a, PRC 74, 025803 * (22) Page D., Prakash M., Lattimer J. M., and Steiner A., PRL 85, 2048 * (23) Ropke G., Blaschke D., and Schulz H., 1986, PRD 34, 3499. * (24) Sedrakian A., 2007, Prog. Part. Nucl. Phys. 58, 168 * (25) Trumper J.E., Burwitz V., Haberl F., and Zavlin V.E., 2004, Nucl. Phys. Proc. Suppl. 132, 560 * (26) Typel S., 2005, PRC 71, 064301 * (27) van Dalen E.N.E., Fuchs C., and Faessler A., 2004, Nucl. Phys. A 744, 227; 2005, PRC 72, 065803 * (28) van Dalen E.N.E., Fuchs C., and Faessler A., 2005, PRL 95, 022302 Figure 6: Phase diagram for isospin symmetry using the most favorable hybrid EoS of the present study. The NM-2SC phase transition is almost a crossover. The model DD-F4 is used as a finite-temperature extension of DBHF. For the parameter set (\\(\\eta_{D}=0.75\\), \\(\\eta_{V}=0.0\\)) the flow constraint is fulfilled but no stable hybrid stars are obtained.
A new scheme for testing the nuclear matter (NM) equation of state (EoS) at high densities using constraints from compact star (CS) phenomenology is applied to neutron stars with a core of deconfined quark matter (QM). An acceptable EoS shall not be in conflict with the mass measurement of 2.1 \\(\\pm\\) 0.2 M\\({}_{\\odot}\\) (1 \\(\\sigma\\) level) for PSR J0751+1807 and the mass-radius relation deduced from the thermal emission of RX J1856-3754. Further constraints for the state of matter in CS interiors come from temperature-age data for young, nearby objects. The CS cooling theory shall agree not only with these data, but also with the mass distribution inferred via population synthesis models as well as with LogN-LogS data. The scheme is applied to a set of hybrid EoS with a phase transition to stiff, color superconducting QM which fulfills all above constraints and is constrained otherwise from NM saturation properties and flow data of heavy-ion collisions. We extrapolate our description to low temperatures and draw conclusions for the QCD phase diagram to be explored in heavy-ion collision experiments.
Write a summary of the passage below.
arxiv-format/0611885v1.md
# Gaia Data Processing Architecture W. O'Mullane, U. Lammers C. Bailer-Jones U. Bastian A.G.A. Brown R. Drimmel L. Eyer C. Huc D. Katz L. Lindegren D. Pourbaix X. Luri, J. Torra F. Mignard F. van Leeuwen ## 1 Introduction This paper is sub-divided in four sections: We give a brief overview of the Gaia satellite and introduce the Data Processing and Analysis Consortium (DPAC). Following on from this we describe the overall system architecture for Gaia processing and finally take a more detailed look at the core processing. ## 2 The Gaia Satellite and Science The Gaia payload consists of three distinct instruments for astrometric, photometric and spectroscopic measurements, mounted on a single optical bench. Unlike HST and SIM, which are pointing missions observing a preselected list of objects, Gaia is a scanning satellite that will repeatedly survey in a systematic way the whole sky. The main performances of Gaia expressed with just a few numbers are just staggering and account for the vast scientific harvest awaited from the mission: a complete survey to 20th magnitude of all point sources amounting to more than one thousand million objects, with an astrometric accuracy of 12-25 \\(\\mu\\)as at 15th magnitude and 7 \\(\\mu\\)as for the few million stars brighter than 13th magnitude; radial velocities down to 17th magnitude, with an accuracy ranging from 1 to 15 km s\\({}^{-1}\\); multi-epoch spectrophotometry for every observed source sampling from the visible to the near IR. Beyond its sheer measurement accuracy, the major strength of Gaia follows from (i) its capability to perform an all-sky and sensitivity limited absolute astrometry survey at sub-arcsecond angular resolution, (ii) the unique combination into a single spacecraft of the three major instruments carrying out nearly contemporaneous observations, (iii) the huge number of objects and observations allowing to amplify the accuracy on single objects to large samples with deep statistical significance, a feature immensely valuable for astrophysics and unique to Gaia. ## 3 The Data Processing and Analysis Consortium (DPAC) The DPAC has been formed to answer the Announcement of Opportunity (AO) for the Gaia Processing. The DPAC is formed around a series of \"Coordination Units\" (CU), themselves sub-divided into \"development units\" (DU). The CUs are supported by a set of Data Processing Centres (DPC). The overall coordination is performed by the consortium executive (DPACE). The structure is shown in Fig. 1 and described in more detail below. Consider that there are over 270 scientists and developers currently registered in DPAC who will contribute to the scientific processing on Gaia. ### Coordination Units (CU) The CUs are small in number, with clearly-defined responsibilities and interfaces, and their boundaries fit naturally with the main relationships between tasks and the associated data flow. There will be several areas of involvement across these boundaries, but in first instance it is up to the coordination units to ensure that a group of tasks Figure 1: DPAC Organigram showing DPACE, Coordination Units and Data processing Centres is prepared and optimised, as well as fully tested and documented, as required by the project. The coordination units will have a reasonable amount of autonomy in their internal organisation and in developing what they consider as the best solution for their task. 
However they are constrained by the fact that any such solution has to meet the requirements and time schedules determined by the Consortium Executive for the overall data processing. In this respect the data exchange protocol and the adherence to the data processing development cycles are mandatory to ensure that every group can access the data it needs in the right format and at the right moment. While the coordination units are intended to reflect the top level structure of the data processing, with completely well-defined responsibilities and commitments to the DPAC, they could for practical reasons be sub-divided into more manageable components, called development units (DU). This is a more operational level with a lighter management which will take the responsibility for the development of a specific part of the software with well defined boundaries. Not every CU will organise its DUs (if any) in the same way, and how they interact with the CU level is left to the CU management. Responsibilities of the coordination units include: (a) establishing development priorities; (b) procuring, optimizing and testing algorithms; (c) defining and supervising the development units linked to them. Each coordination unit is headed by a scientific manager and one or two deputy managers. The CUs will also comprise software engineers. ### Data Processing Centres (DPC) The development activities of each CU are closely associated with at least one DPC (Data Processing Centre) where the computer hardware is available to carry out the actual processing of the data. A technical manager from this centre belongs to the upper management structure of every CU. The software development and the preparation and testing of its implementation in a DPC are parallel activities within every CU and their mutual adequacy must be closely monitored by the CU manager with his DPC representative. Advancement reports are regularly presented to the DPAC executive. ### DPAC Executive (DPACE) Our overall organisation gives the CUs much autonomy in the way they handle their part of the data processing, and the internal organisation and management structures do not need to be uniform across CUs. However there is a single goal shared by all the CUs, and they must follow a common schedule and adhere strictly to many interfaces so that the results produced by one group are available in a timely manner and may be used efficiently by other groups. A variety of standards and conventions, the content and structure of the MDB (Main Database) and the processing cycles must be agreed collectively. Therefore in addition to a local management of each CU, the overall DPAC is coordinated and managed by an Executive Committee, called DPACE for \"Data Processing and Analysis Consortium Executive\". This overall management structure of the Consortium deals with all the matters which are not specific to the internal management of a CU and is meant to make an efficient interaction between the CUs possible. The DPACE responsibilities are primarily coordination tasks although it will make important decisions to be implemented by all CUs which are akin to real management. ## 4 Gaia Data Processing Architecture ### Approach Any large system is normally broken down into logical components to allow distributed development. Gaia data processing is on a very large and highly distributed scale. 
The approach taken to the decomposition has been to identify major parts of the system which may operate relatively independently, although practically all parts of the Gaia processing are in fact interdependent from the point of view of the data. From a development point of view however, a well defined ICD (Interface Control Document) would allow completely decoupled components to be developed and even operated in disparate locations. The approach is driven by the fact that this is a large system which will be developed in many countries and by teams of varying competence. Hence at this level of decomposition libraries or infrastructure are not considered to be components. At some lower level these components may indeed share libraries and infrastructure but this is not a cornerstone for the architecture. Only the top level components and their interaction are considered in this decomposition. ### Logical Components Figure 2 shows the logical components of the system and the data flow between them. * Mission Control System (MCS)1 Footnote 1: The MCS and DDS are responsibilities of the Mission Operations Centre (MOC), not part of the DPAC and are included here for completeness. * Data Distribution System (DDS) * Initial Data Treatment and First Look (IDT/FL) * Simulation (SIM) * Intermediate Data Update (IDU) * Astrometric Global Iterative Solution (AGIS) * Astrometric Verification Unit (AVU) * Object Processing (OBJ) * Photometric Processing (PHOT) * Spectroscopic Processing (SPEC) * Variability Processing (VARI) * Astrometric Parameters (ASTP) * Main Database (MDB) * Archive ### Data Flow Gaia processing is all about data. The data flow is the most important description of the system and has been under discussion within the community for some time. The result of these discussions is the data flow scheme depicted in Fig. 2. The flow lines in Fig. 2 are labeled and these labeled are referred to in the text below. The data flow is divided into two categories, Near-Realtime and Scientific Processing. Near-Realtime dataflowThe Near-realtime data flow represents the data flow on a time scale of approximately 1 or 2 days, corresponding to the activities of the Mission Operations Ground Segment. The Mission Operations Centre (MOC) at ESOC receives all telemetry from the Space Segment [1.1] via the ground stations. The Science Operations Centre (SOC) at ESAC will receive all telemetry directly from the ground station also [1.2]. The raw data flow from the satellite is not shown explicitly in the diagram. Over the nominal mission duration of five years the payload will yield a total uncompressed data volume of roughly 100 TB. The satellite will have contact with the ground station once a day for a mean duration of 11 hours. During this period, or \"pass\", an uncompressed data volume of roughly 50 GB is downlinked from the satellite via its medium-gain antenna, at a mean rate of about 5 Mbit/s. Mission Control SystemThe raw telemetry data received by the ground station will be transmitted to the Mission Control System (MCS) at the MOC Figure 2: Top Level Components and Data Flow for the DPACand to the SOC. Housekeeping data will be transmitted to the MCS within one hour after reception at the ground station. The MCS will provide an immediate assessment on the spacecraft and instrument status through analysis of the housekeeping data. Data Distribution System.All telemetry received by the MOC systems will be ingested into the Data Distribution System (DDS) [2]. 
The DDS will also contain data that were generated on-ground (e.g. orbit data, time correlation data), operations reports (telecommand history and timeline status), Satellite Databases used by the MCS and a copy of all telecommands sent to the spacecraft. Initial Data Treatment and First Look.Science Telemetry is received by the SOC [1.2] for processing by the IDT. Data is also retrieved from the DDS by the MOC Interface Task at the SOC and passed to the IDT [3.1]. The IDT processing will decode and decompress the telemetry. It will also extract higher-level image parameters and provide an initial cross matching of observations to known sources (or else to new ones created in this step). Finally it will provide an initial satellite attitude at sub-arcsec precision. The primary objective of the First Look (FL) is to ensure the scientific health of Gaia. This information is returned to the MCS [U1]. First Look processing will carry out a restricted astrometric solution on a dataset from a small number of great-circle scans. To perform some of its tasks IDT/FL requires reference data, such as up-to-date calibration data as well as positions, magnitudes etc. of bright objects that are expected to be observed by Gaia during the time period to be processed. This data will be made available to IDT/FL [3.2] from the MDB. FL will also calibrate the current data set itself and this calibration will be used by IDT. Uplink.The telemetry is received by the MCS which does basic system monitoring. The First Look Diagnostics produced by FL [U1] will indicate if there are anomalies in the scientific output of the satellite which can be corrected on-board. After interpretation of the diagnostics, the Flight Control Team is informed of the anomaly, which can be resolved either through immediate commanding or during the next mission planning cycle. On a regular basis the MCS will send the prepared command schedule to Gaia [U2], taking into account normal planning and inputs from IDT/FL. During a ground station pass, immediate commanding is also possible. Daily transfers and Raw Database.The outputs of IDT/FL are made available to all tasks on a daily basis [4,5] and ingested into the Main Database [5.1]. The Raw Database will be a repository for all raw data [5.2]. Copies of the Raw Database are expected at ESAC, BSC and CNES. Other tasks may retrieve the data according to their requirements [5.3]. Raw data will only be transmitted on a daily basis i.e., it does not form part of the Main Database and is not foreseen to be sent again later. Data Processing Centres may produce'science alerts' from the Gaia observations. Science alerts are sent to the SOC for immediate distribution to the scientific community and for archiving in the Main Database [7]. ### Scientific Processing Scientific Processing represents the production of the Gaia data products by the Data Processing Ground Segment from Intermediate Data. The timescale for each iteration of this process is much longer than that of the near-realtime processing, of the order of six months or more. It will continue after routine satellite operations have finished and will culminate in the production of the Gaia Catalogue. The outputs of processing from each CU will be sent for incorporation in the Main Database [7]. The Main Database is the hub of all data in the Gaia Data processing system. Our plan is to version this database at regular intervals, probably every six months. The science processing is in general iterative. 
Hence each version of the Main Database is derived from the data in the previous version. By fixing the versions of the entire dataset at some point in time we avoid tracking individual object versions for the billions of objects in the database. ## 5 Gaia core astrometric processing As described above the core processing involves IDT, FL, IDU and AGIS. In this section we will look at the Astrometric Global Iterative Solution (AGIS) in a little more detail. The astrometric core solution is the cornerstone of the data processing since it provides calibration data and attitude solution needed for all the other treatments, in addition to the astrometric solution of \\(\\sim 100\\) million _primary sources_. The main equations to be solved can be summarized by relating the observed position on a detector to a general astrometric and instrument model as, \\[O=S+A+G+C+\\epsilon \\tag{1}\\] where * \\(O\\) is the observed one-dimensional location of the source at the instant determined by the centroiding algorithm applied to the observed photo-electron counts. * \\(S\\) is the astrometric model which for the primary stars should only comprise the five astrometric parameters (\\(\\alpha_{0},\\delta_{0},\\pi,\\mu_{\\alpha},\\mu_{\\delta}\\)). * \\(A\\) represents the parameters used to model the attitude over a given interval of time. They are, for example, the cubic spline coefficients of the quaternion describing the orientation of the instrument with respect to the celestial reference frame as function of time. * \\(G\\) represents the global parameters such as the PPN parameters or other relevant parameters needed to fix the reference frame of the observations. * \\(C\\) comprises all the parameters needed for the instrument modelling: geometric calibration parameters (both intra- and inter-CCDs), basic angle, chromaticity effect and other instrumental offsets. * \\(\\epsilon\\) is the Gaussian white noise which can be estimated from the photon counts and centroiding for every observation and used to weigh the equations. A test is performed at the end to validate the assumption on the noise. Assuming some \\(10^{8}\\) primary stars, the total number of unknowns for the astrometric core solution includes some \\(5\\times 10^{8}\\) astrometric parameters, \\(\\sim 10^{8}\\)attitude parameters, and a few million calibration parameters. The condition equations connecting the unknowns to the observed data are intrinsically non-linear, although they generally linearise well at the sub-arcsec level. Direct solution of the corresponding least-squares problem is unfeasible, by many orders of magnitude, simply in view of the large number of unknowns and their strong inter-connectivity, which prevents any useful decomposition of the problem into manageable parts. The proposed method is based on the _Global Iterative Solution_ scheme (ESA 1997, Vol. 3, Ch. 23), which in the current context is referred to as the _Astrometric GIS_ (AGIS) since related methods are adopted for the photometric and spectroscopic processing. It is necessary to have reasonable starting values for all the unknowns, so as to be close to the linear regime of the condition equations. These are generally provided by the Initial Data Treatment. The idea of AGIS is then quite simple, and consists of the following steps: 1. Assuming that the attitude and calibration parameters are known, the astrometric parameters can be estimated for all the stars. 
This can be done for one star at a time, thus comprising a least-squares problem with only 5 unknowns and of order 1000 observations. Moreover, this part of the solution is extremely well suited for distributed processing. 2. Next, assuming that the astrometric parameters and the calibration parameters are known, it is possible to use the same observations to estimate the attitude. This can be done for each uninterrupted observation interval at a time. For a typical interval of one week, the number of unknowns is about \\(500\\,000\\) and the number of observations \\(\\sim 2\\times 10^{7}\\). The number of unknowns may seem rather large for a least-squares problem, but the band-diagonal structure of the normal equations resulting from the spline fitting makes the memory consumption and computing time a linear function of the number of unknowns, rather than the cubic scaling for general least-squares solutions. The problem is thus easily manageable. 3. Assuming then that the astrometric and attitude parameters are known, the calibration parameters can be estimated from the residuals in transit time and across-scan field angles. 4. It is necessary to iterate the sequence of steps 1, 2, 3 as many times as it takes to reach convergence. Once the linear regime has been reached, the convergence should be geometric, i.e., the errors (and updates) should decrease roughly by a constant factor in each cycle. Based on simple considerations of redundancy and the geometry of observations, a convergence factor of 0.2-0.4 is expected. If a geometric behaviour is indeed observed, it may be possible to accelerate the convergence by over-relaxation. The iteration must be driven to a point where the updates are much smaller than the accuracy aimed at in the resulting data. 5. After convergence, the astrometric and attitude parameters refer to an internally consistent celestial reference frame, but this does not necessarily, and in general will not, coincide with the International Celestial Reference System (ICRS). A subset of the primary stars and quasars, with known positions/proper motions in the ICRS, is therefore analyzed to derive the nine parameters describing a uniform rotation between the two systems, plus the apparent streaming motion of quasars due to the cosmological acceleration of the solar-system barycentre. The astrometric and attitude parameters are then transformed into the ICRS by application of a uniform rotation. It is envisaged that the whole sequence 1-5 is repeated several times during the processing, initially perhaps every 6 months during the accumulation of more observations. These repeats are called outer AGIS iterations. Optionally, the iteration loop 1-3 may also include an estimation of global parameters. ### AGIS Implementation To facilitate the execution of the AGIS algorithms a data driven approach has been followed. The notion is that any data should only be read once and passed to algorithms rather than each algorithm accessing data directly. A process, termed the 'Data Train', sits between the data access layer and the algorithm requiring data. It accesses the data and invokes the algorithm thus providing an absolute buffer between scientific code and data access. The 'Data Train' is a manifestation of the 'intermediary pattern' [Gamma 1994] and has the advantage that different implementations of varying complexity may be provided. Hence a scientist may run a simplified AGIS on a laptop for testing which does not require the full hardware of the data processing centre. 
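To make the block-iterative idea of steps 1-4 concrete in the kind of simplified, laptop-scale setting mentioned above, the toy sketch below alternates between per-source and attitude-like least-squares updates on a synthetic linear problem. The data model (one observation = source offset + attitude-bin offset + noise) is deliberately trivial and is not the real Gaia measurement equation; it only demonstrates the geometric convergence expected of the scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, n_att, n_obs = 200, 20, 8000

true_s = rng.normal(size=n_src)        # "astrometric" parameter, one per source for brevity
true_a = rng.normal(size=n_att)        # "attitude" parameter, one per time bin
src = rng.integers(0, n_src, n_obs)    # source observed in each observation
att = rng.integers(0, n_att, n_obs)    # attitude bin of each observation
obs = true_s[src] + true_a[att] + 0.01 * rng.normal(size=n_obs)

s = np.zeros(n_src)                    # crude starting values (the role played by IDT)
a = np.zeros(n_att)

for it in range(12):
    # Step 1: update every source separately, attitude held fixed.
    resid = obs - a[att]
    s = np.bincount(src, weights=resid, minlength=n_src) / np.bincount(src, minlength=n_src)
    # Step 2: update the attitude, sources held fixed (step 3, calibration, would slot in likewise).
    resid = obs - s[src]
    a = np.bincount(att, weights=resid, minlength=n_att) / np.bincount(att, minlength=n_att)
    rms = np.sqrt(np.mean((obs - s[src] - a[att]) ** 2))
    print(f"iteration {it:2d}: post-fit RMS = {rms:.5f}")
```

The updates shrink geometrically until the residuals settle at the noise floor. The toy problem also shares the global degeneracy addressed in step 5: a common offset can be traded between sources and attitude, so a zero point (the analogue of the frame rotation to the ICRS) has to be fixed separately.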
ESAC currently hosts AGIS on a sixteen node cluster of dual processor Dell Xeon machines. A 6 TB Storage Area Network (SAN) is used to host the Oracle database containing the data. The system is entirely written in Java and runs on the 64-bit Sun JDK1.5. The current implementation is executing with simulation data and reaches convergence within 27 outer iterations for very noisy input data. The simulated dataset is of 1.1 million sources with five years of observation amounting to about \\(10^{8}\\) observations. Convergence is declared when the median added to the width of the parallax update histogram is below 1 \\(\\mu\\)as. See also [Hernandez 2007]. ## 6 Conclusion Gaia is an ambitious space mission where the instrument and data processing are intimately related. An overall distributed data processing architecture has been outlined. A distributed management structure is in place to ensure the processing software is built. Rapid development of key software modules is underway, for example the core astrometric solution has been presented in this paper. DPAC has made an excellent start but a difficult road lies ahead to achieve the demanding accuracies required by the Gaia mission. ## References * [] Hernandez, J. et al. 2007, this volume [P1.05] * [] ESA and FAST and NDAC and TDAC and INCA and Matra Marconi Space and Alenia Spazio 1997 The Hipparcos and Tycho Catalogues, SP-1200 * [] Gamma, E. and Helm, R. and Johnson, R. and Vlissides, J. 1994, Addison-Wesley, Design Patterns: Elements of Reusable Object-Oriented Software
Gaia is ESA's ambitious space astrometry mission with the main objective of astrometrically and spectro-photometrically mapping not less than 1000 million celestial objects in our galaxy with unprecedented accuracy. The announcement of opportunity (AO) for the data processing will be issued by ESA late in 2006. The Gaia Data Processing and Analysis Consortium (DPAC) has been formed recently and is preparing an answer to this AO. The satellite will downlink around 100 TB of raw telemetry data over a mission duration of 5-6 years. To achieve its required accuracy of a few tens of microarcseconds in astrometry, a highly involved processing of this data is required. In addition to the main astrometric instrument, Gaia will host a radial-velocity spectrometer and two low-resolution dispersers for multi-colour photometry. All instrument modules share a common focal plane made of a CCD mosaic about 1 square meter in size and featuring close to one billion pixels. Each of the instruments requires relatively complex processing, and the processing chains are mutually interdependent. We describe the composition and structure of the DPAC and the envisaged overall architecture of the system. We shall delve further into the core processing - one of the nine so-called coordination units comprising the Gaia processing system.
Condense the content of the following passage.
arxiv-format/0612107v3.md
# Phase Structures of Compact Stars in the Modified Quark-Meson Coupling Model Chang-Qun Ma and Chun-Yuan Gao School of Physics, Peking University, Beijing 100871, China Electronic address: [email protected] # _PACS:_ 26.60.+c, 21.65.+f, 12.39.Ba, 13.75.JzNeutron stars are some of the densest objects in the universe since their masses are of the order of 1.5 solar masses while their radii are only of \\(\\sim 12\\) km[1]. Therefore, the density in the inner core of a neutron star could be as large as several times nuclear saturation density(\\(\\cong 0.17\\) fm\\({}^{-3}\\)) and the appearance of new phases other than normal nuclear matter is possible. Kaplan and Nelson proposed the possibility of kaon condensation by a chiral theory[2]. While Bodmer[3] and Witten[4] suggested that the strange quark matter phase, which was discussed by Itoh in 1970[5], might provide the absolutely stable form of the dense matter. Following their work, many authors have devoted to studying the kaon condensation and/or quark matter phase in neutron stars[6]. In this letter, neutron star matter will be investigated with novel EOSes and the possibilities of appearances of exotic phases in neutron stars are going to be discussed. Predictions by models with quark effects would be preferred to those by models with only hadron degrees of freedom because neutron star matter is extraordinarily dense. While at the moment QCD is not realized in investigating neutron stars because of its nonperterbative features, it is worthwhile studying the neutron star with effective quark models. In 1988, Guichon proposed a novel model[7], the quark-meson coupling (QMC) model, where the 'quark effect' was incorporated. The model and its modified versions give satisfactory description for saturation properties of nuclear matter[8] and reproduce the bulk properties of finite nuclei well[9]. Recently, Panda, Menezes and Providencia discussed the kaon condensation[10] and deconfinement phenomena[11] in neutron star within the QMC model, where the bag constant was fixed at its free-space value and the strange quark was unaffected in the medium and set to its constant bare mass value. But the QMC model predicts much smaller scalar and vector potentials for the nucleon than that obtained in the well established quantum hadrodynamics model. Jin and Jennings modified the QMC model by introducing a density-dependent bag constant so that large scalar and vector potentials are obtained without affecting its abilities in other aspects[12]. It is imaginable that the \\(s\\) quark mass should also be modified at the supernuclear density in the core of neutron stars. So an additional pair of hidden strange meson fields (\\(\\sigma^{*},\\ \\phi\\)), which had been proved that they can account for the strongly attractive \\(\\Lambda\\Lambda\\) interaction observed in hypernuclei that cannot be reproduced by (\\(\\sigma,\\ \\omega,\\ \\rho\\)) mesons only[13], are included in the modified quark-meson coupling model (MQMC)[14]. (\\(\\sigma^{*},\\ \\phi\\)) couple only to the \\(s\\) quark in the MQMC model and only to the hyperons in the QHD model. The improved MQMC model has been used to study kaon production in hot and dense hypernuclear matter[15]. In the present work, we shall extend the MQMC to investigate both the K\\({}^{-}\\) condensation and quark deconfinement phase transitions in compact stars at zero temperature. All of the three most possible phases, i.e. 
the hadronic phase (HP) with strangeness-rich hyperons, the condensation for negative charged kaon and the quark matter phase are considered. Both baryons and kaon meson are described by static spherical MIT bags. Quarks are taken as explicit degrees of freedom, and are coupled to the meson fields. The nonstrange (\\(u\\) and \\(d\\)) quarks in the baryons and kaons are coupled to the well known \\(\\sigma\\), \\(\\omega\\) and \\(\\rho\\) meson fields while the strange quark in the baryons and kaons is coupled to \\(\\sigma^{*}\\) and \\(\\phi\\) only, because the former three pieces are built out of \\(u\\)-, \\(d\\)-quarks or their anti-counterparts and the later two are composed of strange quarks. Let the mean fields be denoted by \\(\\sigma\\), \\(\\sigma^{*}\\) for the scalar meson fields, and \\(\\omega_{0}\\), \\(\\phi_{0}\\) and \\(\\rho_{03}\\) for expectation values of the timelike and the isospin three-component of the vector and the vector-isovector meson fields. In the mean field approximation the dirac equation for a quark field of flavor \\(q\\equiv(u,\\ d,\\ s)\\) in the bag for the hadron species \\(i\\equiv({\\rm p},\\ {\\rm n},\\ {\\rm\\Lambda},\\ {\\rm\\Sigma}^{+},\\ {\\rm\\Sigma}^{0},\\ {\\rm \\Sigma}^{-},\\ {\\rm\\Xi}^{0},\\ {\\rm\\Xi}^{-},\\ {\\rm K}^{-})\\) is then given by \\[\\left[{\\rm i}\\gamma\\cdot\\partial-m_{q}+\\left(g_{\\sigma}^{q}\\sigma-g_{ \\omega}^{q}\\omega_{0}\\gamma^{0}-g_{\\rho}^{q}I_{3i}\\rho_{03}\\gamma^{0}\\right)\\right.\\] \\[+\\left(g_{\\sigma^{*}}^{q}\\sigma^{*}-g_{\\phi}^{q}\\phi_{0}\\gamma^{ 0}\\right)\\right]\\psi_{qi}(\\vec{r},t)=0. \\tag{1}\\] The normalized ground state is solved as \\[\\psi_{qi}(\\vec{r},t)={\\cal N}_{q}\\exp\\left(-{\\rm i}\\epsilon_{qi}t/R_{i}\\right) \\left(\\begin{array}{c}j_{0}\\left(x_{qi}r/R_{i}\\right)\\\\ {\\rm i}\\beta_{qi}\\vec{\\sigma}\\cdot\\hat{r}j_{1}\\left(x_{qi}r/R_{i}\\right)\\end{array} \\right)\\frac{\\chi_{q}}{\\sqrt{4\\pi}}, \\tag{2}\\]where \\[\\epsilon_{qi\\pm} = \\Omega_{qi}\\pm R_{i}\\left(g_{\\omega}^{q}\\omega_{0}+g_{\\rho}^{q}I_{3i }\\rho_{03}+g_{\\phi}^{q}\\phi_{0}\\right), \\tag{3}\\] \\[\\beta_{qi} = \\sqrt{\\frac{\\Omega_{qi}-R_{i}m_{q}^{*}}{\\Omega_{qi}+R_{i}m_{q}^{*}}},\\] (4) \\[\\Omega_{qi} = \\sqrt{x_{qi}^{2}+\\left(R_{i}m_{q}^{*}\\right)^{2}}, \\tag{5}\\] with \\[m_{q}^{*}=m_{q}-g_{\\sigma}^{q}\\sigma-g_{\\sigma^{*}}^{q}\\sigma^{*}, \\tag{6}\\] the effective mass of quark with flavor \\(q\\); \\(R_{i}\\) is the bag radius of hadron species \\(i\\); \\(I_{3i}\\) is the isospin projection for the hadron species \\(i\\); \\(x_{qi}\\) is the dimensionless quark momentum and it can be determined from the boundary condition on the bag surface by the eigenvalue equation \\[j_{0}(x_{qi})=\\beta_{qi}j_{1}(x_{qi}). \\tag{7}\\] In Eq. (3), \\(+\\) sign is for quarks and \\(-\\) sign is for antiquarks. The MIT bag energy is given as \\[E_{i}^{\\rm bag}=\\sum_{q}n_{q}\\frac{\\Omega_{qi}}{R_{i}}-\\frac{Z_{i}}{R_{i}}+ \\frac{4}{3}\\pi R_{i}^{3}B_{i}(\\sigma,\\ \\sigma^{*}), \\tag{8}\\] where \\(n_{q}\\) is the number of the constituent quarks (antiquarks) \\(q\\) inside the bag; \\(Z_{i}\\) is the zero-point motion parameter of the MIT bag and \\(B_{i}\\) is the bag constant for the hadron \\(i\\). 
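As a concrete illustration of Eqs. (2)-(8), the sketch below (Python with SciPy; the function names are ours, and the numerical example simply reuses the nucleon values \\(R=0.6\\) fm, \\(Z_{N}=2.0314\\) and \\(B_{0}^{1/4}=188.2385\\) MeV quoted later in the text) solves the bag boundary condition \\(j_{0}(x)=\\beta j_{1}(x)\\) for the lowest quark mode by root bracketing and then evaluates the MIT bag energy of Eq. (8) and the centre-of-mass-corrected effective mass of Eqs. (10)-(11).

```python
import numpy as np
from scipy.optimize import brentq

HBARC = 197.327  # MeV fm

def lowest_mode(R, m_eff):
    """Lowest root x_qi of the bag boundary condition j0(x) = beta(x) j1(x), Eq. (7)."""
    def f(x):
        omega = np.sqrt(x**2 + (R * m_eff)**2)                     # Eq. (5)
        beta = np.sqrt((omega - R * m_eff) / (omega + R * m_eff))  # Eq. (4)
        j0 = np.sin(x) / x                                         # spherical Bessel functions
        j1 = np.sin(x) / x**2 - np.cos(x) / x
        return j0 - beta * j1
    return brentq(f, 0.05, np.pi)   # the lowest mode lies below pi (x ~ 2.04 for m_eff = 0)

def bag_energy(R, quark_masses, Z, B):
    """MIT bag energy of Eq. (8): sum_q Omega_q/R - Z/R + (4/3) pi R^3 B."""
    E = -Z / R + 4.0 / 3.0 * np.pi * R**3 * B
    for m in quark_masses:
        x = lowest_mode(R, m)
        E += np.sqrt(x**2 + (R * m)**2) / R
    return E

def effective_mass(R, quark_masses, Z, B):
    """Effective bag mass with the centre-of-mass correction, Eqs. (10)-(11)."""
    E = bag_energy(R, quark_masses, Z, B)
    p2 = sum(lowest_mode(R, m)**2 for m in quark_masses) / R**2
    return np.sqrt(E**2 - p2)

# Free nucleon: three massless quarks with the tabulated R, Z_N and B_0; the result
# comes out close to the vacuum nucleon mass of 939 MeV, a useful consistency check.
R, Z = 0.6, 2.0314
B = (188.2385 / HBARC)**4
print(effective_mass(R, [0.0, 0.0, 0.0], Z, B) * HBARC)
```

With the density-dependent bag constant of Eq. (9) substituted for \\(B\\), the in-medium bag radius then follows from minimizing this effective mass with respect to \\(R_{i}\\), Eq. (12).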
In the MQMC model, the bag constant is affected by the medium effect, and we adopte the following directly coupling form[16]: \\[B_{i}(\\sigma,\\ \\sigma^{*})=B_{0}\\exp\\left[-\\frac{4}{M_{i}}\\left(g_{\\sigma}^{ \\rm bag,i}\\sigma+g_{\\sigma^{*}}^{\\rm bag,i}\\sigma^{*}\\right)\\right], \\tag{9}\\] with \\(M_{i}\\) is the vacuum mass of the bag. After the corrections of spurious center of mass motion, the effective mass of a bag is given by[17] \\[M_{i}^{*}=\\sqrt{E_{i}^{\\rm bag}{}^{2}-\\langle p_{\\rm c.m.}^{2}\\rangle_{i}}, \\tag{10}\\] with \\[\\langle p_{\\rm c.m.}^{2}\\rangle_{i}=\\sum_{q}n_{q}^{i}\\left(x_{qi}/R_{i}\\right) ^{2}, \\tag{11}\\] in which \\(n_{q}^{i}\\) is the number of constituent quark(antiquark) \\(q\\) in hadron \\(i\\). And the radius \\(R_{i}\\) of the bag is determined by minimizing the effective mass, which gives \\[\\frac{\\partial M_{i}^{*}}{\\partial R_{i}}=0. \\tag{12}\\] Assume hadronic matter to consist of the members of the SU(3) baryon octet and the kaon doublet. Baryons interact via (\\(\\sigma,\\ \\omega,\\ \\rho,\\ \\sigma^{*},\\ \\phi\\)) meson exchanges and antikaons are treated in the same footing. Then the total Lagrangian density of the hadronic matter in the MQMC model can be written as \\[{\\cal L}_{\\rm MQMC}= \\sum_{B}\\bar{\\Psi}_{B}\\left[{\\rm i}\\gamma_{\\mu}\\partial^{\\mu}-M _{B}^{*}-\\left(g_{\\omega}^{B}\\omega_{\\mu}\\gamma^{\\mu}\\right.\\right. \\tag{13}\\] \\[\\left.\\left.+g_{\\rho}^{B}\\frac{\\vec{\\tau}_{B}}{2}\\cdot\\vec{\\rho} _{\\mu}\\gamma^{\\mu}+g_{\\phi}^{B}\\phi_{\\mu}\\gamma^{\\mu}\\right)\\right]\\Psi_{B}\\] \\[+\\frac{1}{2}\\left(\\partial_{\\mu}\\sigma\\partial^{\\mu}\\sigma+ \\partial_{\\mu}\\sigma^{*}\\partial^{\\mu}\\sigma^{*}\\right)\\] \\[-\\frac{1}{2}\\left(m_{\\sigma}^{2}\\sigma^{2}+m_{\\sigma^{*}}^{2} \\sigma^{*2}-m_{\\omega}^{2}\\omega_{\\mu}\\omega^{\\mu}\\right.\\] \\[\\left.-m_{\\rho}^{2}\\vec{\\rho}_{\\mu}\\cdot\\vec{\\rho}^{\\mu}-m_{\\phi} ^{2}\\phi_{\\mu}\\phi^{\\mu}\\right)\\] \\[-\\frac{1}{4}\\left(W_{\\mu\ u}W^{\\mu\ u}+\\vec{G}_{\\mu\ u}\\cdot\\vec{ G}^{\\mu\ u}+F_{\\mu\ u}F^{\\mu\ u}\\right)\\] \\[+\\sum_{l}\\bar{\\Psi}_{l}\\left({\\rm i}\\gamma_{\\mu}\\partial^{\\mu}- m_{l}\\right)\\Psi_{l}\\] \\[+{\\cal D}_{\\mu}^{*}K^{*}{\\cal D}^{\\mu}K-{M_{\\rm K}^{*}}^{2}K^{*}K,\\] where the summation on \\(B\\) is over the octet of baryons (p, n, \\(\\Lambda\\), \\(\\Sigma^{+}\\), \\(\\Sigma^{0}\\), \\(\\Sigma^{-}\\), \\(\\Xi^{0}\\), \\(\\Xi^{-}\\)), \\(l\\equiv(e^{-},\\ \\mu^{-})\\) and the isospin doublet for the antikaons is denoted by \\(K^{*}\\equiv(K^{-},\\ \\bar{K}^{0})\\), \\(W_{\\mu\ u}=\\partial_{\\mu}\\omega_{\ u}-\\partial_{\ u}\\omega_{\\mu}\\), \\(\\vec{G}_{\\mu\ u}=\\partial_{\\mu}\\vec{\\rho}_{\ u}-\\partial_{\ u}\\vec{\\rho}_{\\mu}\\), \\(F_{\\mu\ u}=\\partial_{\\mu}\\phi_{\ u}-\\partial_{\ u}\\phi_{\\mu}\\), \\({\\cal D}_{\\mu}=\\partial_{\\mu}+{\\rm i}g_{\\omega}^{\\rm K}\\omega_{\\mu}+{\\rm i}g_{ \\rho}^{\\rm K}\\frac{\\vec{\\tau}_{\\rm K}}{2}\\cdot\\vec{\\rho}_{\\mu}+{\\rm i}g_{ \\phi}^{\\rm K}\\phi_{\\mu}\\). The form of the lagrangian is similar to the usual relativistic mean field Lagrangian[18, 19], except that the effective mass is pre-determined by Eq. (10). The dispersion relation for \\(K^{-}\\) can be easily derived from the equation of motion, it takes \\[\\omega_{\\rm K^{-}}=M_{\\rm K}^{*}-\\left(g_{\\omega}^{\\rm K}\\omega_{0}+g_{\\rho}^{ \\rm K}I_{3K}\\rho_{03}+g_{\\phi}^{\\rm K}\\phi_{0}\\right). 
\\tag{14}\\] For the sake of simplicity, we ignore \\(\\bar{K}^{0}\\) in the present work, and include \\(K^{-}\\) field only because it is the most possible one to be \\(s\\) wave condensation (\\(\\vec{k}_{\\rm K}=0\\)) in dense neutron star matter[18]. From Eqs. (13) and (10), we can derive the equations of motion for the meson fields in uniform static matter: \\[m_{\\sigma}^{2}\\sigma = -\\sum_{B}\\frac{2J_{B}+1}{2\\pi^{2}}\\int_{0}^{k_{B}}k^{2}{\\rm d}k \\frac{M_{B}^{*}}{\\sqrt{k^{2}+M_{B}^{*2}}}\\frac{\\partial M_{B}^{*}}{\\partial \\sigma} \\tag{15}\\] \\[-\\frac{\\partial M_{\\rm K}^{*}}{\\partial\\sigma}\\rho_{\\rm K},\\] \\[m_{\\sigma^{*}}^{2}\\sigma^{*} = -\\sum_{B}\\frac{2J_{B}+1}{2\\pi^{2}}\\int_{0}^{k_{B}}k^{2}{\\rm d}k \\frac{M_{B}^{*}}{\\sqrt{k^{2}+M_{B}^{*2}}}\\frac{\\partial M_{B}^{*}}{\\partial \\sigma^{*}}\\] (16) \\[-\\frac{\\partial M_{\\rm K}^{*}}{\\partial\\sigma^{*}}\\rho_{\\rm K},\\] \\[m_{\\omega}^{2}\\omega_{0} = \\sum_{B}g_{\\omega}^{B}\\left(2J_{B}+1\\right)k_{B}^{3}/\\left(6\\pi^ {2}\\right)-g_{\\omega}^{\\rm K}\\rho_{\\rm K},\\] (17) \\[m_{\\rho}^{2}\\rho_{03} = \\sum_{B}g_{\\rho}^{B}I_{B3}\\left(2J_{B}+1\\right)k_{B}^{3}/\\left(6 \\pi^{2}\\right)-g_{\\rho}^{\\rm K}\\rho_{\\rm K},\\] (18) \\[m_{\\phi}^{2}\\phi_{0} = \\sum_{B}g_{\\phi}^{B}\\left(2J_{B}+1\\right)k_{B}^{3}/\\left(6\\pi^{2} \\right)-g_{\\phi}^{\\rm K}\\rho_{\\rm K}, \\tag{19}\\] where \\(J_{B}\\) and \\(k_{B}\\) are the spin projection and the fermi momentum for baryon \\(B\\), respectively. Using \\[\\frac{\\partial M_{i}^{*}}{\\partial\\sigma}=\\left.\\frac{\\partial M_{i}^{*}}{ \\partial\\sigma}\\right|_{R_{i}}+\\left.\\frac{\\partial M_{i}^{*}}{\\partial R_{i} }\\right|_{\\sigma}\\frac{\\partial R_{i}}{\\partial\\sigma}\\] and Eq. (12), we can give the differentiation of the effective hadron (baryon and kaon) species mass with scalar field \\(\\sigma\\): \\[\\frac{\\partial M_{i}^{*}}{\\partial\\sigma} = \\frac{E_{i}^{\\rm bag}\\frac{\\partial E_{\\rm bag}^{i}}{\\partial \\sigma}-\\frac{1}{2}\\frac{\\partial\\langle p_{c.{\\rm m.}}^{2}\\rangle_{i}}{ \\partial\\sigma}}{M_{i}^{*}}, \\tag{20}\\] \\[\\frac{\\partial E_{i}^{\\rm bag}}{\\partial\\sigma} = \\sum_{q}\\frac{n_{q}}{R_{i}}\\frac{\\partial\\Omega_{qi}}{\\partial \\sigma}+\\frac{4}{3}\\pi R_{i}^{3}\\frac{\\partial B_{i}}{\\partial\\sigma},\\] (21) \\[\\frac{\\partial\\langle p_{c.{\\rm m.}}^{2}\\rangle_{i}}{\\partial\\sigma} = \\frac{2}{R_{i}^{2}}\\sum_{q}n_{q}\\left(\\Omega_{qi}\\frac{\\partial \\Omega_{qi}}{\\partial\\sigma}+R_{i}^{2}g_{\\sigma}^{q}m_{q}^{*}\\right),\\] (22) \\[\\frac{\\partial\\Omega_{qi}}{\\partial\\sigma} = -R_{i}g_{\\sigma}^{q}\\frac{\\Omega_{qi}/2+m_{q}^{*}R_{i}\\left(\\Omega _{qi}-1\\right)}{\\Omega_{qi}\\left(\\Omega_{qi}-1\\right)+m_{q}^{*}R_{i}/2}, \\tag{23}\\] and the differentiation with respect to \\(\\sigma^{*}\\) is likewise. Since the time scale of a star can be regarded as infinite compared to the typical time for weak interaction, which violates the strangeness conservation, the strangeness quantum number is therefore not conserved. While the \\(\\beta\\) equilibrium should be maintained. 
All the \\(\\beta\\) equilibrium conditions involving the baryon octet \\[p+e^{-}\\leftrightarrow n+\ u_{e}, \\Lambda\\leftrightarrow n,\\] \\[\\Sigma^{+}+e^{-}\\leftrightarrow n+\ u_{e}, \\Sigma^{0}\\leftrightarrow n, \\Sigma^{-}\\leftrightarrow n+e^{-}+\\bar{\ u}_{e},\\] \\[\\Xi^{0}\\leftrightarrow n, \\Xi^{-}\\leftrightarrow n+e^{-}+\\bar{\ u}_{e}\\] may be summarized by a single generic equation: \\[\\mu_{B}=\\mu_{n}-q_{B}\\mu_{e}, \\tag{24}\\] where \\(\\mu_{B}\\) and \\(q_{B}\\) are, respectively, the chemical potential and electric charge of baryon \\(B\\) with \\[\\mu_{B}=\\sqrt{k_{B}^{2}+M_{B}^{*2}}+g_{\\omega}^{B}\\omega_{0}+g_{\\phi}^{B}\\phi _{0}+g_{\\rho}^{B}I_{3B}\\rho_{03}. \\tag{25}\\] From the decay modes \\[K^{-}\\leftrightarrow e^{-}+\\bar{\ u}_{e}, \\mu^{-}\\leftrightarrow e^{-}+\\bar{\ u}_{e}+\ u_{\\mu},\\] we know that when the effective energy of \\(K^{-}\\) meson, \\(\\omega_{\\rm K^{-}}\\), equals to its chemical potential, \\(\\mu_{\\rm K^{-}}\\), which in turn is equal to the electrochemical potential \\(\\mu_{e}\\), \\(K^{-}\\) condensation is formed, i.e. \\[\\omega_{\\rm K^{-}}=\\mu_{e}=\\sqrt{k_{e}^{2}+m_{e}^{2}}=\\mu_{\\mu}=\\sqrt{k_{\\mu}^ {2}+m_{\\mu}^{2}}. \\tag{26}\\] Note that the first equal sign in Eq. (26) is only valid when the condensation takes place. And there are two physical constraints on the HP phase left, they are the conservation of baryon-number and electric charge, which are \\[\\rho_{\\rm HP} = \\frac{1}{6\\pi^{2}}\\sum_{B}b_{B}\\left(2J_{B}+1\\right)k_{B}^{3}, \\tag{27}\\] \\[\\rho_{\\rm HP}^{\\rm ch} = \\frac{1}{6\\pi^{2}}\\sum_{B}q_{B}\\left(2J_{B}+1\\right)k_{B}^{3}+ \\frac{1}{3\\pi^{2}}\\sum_{l}q_{l}k_{l}^{3}-\\rho_{\\rm K}.\\] The electric charge neutrality condition for the pure HP phase is \\[\\rho_{\\rm HP}^{\\rm ch}=0. \\tag{29}\\]Therefore, the energy density and pressure for the HP are: \\[{\\cal E}_{\\rm HP} = \\frac{1}{2}\\left(m_{\\sigma}^{2}\\sigma^{2}+m_{\\sigma^{*}}^{2}{ \\sigma^{*}}^{2}+m_{\\omega}^{2}\\omega_{0}^{2}+m_{\\rho}^{2}\\rho_{03}^{2}+m_{\\phi} ^{2}\\phi_{0}^{2}\\right) \\tag{30}\\] \\[+m_{\\rm K}^{*}\\rho_{\\rm K}+\\sum_{B}\\frac{2J_{B}+1}{2\\pi^{2}}\\int_ {0}^{k_{B}}\\sqrt{k^{2}+M_{B}^{*2}}k^{2}{\\rm d}k\\] \\[+\\frac{1}{\\pi^{2}}\\sum_{l}\\int_{0}^{k_{l}}\\sqrt{k^{2}+m_{l}^{2}}k ^{2}{\\rm d}k,\\] \\[{\\cal P}_{\\rm HP} = \\frac{1}{2}\\left(m_{\\omega}^{2}\\omega_{0}^{2}+m_{\\rho}^{2}\\rho_{0 3}^{2}+m_{\\phi}^{2}\\phi_{0}^{2}-m_{\\sigma}^{2}\\sigma^{2}-m_{\\sigma^{*}}^{2} \\sigma^{*2}\\right)\\] (31) \\[+\\sum_{B}\\frac{2J_{B}+1}{6\\pi^{2}}\\int_{0}^{k_{B}}\\frac{k^{4}{ \\rm d}k}{\\left(k^{2}+M_{b}^{*2}\\right)^{1/2}}\\] \\[+\\frac{1}{3\\pi^{2}}\\sum_{l}\\int_{0}^{k_{l}}\\frac{k^{4}{\\rm d}k}{ \\left(k^{2}+m_{l}^{2}\\right)^{1/2}}.\\] We assume the quark matter phase may occur in the core of star in the form of unpair quark matter (UQM) and the spontaneous broken chiral symmetry is restored so the quarks take their current masses. To describe the UQM, the MIT bag model is adopted, where the quarks are confined in a giant bag without dynamic freedom. 
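At zero temperature the hadronic-phase thermodynamics of Eqs. (24)-(31) reduces, species by species, to standard relativistic Fermi integrals plus the meson mean-field terms. A minimal sketch of that bookkeeping is given below (Python with SciPy; the function names and the closing numerical values are ours and purely illustrative, not the self-consistent MQMC solution).

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327  # MeV fm

def fermi_energy_pressure(kF, m, g=2):
    """Zero-temperature Fermi integrals of Eqs. (30)-(31) for one species with
    degeneracy g (= 2J+1 for a spin-1/2 baryon, 2 for a lepton); units of fm^-1."""
    eps = g / (2.0 * np.pi**2) * quad(lambda k: k**2 * np.sqrt(k**2 + m**2), 0.0, kF)[0]
    prs = g / (6.0 * np.pi**2) * quad(lambda k: k**4 / np.sqrt(k**2 + m**2), 0.0, kF)[0]
    return eps, prs

def mu_baryon(kF, M_eff, gw, w0, gphi, phi0, grho, I3, rho03):
    """Baryon chemical potential of Eq. (25)."""
    return np.sqrt(kF**2 + M_eff**2) + gw * w0 + gphi * phi0 + grho * I3 * rho03

def mu_from_beta_equilibrium(mu_n, mu_e, q_B):
    """Generic beta-equilibrium relation of Eq. (24): mu_B = mu_n - q_B * mu_e."""
    return mu_n - q_B * mu_e

# Illustrative check with made-up Fermi momenta: a neutron gas plus electrons.
eps_n, p_n = fermi_energy_pressure(kF=1.7, m=939.0 / HBARC)
eps_e, p_e = fermi_energy_pressure(kF=0.12, m=0.511 / HBARC)
print((eps_n + eps_e) * HBARC, (p_n + p_e) * HBARC)   # MeV fm^-3
```

In the full calculation these pieces are evaluated together with the meson field equations (15)-(19) and the constraints (27)-(29), which must be solved self-consistently at each baryon density.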
If the bag constant for UQM is \\(B\\), the energy, pressure, baryon number and electric charge densities at zero temperature are given by \\[{\\cal E}_{\\rm UQM} = \\frac{3}{\\pi^{2}}\\sum_{q}\\int_{0}^{k_{q}}\\sqrt{k^{2}+m_{q}^{2}}k ^{2}{\\rm d}k \\tag{32}\\] \\[+\\frac{1}{\\pi^{2}}\\sum_{l}\\int_{0}^{k_{l}}\\sqrt{k^{2}+m_{l}^{2}}k ^{2}{\\rm d}k+B,\\] \\[{\\cal P}_{\\rm UQM} = \\frac{1}{\\pi^{2}}\\sum_{q}\\int_{0}^{k_{q}}\\frac{k^{4}{\\rm d}k}{ \\left(k^{2}+m_{q}^{2}\\right)^{1/2}}\\] (33) \\[+\\frac{1}{3\\pi^{2}}\\sum_{l}\\int_{0}^{k_{l}}\\frac{k^{4}{\\rm d}k}{ \\left(k^{2}+m_{l}^{2}\\right)^{1/2}}-B,\\] \\[\\rho_{\\rm UQM} = \\frac{1}{3\\pi^{2}}\\sum_{q}k_{q}^{3},\\] (34) \\[\\rho_{\\rm UQM}^{\\rm ch} = \\frac{1}{\\pi^{2}}\\sum_{q}q_{q}k_{q}^{3}+\\frac{1}{3\\pi^{2}}\\sum_{l }q_{l}k_{l}^{3} \\tag{35}\\] with \\(q_{q}\\) is the electric charge for quark \\(q\\). The exact value of \\(B\\) is not fixed till now, and the phase transition point from HP to UQM depends on its value sensitively, which will be discussed later. Chemical equilibrium among the quark flavors and the leptons is maintained by the following weak reactions: \\[d\\leftrightarrow u+e^{-}+\\bar{\ u}_{e},\\ \\ \\ \\ s\\leftrightarrow u+e^{-}+\\bar{ \ u}_{e},\\ \\ \\ \\ s+u\\leftrightarrow d+u.\\] we can get the equilibrium condition for the pure quark matter phase \\[\\mu_{d}=\\mu_{s}=\\mu_{u}+\\mu_{e}, \\tag{36}\\] where \\[\\mu_{q}=\\sqrt{m_{q}^{2}+k_{q}^{2}} \\tag{37}\\] is the chemical potential for the quark \\(q\\), and can be obtained by the \\(\\beta\\) equilibrium in mixed state. For the state where HP and UQM coexist, i.e. the mixed phase, the quark chemical potentials for a system in chemical equilibrium are related to those for baryon and electron by[20] \\[\\mu_{u} = \\frac{1}{3}\\mu_{n}-\\frac{2}{3}\\mu_{e}, \\tag{38}\\] \\[\\mu_{d} = \\mu_{s}=\\frac{1}{3}\\mu_{n}+\\frac{1}{3}\\mu_{e}. \\tag{39}\\] Global electric charge neutrality condition must be satisfied and the Gibbs construction requires that the pressures of two phases should be equal at zero temperature. If the volume fraction of UQM phase is \\(\\chi\\), then coexisting conditions are \\[\\chi\\rho_{\\rm UQM}^{\\rm ch}+\\left(1-\\chi\\right)\\rho_{\\rm HP}^{ \\rm ch}=0, \\tag{40}\\] \\[{\\cal P}_{\\rm UQM}={\\cal P}_{\\rm HP}. \\tag{41}\\] The energy density and the total baryon-number density read \\[{\\cal E} = \\chi{\\cal E}_{\\rm UQM}+\\left(1-\\chi\\right){\\cal E}_{\\rm HP}, \\tag{42}\\] \\[\\rho = \\chi\\rho_{\\rm UQM}+\\left(1-\\chi\\right)\\rho_{\\rm HP}. \\tag{43}\\] We take \\(m_{u}=m_{d}=0,\\ m_{s}=130\\)MeV[21]. The bag constants and zero-point motion parameters are calibrated to reproduce the mass spectrum and the stable condition Eq. (12) for the MIT-bags in free space. Assuming the nucleon's radius to be 0.6fm,the bag constant \\(B_{0}\\) in vacuum for the nucleon can be fitted together with the mass 939MeV. The result is \\(B_{0}^{1/4}=188.2385\\)MeV. In Table 1, the zero-point motion parameters and bag-radii for baryons and \\(K^{-}\\) are listed. And the mass spectrum for mesons transferring interactions are listed in Table 2. The \\(\\sigma,\\ \\omega\\) and \\(\\rho\\) mesons couple only to the up and down quarks while \\(\\sigma^{*}\\) and \\(\\phi\\) couple to the strange quark. 
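The unpaired-quark-matter EOS of Eqs. (32)-(35) and the Gibbs conditions (40)-(41) translate directly into a few lines of code. The sketch below (Python with SciPy; the quark masses and \\(B^{1/4}=180\\)MeV are the values quoted in the text, while everything else is an illustrative assumption) evaluates the quark sector and the quark volume fraction implied by global charge neutrality.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327  # MeV fm

def uqm_eos(k_q, m_q=(0.0, 0.0, 130.0 / HBARC), B14=180.0 / HBARC):
    """Quark sector of unpaired quark matter at T=0, Eqs. (32)-(35).
    k_q = (k_u, k_d, k_s) Fermi momenta in fm^-1; leptons are omitted here."""
    B = B14**4
    charges = (2.0 / 3.0, -1.0 / 3.0, -1.0 / 3.0)
    eps, prs = B, -B
    for k, m in zip(k_q, m_q):
        eps += 3.0 / np.pi**2 * quad(lambda x: x**2 * np.sqrt(x**2 + m**2), 0.0, k)[0]
        prs += 1.0 / np.pi**2 * quad(lambda x: x**4 / np.sqrt(x**2 + m**2), 0.0, k)[0]
    rho_B = sum(k**3 for k in k_q) / (3.0 * np.pi**2)                    # Eq. (34)
    rho_ch = sum(q * k**3 for q, k in zip(charges, k_q)) / np.pi**2      # quark part of Eq. (35)
    return eps, prs, rho_B, rho_ch

def gibbs_chi(rho_ch_uqm, rho_ch_hp):
    """Quark volume fraction from global charge neutrality, Eq. (40)."""
    return rho_ch_hp / (rho_ch_hp - rho_ch_uqm)

print(uqm_eos((1.5, 1.6, 1.3)))   # illustrative Fermi momenta only
```

In practice one scans the neutron chemical potential, constructs both phases through Eqs. (36)-(39), and accepts the mixed phase wherever the two pressures match.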
We thus set \\[g_{\\sigma}^{s}=g_{\\omega}^{s}=g_{\\rho}^{s}=g_{\\sigma^{*}}^{u}=g_{\\sigma^{*}}^{ d}=g_{\\phi}^{u}=g_{\\phi}^{d}=0.\\] By assuming the SU(6) symmetry of the simple quark model we have the relations[14] \\[\\begin{array}{ll}g_{\\sigma}^{u}=g_{\\sigma}^{d}\\equiv g_{\\sigma}^{u,d},&g_{ \\sigma^{*}}^{s}=\\sqrt{2}g_{\\sigma}^{u,d},\\\\ g_{\\sigma}^{i}=\\left(n_{u}^{i}+n_{d}^{i}\\right)g_{\\sigma}^{u,d},&g_{\\sigma^{*} }^{i}=\\sqrt{2}n_{s}^{i}g_{\\sigma}^{u,d};\\\\ g_{\\omega}^{u}=g_{\\omega}^{d}\\equiv g_{\\omega}^{u,d},&g_{\\phi}^{s}=\\sqrt{2}g _{\\omega}^{u,d},\\\\ g_{\\omega}^{i}=\\left(n_{u}^{i}+n_{d}^{i}\\right)g_{\\omega}^{u,d},&g_{\\phi}^{i}= \\sqrt{2}n_{s}^{i}g_{\\omega}^{u,d};\\\\ g_{\\rho}^{u}=g_{\\rho}^{d}\\equiv g_{\\rho}^{u,d},&g_{\\rho}^{i}=g_{\\rho}^{u,d}; \\\\ g_{\\sigma}^{{\\rm bag},i}=\\frac{1}{3}\\left(n_{u}^{i}+n_{d}^{i}\\right)g_{\\sigma}^ {{\\rm bag},N},&g_{\\sigma^{*}}^{{\\rm bag},i}=\\frac{\\sqrt{2}}{3}n_{s}^{i}g_{ \\sigma}^{{\\rm bag},N}.\\end{array}\\] Then there are only four independent constants of coupling left. Three of them are the couplings between light quarks and nonstrange meson mean fields, i.e. \\(g_{\\sigma}^{u,d}\\), \\(g_{\\omega}^{u,d}\\) and \\(g_{\\rho}^{u,d}\\). The last one is \\(g_{\\sigma}^{{\\rm bag},N}\\) measuring the interaction between the bag constant and the scalar \\(\\sigma\\) mean fields. We adjust them to reproduce the saturation properties of nuclear matter: the symmetric energy index \\(a_{\\rm sym}\\)=32.5MeV, the binding energy \\(E_{b}=-16\\)MeV and the compressibility \\(K\\)=289MeV at the density \\(\\rho_{0}\\)=0.17fm\\({}^{-3}\\). The four independent coupling constants are listed in Table 3. The hadron, lepton and quark population at different baryon-densities in neutron star matter with and without UQM respectively, are shown in Figure 1. The bag constant for UQM is fixed at \\(B^{1/4}\\)=180MeV. Figure 1(a) tells us that when the density reaches 1.6\\(\\rho_{0}\\) mixed phase appears. The critical density obtained here for phase transition from pure hadronic matter to mixed phase is similar to those reported by other models, such as that by FST model in Ref.[6] or the result by QMC model in Ref.[11]. While in the present model hyperons seem to appear more easily than that in QMC model. The reason is that the effective masses of hyperons in MQMC model are lower than that in QMC model because in the MQMC model the bag constants of hadrons keep decreasing as the density increases. When \\(\\rho_{B}\\)=7.8\\(\\rho_{0}\\), the volume of hadronic matter go down to zero and \\begin{table} \\begin{tabular}{c|c c c} \\hline \\hline & M(MeV) & Z & R(fm) \\\\ \\hline N & 939.0 & 2.0314 & 0.6000 \\\\ \\(\\Lambda\\) & 1115.7 & 1.7913 & 0.6472 \\\\ \\(\\Sigma^{+}\\) & 1189.4 & 1.6124 & 0.6731 \\\\ \\(\\Sigma^{0}\\) & 1192.6 & 1.6041 & 0.6742 \\\\ \\(\\Sigma^{-}\\) & 1197.4 & 1.5919 & 0.6758 \\\\ \\(\\Xi^{0}\\) & 1314.8 & 1.4439 & 0.6940 \\\\ \\(\\Xi^{-}\\) & 1321.3 & 1.4262 & 0.6960 \\\\ \\(K^{-}\\) & 493.7 & 1.1632 & 0.3391 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: The zero-point motion parameters \\(Z_{i}\\) and radii \\(R_{i}\\) are obtained to reproduced the mass spectrum in vacuum and Eq. (12). And that \\(B_{0}^{1/4}=188.2385\\)MeV has been fixed by the properties of nucleon. The mass spectrum adopted here is taken from Ref[21]. 
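The SU(6) relations quoted above leave only four independent coupling constants; given those, every hadron-level coupling follows from simple quark counting. A small sketch (Python; the numerical inputs are placeholders, not the fitted values of Table 3) makes the bookkeeping explicit.

```python
import math

# (n_u + n_d, n_s) constituent quark counts for the baryon octet members used in the text.
QUARK_CONTENT = {"N": (3, 0), "Lambda": (2, 1), "Sigma": (2, 1), "Xi": (1, 2)}

def su6_couplings(species, g_sigma_ud, g_omega_ud, g_rho_ud, g_sigma_bag_N):
    """Hadron-level couplings from the four independent quark-level constants via
    the SU(6) counting relations quoted above."""
    n_ud, n_s = QUARK_CONTENT[species]
    return {
        "g_sigma":          n_ud * g_sigma_ud,
        "g_sigma_star":     math.sqrt(2.0) * n_s * g_sigma_ud,
        "g_omega":          n_ud * g_omega_ud,
        "g_phi":            math.sqrt(2.0) * n_s * g_omega_ud,
        "g_rho":            g_rho_ud,
        "g_sigma_bag":      n_ud / 3.0 * g_sigma_bag_N,
        "g_sigma_star_bag": math.sqrt(2.0) / 3.0 * n_s * g_sigma_bag_N,
    }

# Placeholder inputs: the Lambda picks up sigma* and phi couplings through its single s quark.
print(su6_couplings("Lambda", g_sigma_ud=1.0, g_omega_ud=1.0, g_rho_ud=1.0, g_sigma_bag_N=1.0))
```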
\\begin{table} \\begin{tabular}{c|c|c|c|c} \\hline \\hline \\(m_{\\sigma}\\) & \\(m_{\\omega}\\) & \\(m_{\\rho}\\) & \\(m_{\\sigma^{*}}\\) & \\(m_{\\phi}\\) \\\\ \\hline 550 & 783 & 776 & 980 & 1020 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 2: The mass spectrum (in MeV) for the mesons mediating the interactions[21].

the pure UQM exists. In the present work the negatively charged kaon is also taken into account; however, we cannot find K\\({}^{-}\\) in Figure 1(a). To illustrate this fact, let us look at Figure 1(b), which shows the populations of the compositions in the pure HP. It can be seen that K\\({}^{-}\\) begins to condense at a critical density of about 4.4\\(\\rho_{0}\\), which is larger than the critical density for the transition to the mixed phase. Therefore we can conclude that K\\({}^{-}\\) condensation is suppressed by the deconfinement mechanism. The reason is that the presence of UQM lowers the electrochemical potential relative to that in the pure HP and therefore pushes the critical point of condensation to a higher density. Indeed, the critical point has already been pushed into the region where pure UQM exists without any hadrons. Our calculations indicate that for any choice of the bag constant the condensed phase is suppressed once the deconfinement phase transition is attainable, i.e. when \\(B^{1/4}<\\)202.2MeV, as we know from Figure 2. Our result is different from that in Ref.[6], where the kaon condensation appears within the mixed phase at 9.26\\(\\rho_{0}\\) for B\\({}^{1/4}\\)=185MeV. But that condensation point is so high that those authors also had to conclude that K\\({}^{-}\\) condensation would not occur in the neutron star. The relation between the bag constant \\(B\\) and the critical point of deconfinement is shown in Figure 2. The light blue surface represents the pressure as a function of \\(\\rho\\) and \\(B\\) for the HP, and the gray one is for UQM with the conditions (38) and (39). The figure reveals that when \\(B^{1/4}<\\)202.2MeV the two surfaces always intersect as the density increases, which means the system enters the mixed phase at the matching point. When \\(B^{1/4}\\) is greater than about 202.2MeV, no intersection appears at any density possible in the interior of a neutron star, which means that no hadron deconfines, and the behavior of the compositions is as shown in Figure 1(b). For a given \\(B\\), the pressure of the quark matter phase first increases until it reaches a maximum and then drops as the density increases. We found that the maximum is at the critical density for K\\({}^{-}\\) condensation, and therefore for any value of \\(B^{1/4}<\\)202.2MeV the deconfinement point is always lower than that for K\\({}^{-}\\) condensation. Moreover, from the figure we find that the critical density for deconfinement is sensitive to the value of \\(B\\).

Figure 1: The hadron, lepton and quark populations at different baryon-densities in a system composed of (a) HP+UQM with \\(B^{1/4}=\\)180MeV, (b) pure HP.

Figure 2: Pressure as a function of nuclear density and bag constant. The light blue surface is for the pure HP and the gray one is for pure UQM using relations (38) and (39).

The EOSes for pure HP and HP+UQM are shown in Figure 3. For the pure HP, the first turning point corresponds to the appearance of hyperons. When the density reaches 4.4\\(\\rho_{0}\\), K\\({}^{-}\\) begins to condense, and consequently the EOS is softened significantly. In the system of HP+UQM, hyperons are forced to appear at higher densities. 
At low energy density, the EOS for HP+UQM is softer than that of the pure HP because of the deconfinement phase transition. However, after K\\({}^{-}\\) condensation takes place in the pure HP, the situation is reversed, which can be understood from two facts: first, the abundance of hyperons is higher in the pure HP at the same energy density; second, the \\(s\\) wave K\\({}^{-}\\) condensate contributes only to the total energy and not to the pressure because of its zero momentum in the ground state. The radius-mass relationships of static neutron stars obtained by solving the Tolman-Oppenheimer-Volkoff equations[22] are shown in Figure 4(a) for the different equations of state. For all the cases studied here, the maximum masses of the stars are found to lie between 1.45M\\({}_{\\odot}\\) and 1.52M\\({}_{\\odot}\\), which are all larger than the best measured pulsar mass of 1.44M\\({}_{\\odot}\\) in the binary pulsar PSR 1913+16[24]. Furthermore, for the case of HP+UQM, a smaller bag constant gives a lower maximum mass, so EOSes with \\(B^{1/4}\\) below about 180MeV should be ruled out. The gravitational redshifts are plotted in Figure 4(b). A redshift of z=0.35[25], with a total measurement error of order 5%[26], was inferred by identifying three sets of redshifted transitions in the EXO0748-676 spectrum, so it imposes a lower limit of about 0.3325 on the maximum redshift. From the figure, we see that the EOS of pure hadron matter with a condensed K\\({}^{-}\\) phase is ruled out, and EOSes of HP+UQM with \\(B^{1/4}\\) larger than 180MeV would likewise be ruled out, but that with 180MeV is marginally permitted because it produces a maximum redshift of 0.3330. So the value of \\(B^{1/4}\\) is constrained to be about 180MeV by the combined constraint from PSR 1913+16 and EXO0748-676.

Figure 3: EOSes for HP and HP+UQM with different bag constants \\(B\\).

Figure 4: (a) The radii versus masses of neutron stars for different EOSes. The dots show the positions of the maximum masses. The vertical line shows the maximum mass limit from PSR 1913+16[24]. (b) The gravitational redshift versus mass for neutron stars. The two lower horizontal lines denote the observational values for the gravitational redshift of the neutron star 1E 1207.4-5209[23], and the line at 0.3325 is the lower limit for the maximum redshift[25, 26].

Sanwal et al. discussed the absorption lines from the neutron star 1E 1207.4-5209, where a redshift of 0.12\\(\\sim\\)0.23 was obtained if the observed features are identified as atomic transitions of once-ionized helium in a strong magnetic field[23]. With the EOS of HP+UQM with \\(B^{1/4}=180\\)MeV, this identification corresponds to a mass M= 0.82 \\(\\sim\\)1.27M\\({}_{\\odot}\\) and a radius \\(R=11.0\\sim 11.9\\)km, respectively. These values appear to be realistic. In Figure 5, the phase structures of possible hybrid stars are shown, with \\(B^{1/4}\\)=180MeV fixed. When \\(\\rho_{c}<8.4\\rho_{0}\\), the neutron star is stable since \\(\\frac{dM}{d\\mathcal{E}_{c}}>\\)0, where \\(\\mathcal{E}_{c}\\) is the energy density at the center. In a possible hybrid star the density increases going deeper into the interior, and the mixed phase exists within some critical radius. Within the hadronic matter crust the phase transition from normal nuclear matter into hyper-nuclear matter may occur, but the K\\({}^{-}\\) condensation phase is suppressed. 
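The mass-radius curves and surface redshifts of Figure 4 follow from integrating the Tolman-Oppenheimer-Volkoff equations for a given EOS \\(P(\\mathcal{E})\\) and evaluating \\(z=(1-2GM/Rc^{2})^{-1/2}-1\\) at the surface. The sketch below (Python with SciPy, geometrized units with \\(G=c=1\\)) shows the basic procedure; the polytropic toy EOS is only a stand-in for the tabulated MQMC EOS, and the chosen central density is illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy polytrope standing in for a tabulated EOS; replace with an interpolation of
# the MQMC P(E) table for the actual calculation.
K, gamma = 100.0, 2.0
def P_of_E(E): return K * E**gamma
def E_of_P(P): return (P / K)**(1.0 / gamma)

def tov_rhs(r, y):
    """TOV equations: dP/dr and dm/dr in G = c = 1 units."""
    P, m = y
    if P <= 0.0:
        return [0.0, 0.0]
    E = E_of_P(P)
    dPdr = -(E + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * E
    return [dPdr, dmdr]

def solve_star(E_c, r_max=30.0):
    """Integrate outward from a central energy density until the pressure vanishes."""
    P_c = P_of_E(E_c)
    surface = lambda r, y: y[0] - 1e-12 * P_c
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, (1e-6, r_max), [P_c, 0.0], events=surface,
                    max_step=1e-3, rtol=1e-8)
    R, M = sol.t[-1], sol.y[1, -1]
    z = 1.0 / np.sqrt(1.0 - 2.0 * M / R) - 1.0   # surface gravitational redshift
    return R, M, z

print(solve_star(E_c=1.28e-3))   # prints (R, M, z) for this toy configuration
```

Scanning the central density \\(\\mathcal{E}_{c}\\) with the HP+UQM table substituted for the toy EOS traces out the curves of Figure 4.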
For different hybrid stars, the volume fraction of UQM keeps increasing, whereas the pure hadronic matter crust becomes thinner and thinner as the central density rises. In particular, when \\(\\rho_{c}\\) goes up to about 7.3\\(\\rho_{0}\\), a core of pure quark matter comes into being, and the quark core expands further as \\(\\rho_{c}\\) increases. For the neutron star with the maximum mass, an evident quark matter core is present, which is described by the third pattern. The appearance of the pure quark core is notable. Many other works have studied the quark matter phase within the three-flavor NJL model, but all of them are unable to construct a stable hybrid star with a pure quark core[27], and only a star with a mixed phase core is possible[28]. Recently, Ransom et al. inferred that at least one of the stars in Terzan 5 is more massive than 1.48, 1.68, or 1.74 M\\({}_{\\odot}\\) at the 99%, 95%, and 90% confidence levels[29]. When compared with the limit of 1.68M\\({}_{\\odot}\\), all the EOSes with exotic phases presented here would be ruled out. Therefore, if this rather massive star is confirmed, the condensed K\\({}^{-}\\) phase and the deconfined phase in the unpaired state are likely to be excluded from neutron stars. In summary, we have investigated the K\\({}^{-}\\) condensation and the deconfinement phase transition in the framework of the MQMC model. The model predicts a critical density for kaon condensation in the pure HP. When UQM exists, which is only possible for \\(B^{1/4}<\\)202.2MeV, the condensed phase is suppressed. We find that only the EOS of HP+UQM with \\(B^{1/4}\\) of about 180MeV can fit the observational mass of the star PSR 1913+16 and the inferred redshift for EXO0748-676 at the same time. The phase structures of possible hybrid stars with different central densities are discussed; it is found that for the EOS of HP+UQM with \\(B^{1/4}=\\)180MeV a star with a central density higher than 7.3\\(\\rho_{0}\\) has a pure quark core, while a pure hadronic matter star exists when \\(\\rho_{c}<1.6\\rho_{0}\\). Between these two densities, the star is characterized by a crust of hadronic matter and a core of mixed phase. The recently inferred mass of the star Terzan 5 I is also considered, and it is found that the mass limit of 1.68M\\({}_{\\odot}\\) at the 95% confidence level rules out all the EOSes presented here. Therefore, if this rather massive star is confirmed, the condensed K\\({}^{-}\\) phase and the deconfined phase in the unpaired state are unlikely to appear in neutron stars in the present model, and accordingly hadronic matter in the normal state seems to be preferred, as claimed by Ozel[30]. Does this really mean that the ground state of matter is composed of normal nuclear matter without exotic phases? Actually, we should note that in the present model all of the baryon octet is included in the HP, but quarks may be deconfined within matter made of nucleons without hyperons.

Figure 5: Phase structure of hybrid stars with the bag constant fixed at B\\({}^{1/4}\\)=180MeV. The color shows the volume fraction \\(\\chi\\) of quark matter, and that for hadronic matter is \\(1-\\chi\\). Above each star, its properties are marked.

Furthermore, the deconfined quarks are in the unpaired state in the present calculation, where the quark-quark interactions are neglected, but quarks could be in a color superconducting state as well if the attractive interaction in the color antitriplet channel is considered. 
Therefore the possibility of constructing an EOS with exotic phases which satisfies the observational constraints could not be eliminated, which deserves further investigations. **Acknowledgments** The authors would like to thank Prof. Naoki Itoh for his kindly introducing his previous work on strange quark matter. We are grateful to Prof. Qi-Ren Zhang for valuable suggestions. Financial support by the National Natural Science Foundation of China under Grant Nos. 10305001, 10475002 & 10435080 is gratefully acknowledged. ## References * [1] J. M. Lattimer and M. Prakash, Science 304 (2004) 536. * [2] D. B. Kaplan and A. E. Nelson, Phys. Lett. B 175 (1986) 57. * [3] A. R. Bodmer, Phys. Rev. D 4 (1971) 1601. * [4] E. Witten, Phys. Rev. D 30 (1984) 272. * [5] N. Itoh, Prog. Theor. Phys. 44 (1970) 291. * [6] J. f. Gu, H. Guo, X. G. Li, Y. X. Liu and F. R. Xu, Phy. Rev. C 73 (2006) 055803 and references therein. * [7] P. A. M. Guichon, Phys. Lett. B 200 (1988) 235. * [8] K. Saito and A. W. Thomas, Phys. Rev. C 52 (1995) 2789; H. Muller, B. K. Jennings, Nucl. Phys. A 626 (1997) 966; J. C. Caillon and J. Labarsouque, Phys. Lett. B 425 (1998) 13. * [9] P. A. M. Guichon, K. Saito, E. Rodionov, and A. W. Thomas, Nucl. Phys. A 601 (1996) 349; P. G. Blunden and G. A. Miller, Phys. Rev. C 54 (1996) 359; H. Muller, Phys. Rev. C 57 (1998) 1974. * [10] D. P. Menezes, P. K. Panda and C. Providencia, Phy. Rev. C 72 (2005) 035802. * [11] P. K. Panda, D. P. Menezes and C. Providencia, Phy. Rev. C 69 (2004) 025270. * [12] X. Jin and B. K. Jennings, Phys. Lett. B 374 (1996) 13; X. Jin and B. K. Jennings, Phy. Rev. C 54 (1996) 1427; X. Jin and B. K. Jennings, Phy. Rev. C 55 (1997) 1567. * [13] J. Schaffner, C. B. Dover, A. Gal, C. Greiner, D. J. Millener and H. Stocker, Annals of Physics 235 (1994) 35. * [14] S. Pal, M. Hanauske, I. Zakout, H. Stocker and W. Greiner, Phy. Rev. C 60 (1999) 015802. * [15] I. Zakout, W. Greiner, H. R. Jaqaman, Nucl. Phys. A 759 (2005) 201. * [16] I. Zakout, H. R. Jaqaman and W. Greiner, J. Phys. G: Nucl. Part. Phys. 27 (2001) 1939. * [17] S. Fleck, W. Bentz, K. Shimizu and K. Yazaki, Nucl. Phys. A 510 (1990) 731. * [18] S. Pal, Debades Bandyopadhyay, W. Greiner, Nucl. Phys. A 674 (2000) 553. * [19] N. K. Glendenning and Jurgen Schaffner-Bielich, Phy. Rev. C 60 (1999) 025803. * [20] N. K. Glendenning, Phy. Rev. D 46 (1992) 1274. * [21] Particle Data Group, S. Eidelman, K. G. Hayes, K. A. Olive, M. Aguilar-Benitez, C. Amsler, D. Asner, K. S. Babu, R. M. Barnett and J. Beringer, et al., Phys. Lett. B 592 (2004) 1. * [22] R. C. Tolman, Phys. Rev. 55 (1939) 364; J. R. Oppenheimer, G. Volkoff, Phys. Rev. 55 (1939) 374. * [23] D. Sanwal, G. G. Pavlov, V. E. Zavlin and M. A. Teter, Astrophys. J. 574 (2002) L61. * [24] J. M. Weisberg and J. H. Taylor, Radio Pulsars ed M. Bailes, D. J. Nice and S. Thorsett (San Francisco: Astronomical Society of the Pacific) 2003; S. E. Thorsett and D. Chakrabarty, Astrophys. J. 512 (1999) 288. * [25] J. Cottam, F. Paerels and M. Mendez, Nature 420 (2002) 51. * [26] S. Bhattacharyya, M. C. Miller and F. K. Lamb, Astrophys. J. 644 (2006) 1085. * [27] K. Schertler, S. Leupold and J. Schaffner-Bielich, Phys. Rev. C 60 (1999) 025801; M. Baldo, M. Buballab and G. F. Burgioa, et al., Phys. Lett. B 562 (2003) 153; M. Baldo, G. F. Burgio, P. Castorina, S. Plumari and D. Zappala, Phys. Rev. C 75 (2007) 035804. * [28] M. Buballa, F. Neumann, M. Oertel and I. Shovkovy, Phys. Lett. B 595 (2004) 36. * [29] S. M. Ransom, Jason W. T. Hessels and I. H. 
Stairs, et al., Science 307 (2005) 892. * [30] F. Ozel, Nature 441 (2006) 1115.
The K\\({}^{-}\\) condensation and quark deconfinement phase transitions are investigated in the modified quark-meson coupling model. It is shown that K\\({}^{-}\\) condensation is suppressed by quark deconfinement when \\(B^{1/4}<\\)202.2MeV, where \\(B\\) is the bag constant for unpaired quark matter. With the equation of state (EOS) solved self-consistently, we discuss the properties of compact stars. We find that the EOS of pure hadron matter with a condensed K\\({}^{-}\\) phase is ruled out by the redshift of the star EXO0748-676, while an EOS containing an unpaired quark matter phase with \\(B^{1/4}\\) of about 180MeV can be consistent with this observation and with the best measured mass of the star PSR 1913+16. We then examine how the phase structures of possible compact stars with a deconfined phase change as the central density increases. But if the recently inferred massive star in Terzan 5 with M\\(>\\)1.68M\\({}_{\\odot}\\) is confirmed, all the present EOSes with a condensed phase or a deconfined phase would be ruled out, and therefore these exotic phases are unlikely to appear within neutron stars.
Summarize the following text.
arxiv-format/0612172v2.md
# The Metallicity of Stars with Close Companions Daniel Grether1 & Charles H. Lineweaver2 Footnote 1: affiliation: Department of Astrophysics, School of Physics, University of New South Wales, Sydney, NSW 2052, Australia Footnote 2: affiliation: Planetary Science Institute, Research School of Astronomy and Astrophysics & Research School of Earth Sciences, Australian National University, Canberra, ACT, Australia ## 1. Introduction With the detection to date of more than 160 exoplanets using the Doppler technique, the observation of Gonzalez (1997) that giant close-orbiting exoplanets have host stars with relatively high stellar metallicity compared to the average field star has gotten stronger (Reid, 2002; Santos _et al._, 2004; Fischer & Valenti, 2005; Bond _et al._, 2006). To understand the nature of this correlation between high host metallicity and the presence of Doppler-detectable exoplanets, we investigate whether this correlation extends to stellar mass companions. There has been a widely held view that metal-poor stellar populations possess few stellar companions (Batten, 1973; Latham _et al._, 1988; Latham, 2004). This may have been largely due to the difficulty of finding binary stars in the galactic halo, e.g. Gunn & Griffin (1979). Duquennoy & Mayor (1991) investigated the properties of stellar companions amongst Sun-like stars but did not report a relationship between stellar companions and host metallicity. Latham _et al._ (2002) and Carney _et al._ (2005) reported a lower binarity for stars on retrograde Galactic orbits compared to stars on prograde Galactic orbits but found no dependence between binarity and metallicity within those two kinematic groups. Dall _et al._ (2005) speculated that the frequency of host stars with stellar companions may be correlated with metallicity in the same way that host stars with planets are. In this paper we describe and characterize the correlation between host metallicity and the fraction of planetary and stellar companions. In Section 2 we define our sample of close planetary and stellar companions and we describe the variety of techniques used to obtain metallicities of stars that do not have spectroscopic metallicities from Doppler searches. In Section 3 we analyze the distribution of planetary and stellar companions as a function of host metallicity. We confirm and quantify the correlation between planet-hosts and high metallicity and we find a new anti-correlation between the frequency of stellar companions and high metallicity. In Section 4 we compare our stellar companion results to analogous analyses of the Nordstrom _et al._ (2004) and Carney _et al._ (2005) samples. ## 2. The Sample We analyze the distribution of the metallicities of FGK main-sequence stars with close companions (period \\(<5\\) years). For this we use the sample of stars analyzed by Grether & Lineweaver (2006). This subset of 'Sun-like' stars in the Hipparcos catalog, is defined by \\(0.5\\leq B-V\\leq 1.0\\) and \\(5.4(B-V)+2.0\\leq M_{V}\\leq 5.4(B-V)-0.5\\). This forms a parallelogram -0.5 and 2.0 mag, below and above an average main-sequence in the HR diagram. The stars range in spectral type from approximately F7 to K3 and in absolute magnitude in V band from 2.2 to 7.4. From this we define a more complete closer (\\(d<25\\) pc) sample of stars and an independent more distant (\\(25<d<50\\) pc) sample. See Grether & Lineweaver (2006) for additional details about the sample definition. 
### Measuring Stellar Metallicity The metallicity of most of the extrasolar planet hosts have been determined spectroscopically. We analyse the metallicity data from three of these groups: (1) the McDonald observatory (hereafter, McD) group (e.g. Gonzalez _et al._, 2001; Laws _et al._, 2003), (2) the European Southern Observatory (hereafter, ESO) group (e.g. Santos _et al._, 2004, 2005), and (3) the Keck, Lick and Anglo-Australian observatory (hereafter, KLA) group (Fischer & Valenti, 2005; Valenti & Fischer, 2005). All three of these groups find similar metallicities for the extrasolar planet target stars that they have all observed as shown by the comparisons in Fig. 1. Apart from the \\(\\sim 1000\\) KLA target stars analyzed with a consistently high precision by Valenti & Fischer (2005), many nearby (\\(d<50\\) pc) FGK stars lack precise metallicities if they have any published measurement at all. A smaller sample of precise spectroscopic metallicities has also beenpublished by the ESO group for non-planet hosting stars (Santos _et al._, 2005). Since the large sample of KLA stars has been taken from exoplanet target lists it also has the same biases. This includes selection effects (i) against high stellar chromospheric activity (ii) towards more metal-rich stars that have a greater probability of being a planetary host and (iii) against most stars with known close (\\(\\theta<2\\arcsec\\)) stellar companions. We need to correct for or minimize these biases to determine quantitatively not only how the planetary distribution varies with host metallicity but also how the close stellar companion distribution varies with host metallicity, that is, we need metallicities of all stars in our sample in order to compare companion-hosting stars to non-companion-hosting stars, and to compare the metallicities of planet-hosting stars to the metallicities of stellar-companion-hosting stars. In addition to the metallicities reported by the McD, ESO and KLA groups, we use a variety of other sources and techniques to determine stellar metallicity although with somewhat less precision. These include other sources of spectroscopic metallicities such as the Cayrel de Strobel _et al._ (2001) (hereafter, CdS) catalog, metallicities derived from \\(uvby\\) narrow-band photometry or broad-band photometry and metallicities derived from a star's position in the HR diagram. The precision of the spectroscopic metallicity values in the CdS catalog are not well quantified. However, many of the stars in the catalog have several independent metallicity values which we average, excluding obvious outliers. To derive metallicities from \\(uvby\\) narrow-band photometry we apply the calibration of Martell & Smith (2004) to the Hauck & Mermilliod (1998) catalog. We also use values of metallicity derived from broad-band photometry in (Ammons _et al._, 2006). For stars with \\(5.5<M_{V}<7.3\\) (K dwarfs) the relationship between stellar luminosity and metallicity is very tight (Kotoneva _et al._, 2002). Using this relationship, we derive metallicities for some K dwarfs from their position in the HR diagram. To quantify the precision of their metallicities, we compare in Fig. 2 the different methods of determining metallicity. We use the high precision exoplanet target spectroscopic metallicities from the McD, ESO and KLA surveys (or the average when a star has two or more values) as the reference sample. 
We compare these metallicities with metallicities of the following test samples: (1) CdS spectroscopic metallicities, (2) \\(uvby\\) photometric metallicities, (3) broad-band photometric metallicities and (4) HR Diagram K dwarf metallicities. The result of this comparison is that the uncertainties associated with the high quality exoplanet target spectroscopic metallicities of McD, ESO and KLA groups are the smallest, with the CdS spectroscopic metallicities only slightly more uncertain. The uncertainties associated with the \\(uvby\\) photometric metallicities are inter Figure 1.— Exoplanet Target Stars Metallicity Comparison. We compare the spectroscopic exoplanet target metallicities of the McD, ESO and KLA groups. The 59 red dots compare the ESO to the McD values of exoplanet target metallicity that these groups have in common. We find that the ESO values are on average 0.01 dex smaller than the McD values with a dispersion of 0.05 dex. Similarly the 99 green dots compare the KLA values to the average 0.01 dex smaller ESO values with a dispersion of 0.06 dex. The 56 blue dots compare the KLA to the average 0.01 dex larger McD values with a dispersion of 0.06 dex. A solid black line shows the slope-one line with dashed lines at \\(\\pm 0.1\\) dex. The three linear best-fits for these three comparisons are nearly identical to the slope-one line and almost all scatter is contained within 0.1 dex. The relationship between the McD and ESO values is very close with a marginally looser relationship to the KLA values. Thus, these values for exoplanet target metallicity are consistent at the \\(\\sim 0.1\\) dex level. Figure 2.— Metallicity values from exoplanet spectroscopy compared to four other methods of obtaining stellar metallicities. We compare the exoplanet target spectroscopic metallicities (plotted on the \\(x\\) axis as a ‘reference’) with the following test samples plotted on the \\(y\\) axis: (1) CdS spectroscopic metallicities (red dots), (2) \\(uvby\\) photometric metallicities (green dots) (3) broad-band photometric metallicities (blue dots) and (4) HR diagram K dwarf metallicities (aqu dots). The mean differences between the test and the reference sample metallicities ([Fe/H]\\({}_{\\rm eff}\\)\\(-\\)[Fe/H]\\({}_{\\rm ref}\\)) are \\(-0.05\\), \\(-0.08\\), 0.01, and \\(-0.10\\) dex respectively, with dispersions of 0.08, 0.11, 0.14 and 0.14 dex respectively. Comparing these mean differences and dispersion are find that the mean differences are within 1\\(\\sigma\\) of the solid black slope-one line and thus we regard the systematic offsets as marginal. The four linear best-fits for these four comparisons (shown by the four colored lines) do not show significant deviation from the slope-one line (black) except for the metallicities derived using broad-band photometry (dark blue line). mediate with broad-band photometric and HR diagram K dwarf metallicities being the least certain. ### Selection Effects and Completeness To minimize the scatter in the measurement of stellar metallicity while including as many stars in our samples as possible, we choose the metallicity source from one of the five groups based upon minimal dispersion. Thus we primarily use the spectroscopic exoplanet target metallicities. If no such value for metallicity is available for a star in our sample we use a spectroscopic value taken from the CdS catalog, followed by a _uvby_ photometric value, a broad-band value and lastly a HR diagram K dwarf value for the metallicity. 
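The catalog cross-checks and the per-star choice of metallicity source described above reduce to simple bookkeeping: compute the offset and dispersion of each test sample against the spectroscopic reference, then adopt, for every star, the value from the highest-priority (lowest-dispersion) source available. A minimal sketch is given below (Python; the array values, the encoding of the priority list and the outlier clip are illustrative assumptions, not the paper's actual catalog handling).

```python
import numpy as np

def compare_to_reference(feh_ref, feh_test):
    """Mean offset and dispersion of a test metallicity scale against the
    spectroscopic reference sample (the kind of numbers quoted for Fig. 2)."""
    diff = np.asarray(feh_test) - np.asarray(feh_ref)
    return diff.mean(), diff.std(ddof=1)

# Source priority described in the text: exoplanet-target spectroscopy, then CdS
# spectroscopy, then uvby photometry, then broad-band photometry, then the
# HR-diagram K-dwarf relation.
PRIORITY = ["exo_spec", "cds_spec", "uvby_phot", "broadband_phot", "hrd_kdwarf"]

def adopt_metallicity(star):
    """Pick the highest-priority metallicity available for a star, averaging
    multiple values and clipping obvious outliers (the 0.3 dex cut is assumed)."""
    for source in PRIORITY:
        values = star.get(source, [])
        if values:
            v = np.asarray(values, dtype=float)
            keep = np.abs(v - np.median(v)) < 0.3
            return v[keep].mean(), source
    return None, None

# Placeholder example star with two CdS values and one uvby value.
star = {"cds_spec": [0.12, 0.08], "uvby_phot": [0.02]}
print(adopt_metallicity(star))
```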
We use the dispersions discussed above as estimates of the uncertainties of the metallicity measurements. Almost all (\\(453/464=98\\%\\)) of the close (\\(d<25\\) pc) sample and \\(2745/2832=97\\%\\) of the more distant (\\(25<d<50\\) pc) sample thus have a value for metallicity. Given the different uncertainties associated with the five sources of metallicity, the close sample has more precise values of metallicity than the more distant sample. The dispersion for the close and far sample are 0.07 and 0.10 dex respectively. See Table 1 for details. We also investigate the color or host mass dependence of the host metallicity distributions. Thus we split our close and far samples which are defined by \\(0.5\\leq B-V\\leq 1.0\\) into 2 groups, those with \\(0.5\\leq B-V\\leq 0.75\\) which we call FG dwarfs and those with \\(0.75<B-V\\leq 1.0\\) which we call K dwarfs. This split is shown in Table 1. In this table we also show the total number of stars in the sample that have a known value of metallicity and the fraction that are close binaries, exoplanet target stars and exoplanet hosts. In order to determine whether there is a real physical correlation between the presence of stellar or planetary companions and host metallicity we need to show that there are only negligible selection effects associated with the detection and measurement of these two quantities that could cause a spurious correlation. In Section 2.3 we show that the planetary companion fraction should be complete for planetary companions with periods less than 5 years for the sample of target stars that are being monitored for exoplanets. This completeness helps assure minimal spurious correlation between the probability of detecting planetary companions and host metallicity. The stellar companion sample is made up of two subsamples: those companions detected as part of an exoplanet survey and those that were not. The target list for exoplanets is biased against stellar binarity as discussed in Grether & Lineweaver (2006). We show that there is negligible bias between the probability of detecting stellar companions and host metallicity in two ways: (1) by showing that our close sample of stellar companions is nearly complete and (2) by using the Geneva-Copenhagen survey (hereafter, GC) of the Solar neighbourhood (Nordstrom _et al._ 2004) sample of stars, containing similar types of stars to those found in our sample, as an independent check on our results. The GC sample of stars which contains F0-K3 stars is expected to be complete for stars with stellar companions closer than \\(d<40\\) pc. For the close sample of stars (\\(d<25\\) pc), the northern hemisphere of stars with close stellar companions is approximately complete. The \\begin{table} \\begin{tabular}{l c c c c c c c} \\hline \\hline & & Range & & & & Stars with [Fe/H] Measurements & \\\\ \\multicolumn{1}{c}{Sample} & \\(B-V\\) & (pc) & Totala & Binaryb & Targetsc & Planet Hostsd & [Fe/H] Source \\\\ \\hline Our FGK & \\(0.5-1.0\\) & \\(d<25\\) & 453 & 45 (9.9\\%) & 379 (84\\%) & 19 (5.0\\%) & Mostly Spec.a \\\\ & \\(0.5-1.0\\) & \\(25<d<50\\) & 2745 & 107 (3.9\\%) & 1597 (58\\%) & 36 (2.3\\%) & Mostly Phot.f \\\\ Our FG & \\(0.5-0.75\\) & \\(d<25\\) & 257 & 27 (10.5\\%) & 228 (89\\%) & 13 (5.7\\%) & Mostly Spec.g [FOOTNOTE:g]Footnote g: 63% Hip Spec., 18% CdS Spec., 19% Web Phot. & 19% Web Phot. 
southern hemisphere of stars is also nearly complete if we include the binary stars from Jones _et al._ (2002) that are likely to fall within our sample (Grether & Lineweaver, 2006). We then find that \\(\\sim 10\\%\\) of stars have stellar companions with periods shorter than 5 years. If we make a small asymmetry correction (to account for the southern hemisphere not being as well monitored for binaries) we find that \\(\\sim 11\\pm 3\\%\\) of stars have stellar companions within this period range (Grether & Lineweaver, 2006). We also compare our sample with that of the \"Carney-Latham\" survey (hereafter, CL) of proper-motion stars (Carney & Latham, 1987; Carney _et al._, 1994) in Section 4. The CL sample also contains \\(\\sim 11\\%\\) of stars with stellar companions with periods shorter than 5 years (Latham _et al._, 2002). We tabulate the properties of all these samples in Table 1. ### Close Companions The close companions included in our \\(d<25\\) pc and \\(25<d<50\\) pc samples are enclosed in a rectangle of mass-period space shown in Fig. 3. These companions have primarily been detected using the Doppler technique but the stellar companions have been detected with a variety of techniques not exclusively from high precision exoplanet Doppler surveys. Thus we need to consider the selection effects of the Doppler method in order to define a less-biased sample of companions (Lineweaver & Grether, 2003). Given a fixed number of targets, the \"Detected\" region should contain all companions that will be found for this region of mass-period space. The \"Being Detected\" region should contain some but not all companions that will be found in this region and the \"Not Detected\" region contains no companions since the current Doppler surveys are either not sensitive enough or have not been observing for a long enough duration to detect companions in this regime. Thus as a consequence of the exoplanet surveys' limited monitoring duration and sensitivity for our sample we only select those companions with an orbital period \\(P<5\\) years and mass \\(M_{2}>0.001M_{\\odot}\\). In Grether & Lineweaver (2006) we found that companions with a minimum mass in the brown dwarf mass regime were likely to be low mass stellar companions seen face on, thus producing a very dry brown dwarf desert. We also included the 14 stellar companions from Jones _et al._ (2002) that have no published orbital solutions but are assumed to orbit within periods of 5 years. We find one new planet and no new stars in our less biased rectangle when compared with the data used in Grether & Lineweaver (2006). This new planet HD 20782 (HIP 15527) (indicated by a vertical line through the point in Fig. 3), has been monitored for well over 5 years but only has a period of \\(\\sim 1.6\\) years and a minimum mass of \\(1.8M_{\\rm Jup}\\) placing it just between the \"Detected\" and \"Being Detected\" regions. While most planets are detected within a time frame comparable to the period, the time needed to detect this planet was much longer than its period because of its unusually high eccentricity of 0.92 (Jones _et al._, 2006). We thus have two groups of close companions to analyse as a function of host metallicity - giant planets and stars. In Fig. 
3 we split the close companion sample into 3 groups defined by the metallicity of their host star: metal-poor ([Fe/H] \\(<-0.1\\)), Sun-like (\\(-0.1\\leq\\) [Fe/H] \\(\\leq\\) 0.1) and metal-rich ([Fe/H] \\(>\\) 0.1) which are plotted as white, grey and black dots respectively.

\\begin{table} \\begin{tabular}{l c c c c c} \\hline \\hline \\multicolumn{1}{c}{ Companions} & Range & Total & Metal-poor & Sun-like & Metal-rich \\\\ \\hline Planets & \\(d<25\\) & 19 & 2 (11\\%) & 5 (26\\%) & 12 (63\\%) \\\\ Stars & \\(d<25\\) & 45a & 25 (56\\%) & 14 (31\\%) & 6 (13\\%) \\\\ Planets & \\(25<d<50\\) & 36 & 3 (8\\%) & 9 (25\\%) & 24 (67\\%) \\\\ Stars & \\(25<d<50\\) & 107b & 55 (51\\%) & 36 (34\\%) & 16 (15\\%) \\\\ \\hline \\end{tabular} \\end{table} Table 2. Metallicity and Frequency of Hosts with Close Planetary and Stellar Companions

Figure 3.— Masses and periods of close companions to stellar hosts of FGK spectral type. We split the close companion sample into 3 groups defined by the metallicity of their host star: metal-poor ([Fe/H] \\(<-0.1\\)), Sun-like (\\(-0.1\\leq\\) [Fe/H] \\(\\leq\\) 0.1) and metal-rich ([Fe/H] \\(>\\) 0.1) which are plotted as white, grey and black dots respectively. The larger points are companions orbiting stars in the more complete \\(d<25\\) pc sample, while the smaller points are companions to stars at distances between \\(25<d<50\\) pc. We divide the stellar companions into those not monitored by one of the exoplanet search programs (shown with an ‘X’ behind the point) and those that are monitored. Both groups of stellar companions are distributed over the entire less-biased region (enclosed by thick line). Hence any missing stellar companions should be randomly distributed. For multiple companion systems, we select the most massive companion in our less-biased sample to represent the system.

Fig. 3 suggests that the hosts of planetary companions are generally metal-rich whereas the hosts of stellar companions are generally metal-poor. Table 2 and Fig. 4 confirm the correlation between exoplanets and high-metallicity and indicate an anti-correlation between stellar companions and high metallicity.

## 3. Close Companion - Host Metallicity Correlation

We examine the distribution of close companions as a function of stellar host metallicity in our two samples. We do this quantitatively by fitting power-law and exponential best-fits to the metallicity data expressed both linearly and logarithmically. We define the logarithmic [Fe/H] and linear Z/Z\\({}_{\\odot}\\) metallicity as follows: \\[{\\rm[Fe/H]}=\\log({\\rm Fe/H})-\\log({\\rm Fe/H})_{\\odot}=\\log({\\rm Z/Z}_{\\odot}) \\tag{1}\\] where Fe and H are the number of iron and hydrogen atoms respectively and Z = Fe/H. We examine the close planetary companion probability \\(P_{\\rm planet}\\) and the close stellar companion probability \\(P_{\\rm star}\\) as a function of [Fe/H] in Figs. 4 and 6 for the \\(d<25\\) and \\(25<d<50\\) pc samples respectively. Similarly we also examine \\(P_{\\rm planet}\\) and \\(P_{\\rm star}\\) as a function of Z/Z\\({}_{\\odot}\\) in Figs. 5 and 7, which is effectively just a re-binning of the data in Fig. 4 and Fig. 6. We then find the linear best-fits to the planetary and stellar companion fraction distributions as shown by the dashed lines in Figs. 4-7. We also fit an exponential to the [Fe/H] planetary (as in Fischer & Valenti 2005) and stellar companion fraction distributions in Figs. 
4 and 6 and equivalently a power-law to the data points for the Z/Z\\({}_{\\odot}\\) plots, Figs. 5 and 7. The two linear parameterizations that we fit to the data are: \\[P_{\\rm lin~{}Fe/H} = a[{\\rm Fe/H}]+P_{\\odot} \\tag{2}\\] \\[P_{\\rm lin~{}Z/Z_{\\odot}} = A({\\rm Z/Z_{\\odot}})+(P_{\\odot}-A) \\tag{3}\\] and the two non-linear parameterizations are: \\[P_{\\rm EP} = P_{\\odot}10^{\\alpha~{}[{\\rm Fe/H}]} \\tag{4}\\] \\[=P_{\\odot}({\\rm Z}/{\\rm Z}_{\\odot})^{\\alpha} \\tag{5}\\] where \\(P_{\\odot}\\) is the fraction of stars of solar metallicity (i.e. \\({\\rm[Fe/H]}=0\\) and \\({\\rm Z}/{\\rm Z}_{\\odot}=1\\)) with companions. If the fits for the parameters \\(a,A\\) and \\(\\alpha\\) are consistent with zero then there is no correlation between the fraction of stars with companions and metallicity. On the other hand, a non-zero value, several sigma away from zero suggests a significant correlation (\\(a,A\\) or \\(\\alpha>0\\)) or anti-correlation (\\(a,A\\) or \\(\\alpha<0\\)). The best-fit parameters \\(a,A\\) and \\(P_{\\odot}\\) (but not \\(\\alpha\\)) depend upon the period range and completeness of the sample. In order to compare the slopes from different samples, we parametrize this dependence in terms of the average companion fraction \\(P_{\\rm avg}\\) for the sample, i.e., if the average companion fraction for a sample is twice as large as for another sample, the best-fit slopes \\(a\\) and \\(A\\) as well as the fraction of stars of solar metallicity \\(P_{\\odot}\\) will also be twice as large. To compare samples with different \\(P_{\\rm avg}\\), we scale the best-fit Eqs. 2-5 to a common average companion fraction by dividing each equation by \\(P_{\\rm avg}\\). Thus, we scale the best-fit parameters \\(a,A\\) and \\(P_{\\odot}\\) by dividing each by \\(P_{\\rm avg}\\). These scaled parameters are then referred to as \\(a^{\\prime}=a/P_{\\rm avg}\\), \\(A^{\\prime}=A/P_{\\rm avg}\\) and \\(P^{\\prime}_{\\odot}=P_{\\odot}/P_{\\rm avg}\\). We list the unscaled best-fit parameters \\(a,A,P_{\\odot},\\alpha\\) along with \\(P_{\\rm avg}\\) for each sample in Table 3.

Figure 4.— Metallicities ([Fe/H]) of 453 stars in our close \\(d<25\\) pc sample. _Top:_ The lightest shade of grey are the Hipparcos ‘Sun-like’ stars in our close sample. The next darker shade of grey are all of the stars that are exoplanet targets. This is followed by a still darker shade of grey which are hosts of stellar companions. The darkest shade of grey are exoplanet hosts. In the _Bottom_ plot, the fraction of target stars, stellar companion hosts and planetary companion hosts are shown by squares, triangles and circles respectively. The linear best-fit to the target fraction is shown by a dotted line. The linear and exponential best-fits to the stellar and planetary companion fractions are shown by dashed and solid lines respectively.

Figure 5.— Same as Fig. 4 except that metallicity is plotted linearly as Z/Z\\({}_{\\odot}\\). All of the metal-rich (Z/Z\\({}_{\\odot}>1.8\\)) sample stars are being monitored for exoplanets but as the stellar metallicity decreases so does the fraction being monitored. This is because of a bias towards selecting more metal-rich target stars for observation due to an increased probability of planetary companions orbiting metal-rich host stars. The linear best-fit to the target fraction is shown by a dotted line. The linear and power-law best-fits to the stellar and planetary companion fractions are shown by dashed and solid lines respectively. 
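In practice the fits of Eqs. (2)-(5) are weighted least-squares fits to binned companion fractions, with the fraction in each metallicity bin taken as (companion hosts)/(target stars) and given a simple binomial uncertainty. A schematic version is sketched below (Python with SciPy; the binned numbers are placeholders rather than the values plotted in Figs. 4-7, and the error recipe is our simplified assumption).

```python
import numpy as np
from scipy.optimize import curve_fit

def companion_fraction(n_hosts, n_targets):
    """Per-bin fraction P = hosts/targets with a crude binomial error bar."""
    p = n_hosts / n_targets
    err = np.sqrt(np.maximum(p * (1.0 - p), 1.0 / n_targets) / n_targets)
    return p, err

def p_lin(feh, a, p_sun):        # Eq. (2)
    return a * feh + p_sun

def p_exp(feh, alpha, p_sun):    # Eq. (4); Eq. (5) is the same law written in Z/Zsun
    return p_sun * 10.0**(alpha * feh)

# Placeholder bins: [Fe/H] bin centers, companion hosts and exoplanet targets per bin.
feh = np.array([-0.45, -0.3, -0.15, 0.0, 0.15, 0.3])
hosts = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 6.0])
targets = np.array([40.0, 55.0, 70.0, 80.0, 75.0, 50.0])
p, perr = companion_fraction(hosts, targets)

(a, p0_lin), _ = curve_fit(p_lin, feh, p, sigma=perr, absolute_sigma=True)
(alpha, p0_exp), _ = curve_fit(p_exp, feh, p, sigma=perr, absolute_sigma=True, p0=(1.0, 0.03))
chi2_red = np.sum(((p - p_exp(feh, alpha, p0_exp)) / perr)**2) / (len(feh) - 2)
print(f"a={a:.2f}, alpha={alpha:.2f}, P_sun={p0_exp:.3f}, chi2_red={chi2_red:.2f}")
```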
The parameters \\(a^{\\prime}=a/P_{\\rm avg}\\) and \\(\\alpha\\) of the different samples are compared in Fig. 10. We consistently find in Figs. 4 and 6 that metal-rich stars are being monitored more extensively for exoplanets than metal-poor stars as quantified by the \"Target Stars\" / \"Hipparcos Sun-like Stars\" ratio. This is because of a bias towards selecting more metal-rich stars for observation due to an increased probability of planetary companions orbiting metal-rich host stars. Note that this bias is well-represented by a linear trend as shown by the dotted best-fit line in these figures, and is not just a case of a few high metallicity stars being added to the highest metallicity bins. We correct for this bias by calculating \"P/T\" not \"P/H\" for each metallicity bin. We find a correlation between \\({\\rm[Fe/H]}\\) and the presence of planetary companions in Fig. 4. The linear best-fit (Eq. 2) has a gradient of \\(a=0.18\\pm 0.07\\) (\\(\\chi^{2}_{\\rm red}=1.21\\)) and thus the correlation is significant at the \\(2\\sigma\\) level. The non-linear best-fit (Eq. 4) is \\(\\alpha=2.09\\pm 0.54\\) (\\(\\chi^{2}_{\\rm red}=0.16\\)) and thus the correlation is significant at slightly more than the \\(3\\sigma\\) level. Similarly we find a correlation between linear metallicity \\({\\rm Z}/{\\rm Z}_{\\odot}\\) and the presence of planetary companions in the same data re-binned in Fig. 5. The linear best-fit (Eq. 3) has a gradient of \\(A=0.07\\pm 0.03\\) (\\(\\chi^{2}_{\\rm red}=1.25\\)) and the non-linear best-fit (Eq. 5) has an exponent of \\(\\alpha=2.22\\pm 0.39\\) (\\(\\chi^{2}_{\\rm red}=1.00\\)) which are non-zero at the \\(\\sim 2\\sigma\\) and \\(\\sim 5\\sigma\\) significance levels respectively. These results are summarized in Table 3. We can compare the non-linear best-fit (Eq. 5) for linear metallicity \\({\\rm Z}/{\\rm Z}_{\\odot}\\) and the non-linear best-fit (Eq. 4) for log metallicity \\({\\rm[Fe/H]}\\) since both contain the parameter \\(\\alpha\\). As shown by the \\(\\chi^{2}\\) per degree of freedom \\(\\chi^{2}_{\\rm red}\\), the non-linear goodness of fit is better than the linear goodness of fit. We rely on the best fitting functional form which is the non-linear parameterization of our results although we use both parameterizations in our analysis. We combine these two non-independent, non-linear best-fit estimates by computing their weighted average. We assign an error to this average by adding in quadrature (1) the difference between the two estimates and (2) the nominal error on the average. Thus our best estimate is \\(\\alpha=2.2\\pm 0.5\\). Hence the correlation between the presence of planetary companions and host metallicity is significant at the \\(\\sim 4\\sigma\\) level for a non-linear best-fit and at the \\(\\sim 2\\sigma\\) level with a lower goodness-of-fit for a linear best-fit in our close, most complete sample. In Fig. 4, and its re-binned equivalent Fig. 5, we find an anti-correlation between the presence of stellar companions and host metallicity. The linear stellar companion best-fits have gradients of \\(a=-0.14\\pm 0.06\\) (\\(\\chi^{2}_{\\rm red}=3.00\\)) and \\(A=-0.06\\pm 0.03\\) (\\(\\chi^{2}_{\\rm red}=0.91\\)) respectively, both significant at the \\(\\sim 2\\sigma\\) level. The non-linear best-fit to the stellar companions as a function of \\({\\rm[Fe/H]}\\) in Fig. 
4 is \\(\\alpha=-0.86\\pm 0.10\\) (\\(\\chi^{2}_{\\rm red}=1.33\\)) and the non-linear best-fit to the stellar companions as a function of \\({\\rm Z}/{\\rm Z}_{\\odot}\\) in Fig. 5 is \\(\\alpha=-0.47\\pm 0.18\\) (\\(\\chi^{2}_{\\rm red}=0.40\\)). Averaging these two as above we obtain, \\(-0.8\\pm 0.4\\), which is significant at the \\(\\sim 2\\sigma\\) level. All these best-fits are Figure 6.— Same as Fig. 4 except for the 2745 stars in the more distant \\(25<d<50\\) pc sample. It is harder to detect distant planets because of signal to noise considerations which limit observations to the brighter stars. This fainter, more distant sample relies more on photometric metallicity determinations than does the closer, brighter sample which has predominantly spectroscopic metallicity determinations (see Table 1). The fraction of stars being monitored for exoplanets is much lower than in Fig. 4. Figure 7.— Same as Fig. 6 except that metallicity is plotted linearly as \\({\\rm Z}/{\\rm Z}_{\\odot}\\) analogous to Fig. 5. In this more distant sample we find the same trends as in Fig. 5 but they are not as prominent. summarized in Table 3. Having found a correlation for planetary companions and an anti-correlation for stellar companions in our close sample and having found them to be robust to different metallicity binnings, we perform various other checks to confirm their reality. We check the robustness of both results to (i) distance and (ii) spectral type (\\(\\sim\\) mass) of the host star. To check if these anti-correlations have a distance dependence, we repeat this analysis for the less complete \\(25<d<50\\) pc sample. As shown by the best-fits in Figs. 6 and 7 and summarized in Table 3 we find only a marginal anti-correlation between the presence of stellar companions and host metallicity for the linear best-fits. The non-linear best-fits however still suggest an anti-correlation with \\(\\alpha=-0.59\\pm 0.12\\) for log metallicity [Fe/H] and \\(\\alpha=-0.44\\pm 0.12\\) for linear metallicity Z/Z\\({}_{\\odot}\\), which are significant at the \\(4\\sigma\\) and \\(3\\sigma\\) levels respectively. Combining these two estimates as described above we find \\(\\alpha=-0.5\\pm 0.2\\) significant at the \\(2\\sigma\\) level in the \\(25<d<50\\) pc sample. The correlation between the presence of planetary companions and host metallicity for the less complete \\(25<d<50\\) pc is significant at the \\(4\\sigma\\) and \\(3\\sigma\\) levels for the linear best-fits \\(a\\) and \\(A\\) respectively. The non-linear best-fit correlation has \\(\\alpha=2.56\\pm 0.45\\) for log metallicity [Fe/H] and \\(\\alpha=3.00\\pm 0.46\\) for linear metallicity Z/Z\\({}_{\\odot}\\) which are significant at the \\(5\\sigma\\) and \\(6\\sigma\\) levels respectively. Combining these two estimates we find the weighted average as above of \\(\\alpha=2.8\\pm 0.6\\) significant at the \\(4\\sigma\\) level. Having found the correlation for planetary companions and the anti-correlation for stellar companions robust to binning but less robust in the less-complete more distant sample, we test for spectral type (\\(\\sim\\) host mass) dependence. We split our sample into bluer and redder subsamples to investigate the effect of spectral type on the close-companion/host-metallicity relationship. We define the bluer subsample by \\(B-V\\leq 0.75\\) (FG spectral type stars) and the redder subsample by \\(B-V>0.75\\) (K spectral type stars). 
Since \\(B-V\\) has a metallicity dependence, a cut in \\(B-V\\) will not be a true mass cut, but a diagonal cut in mass vs metallicity. Thus, interpreting a \\(B-V\\) cut as a pure cut in mass introduces a spurious anti-correlation between mass and metallicity. The linear best-fit to the stellar companions of the FG sample (\\(d<25\\) pc) has a normalised gradient of \\(a^{\\prime}=(-0.01\\pm 0.08)/10.5\\%=-0.1\\pm 0.8\\) and the non-linear best-fit is \\(\\alpha=-0.2\\pm 0.4\\) as shown in Fig. 8. Both of these best-fits are consistent with the frequency of stellar companions being independent of host metallicity. The linear best-fit to the stellar companions of the K sample has a gradient of \\(a^{\\prime}=(-0.27\\pm 0.07)/9.2\\%=2.9\\pm 0.8\\) and the non-linear best-fit is \\(\\alpha=-1.0\\pm 0.1\\) as shown in Fig. Figure 8.— Same as Fig. 4 for the stars in our close \\(d<25\\) pc sample but only for FG dwarfs (\\(B-V\\leq 0.75\\)). All stars have known metallicity in this sample. There is no apparent anti-correlation between metallicity and the presence of stellar companions. Figure 9.— Same as Fig. 4 for the stars in our close \\(d<25\\) pc sample but only for K dwarfs (\\(B-V>0.75\\)). This plot shows a strong anti-correlation between metallicity and the presence of stellar companions. 9 for the close \\(d<25\\) pc stars. Both of these best-fits show an anti-correlation between the presence of stellar companions and host metallicity at above the \\(3\\sigma\\) level. Less significant results are obtained for the \\(25<d<50\\) FG and K spectral type samples with stellar companions as shown in Table 3. These results suggest that the observed anti-correlation between close binarity and host metallicity is either (i) real and stronger for K spectral type stars than for FG stars or (ii) due to a spectral-type dependent selection effect. Under the hypothesis that the anti-correlation between host metallicity and binarity is real for K dwarfs, there is a possible selection effect limited to F and G stars that could explain why we do not see the anti-correlation as strongly in them. Doppler broadening of the line profile, due to random thermal motion in the stellar atmosphere and stellar rotation, both increase in more massive F and G stars due to their higher effective temperature and faster rotation speeds compared with less massive K stars. This wider line profile for F and G stars results in fewer observable shifting lines thus lowering the spectroscopic binary detection efficiency. However we directly examine the stellar companion fraction as a function of spectral type or color \\(B-V\\) in Fig. 11. For both single-lined and double-lined spectroscopic binaries, if the binary detection efficiency was systemically higher for K dwarfs then the anti-correlation could be a selection effect. However we find that it is fairly independent of spectral type. Thus the anti-correlation does not appear to be a spectral-type dependent selection effect. We also examine the spectral type (\\(\\sim\\) mass) dependence of the correlation between planetary companions and host metallicity. The linear best-fit to the planetary companions of the FG sample has a gradient of \\(a^{\\prime}=(0.22\\pm 0.09)/5.7\\%=3.9\\pm 1.6\\) and the non-linear best-fit is \\(\\alpha=2.3\\pm 0.6\\) as shown in Fig. 8 for the close \\(d<25\\) pc stars. These are significant at the 2 and 3 \\(\\sigma\\) levels respectively. 
The linear best-fit to the planetary companions of the K sample has a gradient of \(a^{\prime}=(0.11\pm 0.10)/4.0\%=2.8\pm 2.5\) and the non-linear best-fit is \(\alpha=1.6\pm 0.9\) as shown in Fig. 9 for the close \(d<25\) pc stars. These are both significant at between the 1 and 2 \(\sigma\) levels. The K sample contains fewer planetary and stellar companions compared to the FG sample. Both the linear and non-linear fits are consistent between the FG and K samples, suggesting that the correlation between the presence of planetary companions and host metallicity is independent of spectral type and consequently host mass. The fraction of planetary companions is also fairly independent of spectral type as shown in Fig. 11. Thus our results suggest that the correlation between the presence of planetary companions and host metallicity is significant at the \(\sim 4\sigma\) level and that the anti-correlation between the presence of stellar companions and host metallicity is significant at the \(\sim 2\sigma\) level for the \(d<25\) pc FGK sample. Splitting both samples into FG and K spectral type stars suggests that the correlation between the presence of planetary companions and host metallicity is independent of spectral type, but that the anti-correlation between the presence of stellar companions and host metallicity is a strong function of spectral type, with the anti-correlation disappearing for the bluer FG host stars (see Fig. 10). We find no spectral-type dependent binary detection efficiency bias that can explain this anti-correlation.
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & Range & \multicolumn{5}{c}{Linear} & \multicolumn{3}{c}{Non-Linear} \\ \multicolumn{1}{c}{Sample} & (pc) & Type & Figure & Companions & \(a\) or \(A\) & \(P_{\odot}\) [\%] & \(\alpha\) & \(P_{\odot}\) [\%] & \(P_{\rm avg}\)\({}^{\rm b}\) \\ \hline \end{tabular} \end{table} Table 3Best-fit parameters \(a\), \(A\), \(P_{\odot}\) and \(\alpha\), along with \(P_{\rm avg}\), for each sample (Eqs. 2-5). Footnote b: “\(P_{\rm avg}\)” is defined as the number of stars with stellar companions divided by the total number of stars. We use this parameter to scale \(a\), \(A\) and \(P_{\odot}\), which are then referred to as \(a^{\prime}=a/P_{\rm avg}\), \(A^{\prime}=A/P_{\rm avg}\) and \(P^{\prime}_{\odot}=P_{\odot}/P_{\rm avg}\).
Figure 10.— We compare the linear \(a^{\prime}\) (triangles) and non-linear \(\alpha\) (circles) parameterizations for the various samples listed in Table 3. The red points are the best-fits to planetary companions and the green points the best-fits to stellar companions. The fact that the red planet values for \(\alpha\) are significantly larger than zero confirms and quantifies the metallicity/planet correlation. The fact that the green stellar values for \(\alpha\) are predominantly less than zero, significantly so only for K dwarfs, is a surprising new result. The labels on the RHS refer to the samples for which the best-fit parameterizations are valid. We normalize the linear parametrization by dividing the best-fit gradient \(a\) by the average companion fraction \(P_{\rm avg}\) (see text). This plot is a graphical version of Table 3 whose notes also apply to this plot. All of the best-fits are from [Fe/H] plots except for our FGK stars where \(\alpha\) is the average of the best-fits to both the [Fe/H] and Z/Z\({}_{\odot}\) plots. The \(P_{\odot}\) values plotted in the vertical panel on the right refer to the corresponding best-fit normalization at solar metallicity (Eqs. 2-5).
## 4. Is the Anti-Correlation between Metallicity and Stellar Binarity Real?
We further examine the relationship between stellar metallicity and binarity by comparing our sample with that of the Geneva and Copenhagen survey (GC) of the solar neighbourhood (Nordstrom _et al._ 2004) that has been selected as a magnitude-limited sample, a volume-limited portion (\\(d<40\\) pc) of which we analyse. This selection criteria infers that the sample is kinematically unbiased, i.e., the sample contains the same proportion of thin, thick and halo stars as is found in the solar neighbourhood. We also compare our sample with that of the \"Carney-Latham\" survey (CL) that has been kinematically selected to have high proper motion stars (Carney & Latham 1987), i.e., it contains a larger proportion of halo stars compared to disk stars than is observed for the solar neighbourhood. Our sample is based on the Hipparcos sample that has a limiting magnitude for completeness of \\(V=7.9+1.1\\sin|b|\\) (Reid 2002) where \\(b\\) is Galactic latitude. Thus the Hipparcos sample is more complete for stars at higher Galactic latitudes where the proportion of halo stars to disk stars increases. Hence our more distant (\\(25<d<50\\) pc) sample will have a small kinematic bias in that it will have an excess of halo stars, whereas our closer (\\(d<25\\) pc) sample will be less kinematically biased. ### Comparison with a Kinematically Unbiased Sample The GC sample contains primarily F and G dwarfs with apparent visual magnitudes \\(V\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}9\\) and is complete in volume for \\(d<40\\) pc for F0-K3 spectral type stars. Unlike our sample analyzed in Section 3, it also includes early F spectral type stars. The GC sample color range is defined in terms of \\(b-y\\) not \\(B-V\\) like our samples. We remove these early F stars with \\(b-y<0.3\\) (\\(B-V\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}0.5\\), Cox 2000) from the sample so that the GC sample spectral type range is similar to ours. The GC sample then ranges from \\(0.3\\leq b-y\\leq 0.6\\) (\\(0.5\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}B-V\\lower 2.15pt\\hbox{$\\; \\buildrel<\\over{\\sim}\\;$}1.0\\)) with those stars above \\(b-y=0.5\\) (\\(B-V\\sim 0.75\\)) referred to as K stars. We also exclude suspected giants from the GC sample. For the GC sample, we only include those binaries observed by CORAVEL between 2 and 10 times so as to avoid a potential bias where low metallicity stars were observed more often, thus leading to a higher efficiency for finding binaries around these stars. This homogernizes the binary detection efficiency such that any real signal will not be removed by such a procedure. Un Figure 11.— Color \\((B-V)\\) distribution for double-lined (squares) and single-lined (triangles) spectroscopic binaries (SB2s and SB1s respectively) and exoplanets (circles) in our close \\(d<25\\) pc sample. The linear best-fit gradient for SB2s is \\(0.00\\pm 0.06\\), for SB1s it is \\(-0.08\\pm 0.08\\) and for exoplanets it is \\(-0.05\\pm 0.08\\). All three of these gradients are only significant at \\(\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}1\\sigma\\) per. There is no significant correlation between SB1, SB2 or planetary fraction for either FG (\\(B-V\\leq 0.75\\)) stars or for K (\\(B-V>0.75\\)) stars. Figure 12.— Histogram of stars in the complete volume limited GC sample (\\(d<40\\) pc). We exclude those stars with \\(b-y<0.3\\) so that the spectral type range becomes F7-K3 and thus similar to that of our sample. 
We only include those stars that have between 2 and 10 radial velocity measurements with the CORAVEL spectrograph. We find an anti-correlation between binarity and host metallicity as shown by the linear and non-linear best-fits represented by the dashed and solid lines respectively. like our sample for which we only include binaries with \\(P<5\\) years, the GC sample also includes much longer period visual binaries in addition to short period spectroscopic binaries such that the total binary fraction of all types corresponds to \\(\\sim 25\\%\\). Comparing this with the period distribution for G dwarf stars of Duquennoy & Mayor (1991) this binary fraction corresponds to binary systems with periods less than \\(\\sim 10^{5}\\) days. For the volume limited \\(d<40\\) pc sample we again find an anti-correlation between binarity and stellar host metallicity as shown in Fig. 12. Both the linear and non-linear best-fits listed in Table 3 are significant at or above the \\(3\\sigma\\) level. We also split the GC sample into FG and K spectral type stars in Figs. 13 and 14 respectively. The anti-correlation between the presence of stellar companions and host metallicity is significant at less than the \\(1\\sigma\\) level for FG stars but significant at the \\(\\sim 4\\sigma\\) level for K stars. These results are qualitatively the same as those found for our sample but quantitatively weaker as shown in Fig. 10 (confer rows of points labeled GC). This may be due to the higher fraction of late F and early G spectral type stars compared to our samples or the larger range (\\(\\sim 10^{5}\\) days) in binary periods contained in the GC sample compared to our sample where \\(P<5\\) years. Another way of interpreting this anti-correlation between binarity and metallicity may be in terms of the age and nature of different components of the Galaxy described by stellar kinematics, i.e., F stars are generally younger than K stars and thus are more likely to belong to the younger thin disk star population than the older thick disk star population. Hence we examine our results in terms of stellar kinematics. ### Comparison with a Kinematically Biased Sample We also compare our samples and that of the GC survey with the Carney & Latham (1987) high proper motion survey (CL). The CL survey contains all of the A, F and early G, many of the late G and some of the early K dwarfs from the Lowell Proper Motion Catalog (GiCLs _et al._ 1971, 1978) and which were also contained in the NLTT Catalog (Luyten 1979, 1980). The number of stars in this distribution increases as the stellar colors become redder, peaking at about \\(B-V=0.65\\), following Figure 14.— Same as Fig. 12 but only for the K dwarfs in the GC sample of stars (\\(d<40\\) pc). We define K dwarfs as those with \\(b-y>0.5\\) (\\(B-V\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}0.75\\)). We find a very strong anti-correlation between binarity and host metallicity as shown by the linear best-fit with gradient \\(a=-0.46\\pm 0.10\\). The non-linear best-fit is \\(a=-0.52\\pm 0.11\\). The scaled linear gradient \\(a^{\\prime}=(-0.46\\pm 0.10)/32.0\\%=-1.4\\pm 0.3\\). Comparing this plot with Fig. 13 suggests that the anti-correlation between binarity and host metallicity is stronger for redder stars. Figure 13.— Same as Fig. 12 but only for the FG dwarfs in the GC sample of stars (\\(d<40\\) pc). We define FG dwarfs as those with \\(b-y<0.5\\) (\\(B-V\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}0.75\\)). 
We find only a marginal anti-correlation between binarity and host metallicity as shown by the linear best-fit with gradient \\(a=-0.05\\pm 0.06\\) and the non-linear best-fit with \\(\\alpha=-0.11\\pm 0.10\\). Using \\(P_{\\rm avg}=26.1\\%\\), the scaled linear gradient \\(a^{\\prime}=(-0.05\\pm 0.06)/26.1\\%=-0.2\\pm 0.2\\). which the numbers of stars begin to decrease (Carney _et al._ 1994). This group has also obtained data for a smaller number of stars from the sample of Ryan (1989) who sampled sub-dwarfs (metal-poor stars beneath the main-sequence) that have a high fraction of halo stars in the range \\(0.35<B-V<1.0\\). We refer to this combined sample as outlined in Carney _et al._ (2005) as the CL sample. This CL sample contains all binaries detected as spectroscopic binaries, visual binaries or common proper motion pairs. In Fig. 15 we plot the binary fraction of stars on prograde and retrograde Galactic orbits as shown in Fig. 3 of Carney _et al._ (2005). All of the CL stars have \\(\\rm[Fe/H]\\leq 0.0\\). The CL distribution contains a small subset of metal-poor \\(\\rm[Fe/H]\\leq-0.2\\) stars from Ryan (1989) that has a one-third lower prograde binary fraction due to fewer observations. Thus stars with metallicities of between \\(-0.2\\) and \\(0.0\\) have a higher binary fraction than the rest of the CL distribution. We make a small correction for this bias in the binary fraction in the range \\(-0.2<\\rm[Fe/H]<0.0\\) by lowering the 2 highest metallicity prograde points of the CL distribution by 2%. We note an anti-correlation between the binary fraction and metallicity for \\(-1.3<\\rm[Fe/H]<0.0\\) range of prograde disk stars of the CL distribution as shown in Fig. 15. We find that the linear best-fit to this anti-correlation has a gradient of \\(a=-0.07\\pm 0.06\\) and the non-linear best-fit has \\(\\alpha=-0.12\\pm 0.09\\), which are both significant at slightly above the \\(1\\sigma\\) level. For consistency we exclude the two lowest metallicity points from this best-fit so that we analyse the same region of metallicity as our samples and the GC sample and because these two low metallicity points will probably contain a significant fraction of halo stars. The average binary fraction is \\(P_{\\rm avg}=26\\%\\) for the disk-dominated part of the prograde CL distribution. Carney _et al._ (2005) found no correlation between binarity and host metallicity for the retrograde halo stars. We overplot our \\(d<25\\) pc binary fraction (from Fig. 4) along with the GC \\(d<40\\) pc binary fraction (from Fig. 12) onto the prograde CL sample in Fig. 15. All three of these samples have different binary period ranges and levels of completeness. We scale our sample and the GC sample to the size of the Carney _et al._ (2005) sample by scaling the distributions to contain the same number of binary stars at solar metallicity. The most metal-poor point in our close binary distribution is scaled above 100%, hence we set this point to 100%. The combined three sample distribution shows an anti-correlation between binarity and metallicity. The normalized linear best-fit to this is \\(a^{\\prime}=(-0.10\\pm 0.03)/25.7\\%=-0.39\\pm 0.12\\) and the non-linear best-fit is \\(\\alpha=-0.22\\pm 0.05\\) which are both significant at or above the \\(\\sim 3\\sigma\\) level (see last row of Table 3). This combined result is our best estimate and indicates a strong anti-correlation between stellar companions and metallicity for \\(\\rm[Fe/H]>-1.3\\). 
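The scaling used to place the three surveys on a common footing can be sketched as follows (the binned binary fractions below are placeholders; only the procedure follows the text: rescale each distribution to the CL binary fraction at solar metallicity, cap scaled points at 100%, and fit the combined points):

```python
# Sketch of combining the three surveys: rescale each binary-fraction
# curve to the CL value at [Fe/H] = 0, cap at 100%, fit the combined
# points.  All binned values here are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def rescale(feh, frac, ref_at_solar):
    """Scale a binary-fraction curve to a reference value at [Fe/H] = 0."""
    scale = ref_at_solar / np.interp(0.0, feh, frac)
    return np.minimum(frac * scale, 1.0)       # cap scaled points at 100%

feh_cl   = np.array([-1.2, -0.9, -0.6, -0.3, 0.0]); frac_cl   = np.array([0.32, 0.30, 0.28, 0.26, 0.24])
feh_ours = np.array([-0.9, -0.6, -0.3, 0.0, 0.3]);  frac_ours = np.array([0.16, 0.13, 0.11, 0.09, 0.08])
feh_gc   = np.array([-0.9, -0.6, -0.3, 0.0, 0.3]);  frac_gc   = np.array([0.35, 0.31, 0.28, 0.26, 0.25])

ref = np.interp(0.0, feh_cl, frac_cl)              # CL binary fraction at solar metallicity
feh_all  = np.concatenate([feh_cl, feh_ours, feh_gc])
frac_all = np.concatenate([frac_cl,
                           rescale(feh_ours, frac_ours, ref),
                           rescale(feh_gc, frac_gc, ref)])

lin = lambda x, a, p_sun: a * x + p_sun
(a, p_sun), _ = curve_fit(lin, feh_all, frac_all)
a_prime = a / frac_all.mean()                      # a' = a / P_avg (P_avg approximated here)
```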
### Discussion We examine our results in terms of Galactic populations by determining the most likely population membership (halo, thick or thin disk) for each star in the GC sample using the method outlined in the Appendix and then plotting them in the Galactic tangential velocity \\(V\\) - metallicity [Fe/H] plane as in Fig. 16. We use red points for the thin disk stars, green points for the thick disk stars and a blue point for the single halo star. The kinematically unbiased GC sample contains mostly thin disk stars. Excluding the one halo star in the GC sample, stars with \\(\\rm[Fe/H]\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}-0.9\\) belong to the thick disk and stars with \\(\\rm[Fe/H]\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}-0.1\\) belong to the thin disk. The region \\(-0.9\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}[Fe/H]\\lower 2.15pt\\hbox{$\\; \\buildrel>\\over{\\sim}\\;$}-0.1\\) contains a combination of both thick and thin disk stars. We also plot these thick and thin disk stars as separate histograms in metallicity in Fig. 17. In the region \\(\\rm[Fe/H]\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}-0.9\\) that contains only thick disk stars, we find that the binary fraction is approximately twice as large as for the region \\(\\rm[Fe/H]\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}-0.1\\) that contains only thin disk stars. In both of these single population regions the binary fraction also appears to be approximately independent of metallicity. While our purely probabilistic method of assigning the stars in the GC sample to Galactic populations is useful for determining the general regions of parameter space that the individual populations occupy, it is not precise enough to show exactly which stars belong to which population. This is especially true for the regions of parameter space that have large overlaps such as that between the thick and thin disk stars in Fig. 16. Thus the thin and thick disk binary fractions in the interval \\(-0.9\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}[Fe/H]\\lower 2.15pt\\hbox{$\\; \\buildrel<\\over{\\sim}\\;$}-0.1\\) are probably mixtures. We suspect that the thin and thick disk binary fractions in this overlap region will remain at the same levels as found for the non-overlapping regions. Figure 15.— This plot is adapted from Fig. 3 of Carney _et al._ (2005). The black triangles are the points from the CL sample of proper motion stars with prograde Galactic tangential velocities. We overplot the binary fraction as a function of host metallicity for our close (\\(d<25\\) pc) F7-K3 sample (Fig. 4) with red circles and the green squares are from the volume limited GC sample (\\(d<40\\) pc) for F7-K3 stars (Fig. 12). The three samples contain different average binary fractions because the period range and the levels of completeness of the stellar companions varies between the samples as discussed in the text. We normalise the distributions by scaling our sample and the GC sample so that they contain the same fraction of binary stars as the sample of Carney _et al._ (2005) at \\(\\rm[Fe/H]=0\\). The linear and non-linear best-fits to the three samples combined are shown as dashed and solid lines respectively. 
The anti-correlation between binarity and metallicity in the \\(-0.9\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}[{\\rm Fe/H}]\\lower 2.15pt \\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}-0.1\\) range may be due to this overlap between higher binarity thick disk stars and lower binarity thin disk stars. We now partition the Galactic tangential velocity \\(V\\) -metallicity [Fe/H] parameter space into four quadrants. We split the \\(V\\) parameter space into those stars on prograde Galactic orbits (P) and those on retrograde Galactic orbits (R). We split the [Fe/H] parameter space into those stars that are metal rich (r) with [Fe/H] \\(\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}-0.9\\) and those that are metal poor (p) with [Fe/H] \\(\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}-0.9\\). We then label these quadrants by the direction of Galactic orbital motion followed by the range in metallicity or Pp, Pr, Rr and Rp as shown in Fig. 16. We now assume that the Pp quadrant contains a mixture of halo and thick disk stars and that the Pr quadrant contains a mixture of thin and thick disk stars and that the Rp and Rr quadrants only contain halo stars. The combined anti-correlation between binarity and metallicity in Fig. 15, that all three samples appear to have in common, is predominantly in the Pr quadrant of \\(V-\\) [Fe/H] parameter space that contains a mixture of thick and thin disk stars. As discussed above this anti-correlation may be due to the overlap of high binarity thick disk stars and lower binarity thin disk stars. While Latham _et al._ (2002) suggest that the halo and disk populations have the same binary fraction, Carney _et al._ (2005) find lower binarity in retrograde stars. As shown in Fig. 15 there is a clear difference of about a factor of 2 in the region [Fe/H] \\(\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}-0.9\\) between the binary fractions of prograde disk stars compared to retrograde halo stars (Pr and Rr respectively). All the retrograde halo stars appear to have the same binary fraction (quadrants Rr and Rp). The Pp quadrant contains prograde halo stars and has a \\(\\sim 2\\) times higher binary fraction than the quadrants containing retrograde halo stars. However the Pp quadrant also contains thick disk stars in addition to prograde halo stars. We propose that the Pp region, [Fe/H] \\(\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}-0.9\\), for stars on prograde Galactic orbits contains a mixture of low binarity halo stars and high binarity thick disk stars. In Fig. 15 at [Fe/H] \\(\\sim-0.9\\) our close sample (\\(d<25\\) pc) and the GC sample (\\(d<40\\) pc) start to diverge from the data points of the CL survey. This observed divergence may be due to the CL survey being comprised of high proper motion stars and consequently a higher fraction of prograde halo stars compared to thick disk stars than the kinematically unbiased GC sample and our relatively kinematically unbiased sample where thick disk stars probably numerically dominate over halo stars. Using a kinemically unbiased sample, Chiba & Beers (2000) report that for the three regions \\(-1.0>\\) [Fe/H] \\(>-1.7\\), \\(-1.7>\\) [Fe/H] \\(>-2.2\\) and \\(-2.2>\\) [Fe/H] that the fraction of stars that belong to the thick disk are 29%, 8% and 5% respectively, with the rest belonging to the halo. 
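The prograde restriction applied in the next paragraph, and the resulting mixed thick-disk/halo binary fractions, can be reproduced with the short worked sketch below (it assumes, as stated below, that all thick-disk stars are on prograde orbits and that half of the halo stars are prograde, and it uses the thick-disk and halo binary fractions of 55% and 12% read off Fig. 15):

```python
# Convert the Chiba & Beers (2000) thick-disk fractions to fractions
# among prograde stars only, then form the expected binary fraction of
# the prograde thick-disk/halo mixture.
p_binary_thick = 0.55   # thick-disk binary fraction (read off Fig. 15)
p_binary_halo = 0.12    # halo binary fraction (read off Fig. 15)

for f_thick in (0.29, 0.08, 0.05):   # thick-disk fraction of all stars in each metallicity region
    f_halo = 1.0 - f_thick
    # all thick-disk stars prograde, half of the halo prograde
    f_thick_prograde = f_thick / (f_thick + 0.5 * f_halo)   # -> ~0.45, 0.15, 0.10
    p_mix = f_thick_prograde * p_binary_thick + (1.0 - f_thick_prograde) * p_binary_halo
    print(round(f_thick_prograde, 2), round(p_mix, 2))
```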
We restrict these thick disk fraction estimates to stars only on prograde orbits by assuming that all of the thick disk stars are on prograde orbits and that Figure 16.— We plot tangential Galactic velocity \\(V\\) as a function of metallicity [Fe/H] for the kinematically unbiased GC sample (\\(d<40\\) pc). We use a probabilistic method to assign the stars in the GC sample to the three Galactic populations (halo, thick and thin disks) as discussed in the Appendix. Red points are thin disk stars, green points are thick disk stars and a blue point is the single halo star in the sample at \\(V<-500\\) km/s. Crosses represent FG spectral-type stars and circles K stars. The ratio of thick/thin disk stars is \\(\\sim 3\\) times higher for K stars than for FG stars. Figure 17.— Histogram of the stars in Fig. 16 suspected of belonging to the thick disk and the thin disk in the GC sample (\\(d<40\\) pc). Notice the difference by a factor \\(\\sim 2\\) between the higher binary fraction thick disk stars and the lower binary fraction thin disk stars. We note that the K star distribution contains a higher ratio of thick disk stars than the FG star distribution in the thick/thin disk overlap region. half of the halo stars are prograde and the other half are retrograde. Thus the fraction of prograde stars that are thick disk stars is 45%, 15% and 10% for the three metallicity regions respectively. Using these three prograde restricted thick disk/halo ratios reported in Chiba & Beers (2000) combined with observed binary fraction for the thick disk (55%) and halo (12%) stars in Fig. 15 we can test the proposal in the Pp quadrant that the two lowest metallicity prograde points from Carney _et al._ (2005) contain a mixture of low binarity halo stars and high binarity thick disk stars. We plot the three estimated mixed thick disk/halo binary fraction points as grey triangles in Fig. 15. We note that they are consistent with the two prograde Carney _et al._ (2005) points thus supporting our proposal that the Pp quadrant contains a mixture of low binarity halo stars and high binarity thick disk stars. These mixed thick disk/halo points also show a correlation between the presence of stellar companions and metallicity for stars in the Pp region. Our results suggest that thick disk stars have a higher binary fraction than thin disk stars which in turn have a higher binary fraction than halo stars. Thus for stars on prograde Galactic orbits we observe an anti-correlation between binarity and metallicity for the region of metallicity [Fe/H] \\(\\lower 2.15pt\\hbox{$\\;\\buildrel>\\over{\\sim}\\;$}-0.9\\) that contains an overlap between the lower-binarity, higher-metallicity thin disk stars and the higher-binarity, lower-metallicity thick disk stars. We also find for stars on prograde Galactic orbits, a correlation between binarity and metallicity for the range [Fe/H] \\(\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}-0.9\\), that contains an overlap between the higher-binarity, higher-metallicity thick disk stars and the lower-binarity, lower-metallicity halo stars. ## 5. Summary We examine the relationship between Sun-like (FGK dwarfs) host metallicity and the frequency of close companions (orbital period \\(<\\) 5 years). We find a correlation at the \\(\\sim 4\\sigma\\) level between host metallicity and the presence of planetary companion and an anti-correlation at the \\(\\sim 2\\sigma\\) level between host metallicity and the presence of a stellar companion. 
We find that the non-linear best-fit is \\(\\alpha=2.2\\pm 0.5\\) and \\(\\alpha=-0.8\\pm 0.4\\) for planetary and stellar companions respectively (see Table 3). Fischer & Valenti (2005) also quantify the planet metallicity correlation by fitting an exponential to a histogram in [Fe/H]. They find a best-fit of \\(\\alpha=2.0\\). Our result of \\(\\alpha=2.2\\pm 0.3\\) is a slightly more positive correlation and is consistent with theirs. Our estimate is based on the average of the best-fits to the metallicity data binned both as a function of [Fe/H] and Z/Z\\({}_{\\odot}\\). Larger bins tend to smooth out the steep turn up at high [Fe/H] and may be responsible for their estimate being slightly lower. We also analyze the sample of Nordstrom _et al._ (2004) and again find an anti-correlation between metallicity and close stellar companions for this larger period range. We also find that K dwarf host stars have a stronger anti-correlation between host metallicity and binarity than FG dwarf stars. We compare our analysis with that of Carney _et al._ (2005) and find an alternative explanation for their reported binary frequency dichotomy between stars on prograde Galactic orbits with [Fe/H] \\(\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}0\\) compared to stars on retrograde Galactic orbits with [Fe/H] \\(\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}-0.9\\). We propose that the region, [Fe/H] \\(\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}-0.9\\), for stars on prograde Galactic orbits contains a mixture of low binarity halo stars and high binarity thick disk stars. Thick disk stars appear to have a \\(\\sim 2\\) higher binary fraction compared to thin disk stars, which in turn have a \\(\\sim 2\\) higher binary fraction than halo stars. While the ratio of thick/thin disk stars is \\(\\sim 3\\) times higher for K stars than for FG stars we only observe a marginal difference in their distributions as a function of metallicity. In the region \\(-0.9\\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}[{\\rm Fe/H}] \\lower 2.15pt\\hbox{$\\;\\buildrel<\\over{\\sim}\\;$}-0.1\\) that we suspect contains a mixture of thick and thin disk stars, the K star distribution contains a higher ratio of thick disk stars compared to the FG star distribution at a given metallicity. This difference is marginal but can partially explain the kinematic and spectral-type (\\(\\sim\\) mass) results. Thus for stars on prograde Galactic orbits as we move from low metallicity to high metallicity we move through low binarity halo stars to high binarity thick disk stars to medium binarity thin disk stars. Since halo, thick disk and thin disk stars are not discrete populations in metallicity and contain considerable overlap, as we go from low metallicity to high metallicity for prograde stars, we firstly observe a correlation between binarity and metallicity for the overlapping halo and thick disk stars and then an anti-correlation between binarity and metallicity for the overlapping thick and thin disk stars. We would like to thank Johan Holmberg for his help on analysing the Geneva sample and Chris Flynn, John Norris, Virginia Trimble, Richard Larson, Pavel Kroupa and David Latham for helpful discussions. ## Appendix A Probability of Galactic Population Membership We use a similar method to that of Reddy _et al._ (2006) in assigning a probability to each star of being a member of the thin disk, thick disk or halo populations. We assume the GC sample is a mixture of the three populations. 
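A sketch of the membership computation defined by Eqs. (A1) and (A2) below is given here; the Gaussian widths, mean velocities, mean metallicities and relative normalizations \(f_{i}\) must be taken from Table A1 (Robin _et al._ 2003), so the numerical values in the code are placeholders only:

```python
# Sketch of the Galactic-population membership probabilities of
# Eqs. (A1)-(A2).  Population parameters below are placeholders; the
# actual values come from Table A1 (Robin et al. 2003).
import numpy as np

populations = {
    # name: (f, sig_U, sig_V, sig_W, <V>, <[Fe/H]>, sig_[Fe/H])  -- placeholders
    "thin":  (0.93,  40.0,  25.0, 18.0,  -15.0, -0.1, 0.2),
    "thick": (0.07,  67.0,  51.0, 42.0,  -46.0, -0.6, 0.3),
    "halo":  (0.006, 131.0, 106.0, 85.0, -220.0, -1.6, 0.5),
}

def membership(U, V, W, feh):
    """Return the probability that a star belongs to each population."""
    probs = {}
    for name, (f, sU, sV, sW, Vmean, fehmean, sfeh) in populations.items():
        C = 1.0 / (sU * sV * sW * sfeh)
        P = C * np.exp(-U**2 / (2 * sU**2)
                       - (V - Vmean)**2 / (2 * sV**2)
                       - W**2 / (2 * sW**2)
                       - (feh - fehmean)**2 / (2 * sfeh**2))
        probs[name] = f * P
    total = sum(probs.values())
    return {name: p / total for name, p in probs.items()}

# assign a star to its most probable population
star = membership(U=30.0, V=-60.0, W=10.0, feh=-0.7)
best = max(star, key=star.get)
```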
These populations are assumed to be represented by a Gaussian distribution for each of the 3 Galactic velocities \\(U,V,W\\) and for the metallicity [Fe/H]. The age dependence of the quantities for the thin disk are ignored. The equations establishing the probability that a star belongs to the thin disk (\\(P_{\\rm thin}\\)), the thick disk (\\(P_{\\rm thick}\\)) or the halo (\\(P_{\\rm halo}\\)) are \\[P_{\\rm thin}=f_{1}\\frac{P_{1}}{P}\\,\\ P_{\\rm thick}=f_{2}\\frac{P_{2}}{P}\\,\\ P_{\\rm halo }=f_{3}\\frac{P_{3}}{P}\\] (A1) where \\[P = \\sum f_{i}P_{i}\\] (A2) \\[P_{i} = C_{i}\\exp\\left[-\\frac{U^{2}}{2\\sigma_{U_{i}}^{2}}-\\frac{(V- \\langle V\\rangle)^{2}}{2\\sigma_{V_{i}}^{2}}-\\frac{W^{2}}{2\\sigma_{W_{i}}^{2}}- \\frac{([{\\rm Fe/H}]-\\langle[{\\rm Fe/H}]\\rangle)^{2}}{2\\sigma_{[{\\rm Fe/H}]_{i} }^{2}}\\right]\\] \\[C_{i} = \\frac{1}{\\sigma_{U_{i}}\\sigma_{V_{i}}\\sigma_{W_{i}}\\sigma_{{\\rm[ Fe/H]_{i}}}}(i=1,2,3)\\] Using the data in Table A1 taken from Robin _et al._ (2003) we compute the probabilities for stars in the GC sample. For each star, we assign it to the population (thin disk, thick disk or halo) that has the highest probability. We plot the probable halo, thick and thin disk stars of the GC sample in Fig. 16. ## References * () Ammons, S.M., Robinson, S.E., Strader, J., Laughlin, G., Fischer, D. & Wolf, A., 2006, ApJ, 638:1004-1017 * () Batten, A.H., 1973, Pergamon, Oxford * () Bond, J.C., Tinney, C.G., Butler, P., Jones, H.R.A., Marcy, G.W., Penny, A.J. & Carter, B.D., 2006, MNRAS, 370:163-173 * () Carney, B.W. & Latham, D.W., 1987, AJ 93:116-156 * () Carney, B.W., Latham, D.W., Laird, J.B. & Aguilar, L.A., 1994, AJ 107:2240-2289 * () Carney, B.W., Angular, L.A., Latham, D.W. & Laird, J.B., 2005, that Omega Centauri is Related to the Effect', ApJ, 129:1886-1905 * () Cayrel de Strobel, G., Soubiran, C., Ralite, N., 2001, A&A, 373:159-163 * () Chiba, M. & Beers, T.C., 2000, ApJ, 119:2843-2865 * () Cox, A.N., 2000, AIP Press, 4th Edition * () Dall, T.H., Brunt, H. & Strassmeier, K.G., 2005, A&A, 444:573-583 * () Duquennoy, A. & Mayor, M., 1991, A&A, 248:485-524 * () Fischer, D.A. & Valenti, J., 2005, ApJ, 622:1102-1117 * () Giclas, H.L., Burnham, R. & Thomas, N.G., 1971, Lowell Observatory, Flagstaffaff * () Giclas, H.L., Burnham, R., Jr. & Thomas, N.G., 1978, Lowell Observatory Bulletin, 8:89 * () Gonzalez, G., 1997, MNRAS, 285:403-412 * () Gonzalez, G., Laws, C., Tyagi, S. & Reddy, B.E., 2001, AJ, 121:432-452 * () Grether, D. & Lineweaver, C.H., 2006, ApJ, 640:1051-1062 * () Gunn, J.E. & Griffin, R.F., 1979, AJ, 87:572-773 * () Hauck, B. & Mermilliod, M., 1998, A&A, 129:431-433 * () Jones, H.R.A., Butler, P., Marcy, G.W., Tinney, C.G., Penny, A.J., McCarthy, C. & Carter, B.D., 2002, MNRAS, 337:1170-1178 * () Jones, H., Butler, P., Tinney, C.H., Marcy, G., Carter, B., Penny, A., McCarthy, C.H. & Bailey, J., 2006, MNRAS, 369:249-256 * () Kotoneva, E., Flynn, C. & Jimenez, R., 2002, MNRAS, 335, 1147-1157 * () Laughlin, G. & Adams, F.C., 1997, ApJ, 491:151-L54 * () Latham, D.W., Mazeh, P., Carney, B.W., McCrosky, R.E., Stefanki, R.P. & Davis, R.J., 1988, AJ, 96:567-587 * () Latham, D.W., Stefanik, R.P., Torres, G., Davis, R.J., Mazeh, T., Carney, B.W., Laird, J.B. & Morse, J.A., 2002, ApJ, 124:1144-1161 * () Latham, D.W., 2004, ASP Conference Series Vol 318, eds. Hilditch, R.W., Hensberge, H. & Pavlovski, K. * () Laws, C., Gonzalez, G., Walker, K.M., Tyagi, S., Dodworth, J., Snider, K. & Suntzeff, N.B., 2003, AJ, 125:2664-2677 * () Lineweaver, C.H. 
& Grether, D., 2003, ApJ, 589:1350-1360 * () Luyten, W.J., 1979, University of Minnesota, Minneapolis * () Luyten, W.J., 1980, University of Minnesota, Minneapolis * () Martelli, S. & Smith, G.H., 2004, PASP, 116:920-925 * () Nordstrom, B., Mayor, M., Anderson, J., Holmberg, J., Pont, F., Jorgensen, B.R., Olsen, E.H., Udry, S. & Mowlavi, N., 2004, A&A, 418:989-1019 * () Reddy, B.E., Lambert, D.L. & Allende Prieto, C., 2006, MNRAS, 367:1329-1366 * () Reid, I.N., 2002, PASP, 144:306-329 * () Robin, A.C., Reyle, C., Derriere, S. & Picaud, S., 2003, A&A, 409:523-540 * () Ryan, S.G., 1989, AJ 98:1693-1767 * () Santos, N.C., Israelian, Mayor, M., Bento, J.P., Almeida, P.C., Sousa, S.G & Ecuvillon, A., 2005, A&A, 437:1127-1133 * () Santos, N.C., Israelian, G, & Mayor, M., 2004, A&A, 415:1153-1166 * () Valenti, J.A. & Fischer, D.A., 2005, ApJ, 159:141-166
We examine the relationship between the frequency of close companions (stellar and planetary companions with orbital periods \\(<5\\) years) and the metallicity of their Sun-like (\\(\\sim\\) FGK) hosts. We confirm and quantify a \\(\\sim 4\\sigma\\) positive correlation between host metallicity and planetary companions. We find little or no dependence on spectral type or distance in this correlation. In contrast to the metallicity dependence of planetary companions, stellar companions tend to be more abundant around low metallicity hosts. At the \\(\\sim 2\\sigma\\) level we find an anti-correlation between host metallicity and the presence of a stellar companion. Upon dividing our sample into FG and K sub-samples, we find a negligible anti-correlation in the FG sub-sample and a \\(\\sim 3\\sigma\\) anti-correlation in the K sub-sample. A kinematic analysis suggests that this anti-correlation is produced by a combination of low-metallicity, high-binarity thick disk stars and higher-metallicity, lower-binarity thin disk stars. Subject headings:binaries: close - stars: abundances - stars: kinematics
# Decoherence-induced geometric phase in a multilevel atomic system Shubhrangshu Dasgupta\\({}^{1}\\) Daniel A. Lidar\\({}^{1,2}\\) \\({}^{1}\\)Department of Chemistry, University of Southern California, Los Angeles, CA 90089, USA \\({}^{2}\\)Departments of Electrical Engineering and Physics, University of Southern California, Los Angeles, CA 90089, USA November 3, 2021 ## I Introduction Berry observed that quantum systems may retain a memory of their motion in Hilbert space through the acquisition of geometric phases [1]. Remarkably, these phase factors depend only on the geometry of the path traversed by the system during its evolution. Soon after this discovery, geometric phases became a subject of intense theoretical and experimental studies [2]. In recent years, renewed interest has arisen in the study of geometric phases in connection with quantum information processing [3; 4]. Indeed, geometric, or holonomic quantum computation (QC) may be useful in achieving fault tolerance, since the geometric character of the phase provides protection against certain classes of errors [5; 6; 7; 8]. However, a comprehensive investigation in this direction requires a generalization of the concept of geometric phases to the domain of _open_ quantum systems, i.e., quantum systems which may decohere due to their interaction with an external environment. Here we consider the following basic question: Is it possible for the environment to induce a geometric phase where there is none if the system is treated as closed? Apart from its fundamental nature, this question is of obvious practical importance to holonomic QC, since if the answer is affirmative the corresponding open-system geometric phase can either be detrimental (if it causes a deviation from the intended value) or beneficial, in the sense that the environment is acting as an amplifier for, or even generator of, the geometric phase. Geometric phases in open systems, and more recently their applications in holonomic QC, have been considered in a number of works, since the late 1980's. The first, phenomenological approach to the subject used the Schrodinger equation with non-Hermitian Hamiltonians [9; 10]. While a consistent non-Hermitian Hamiltonian description of an open system in general requires the theory of stochastic Schrodinger equations [11], this phenomenological approach for the first time indicated that complex Abelian geometric phases should appear for systems undergoing cyclic evolution. In Refs. [12; 13; 14; 15; 16; 17], geometric phases acquired by the density operator were analyzed for various explicit models within a master equation approach. In Refs. [8; 18], the quantum jumps method was employed to provide a definition of geometric phases in Markovian open systems (related difficulties with stochastic unravellings have been pointed out in Ref. [19]). In another approach the density operator, expressed in its eigenbasis, was lifted to a purified state [20; 21]. In Ref. [22], a formalism in terms of mean values of distributions was presented. An interferometric approach for evaluating geometric phases for mixed states evolving unitarily was introduced in Ref. [23] and extended to non-unitary evolution in Refs. [24; 25]. This interferometric approach can also be considered from a purification point of view [23; 25]. This multitude of different proposals revealed various interesting facets of the problem. 
Nevertheless, the concept of adiabatic geometric phases in open systems remained unresolved in general, since most of these treatments did not employ an adiabatic approximation genuinely developed for open systems. Note that the applicability of the closed systems adiabatic approximation [26] to open systems problems is not a priori clear and should be justified on a case-by-case basis. Moreover, almost all of the previous works on open systems geometric phases were concerned with the Abelian (Berry phase) case. Exceptions are the very recent Refs. [8; 27; 28], which discuss both non-adiabatic and adiabatic dynamics, but employ the standard adiabatic theorem for closed systems in the latter case. Recently, a fully self-consistent approach for both Abelian and non-Abelian adiabatic geometric phases in open systems was proposed by Sarandy and Lidar (SL) in Ref. [29]. It applies to the very general class of systems described by convolutionless master equations [30]. SL made use of the formalism they developed in Ref. [31] for adiabaticity in open systems, which relies on the Jordan normal form of the relevant Liouville (or Lindblad) super-operator. The geometric phase was then defined in terms of the left and right eigenvectors of this super-operator. This definition is a natural generalization of the one given by Berry for a closed system, and was shown to have a proper closed system limit. The formalism wasillustrated in Ref. [29] in the context of a spin system interacting with an adiabatically varying magnetic field. In order to address the basic question posed above, we study here the adiabatic geometric phase in a multi-level atomic system using the SL formalism. Specifically, we consider the process of stimulated Raman adiabatic passage (STIRAP) [32; 33] in a three-level atomic system in a \\(\\Lambda\\) configuration. We analyze a version of STIRAP where the closed system geometric phase is identically zero. We then show that when spontaneous emission and/or collisional relaxation are included, the same STIRAP process yields a non-vanishing geometric phase. This decoherence-induced geometric phase is an example of \"beneficial decoherence\", where the environment performs a potentially useful task. This is conceptually similar to the phenomenon of decoherence-induced entanglement [34; 35]. Since the SL formalism involves finding the Jordan normal form of a general matrix, which is an analytically difficult problem, we developed a numerically stable program to find the Jordan form of any complex square matrix and used it to find the geometric phase [36]. The structure of the paper is as follows. In Section II we briefly review the STIRAP process in a closed three level system in the \\(\\Lambda\\) configuration, and the corresponding calculation of the (vanishing) geometric phase. In Section III we revisit this problem in the open system setting and derive the solution of the STIRAP model. Our numerical results, along with a detailed analysis of the geometric phase, are presented in Section IV. We conclude in Section V. ## II Geometric phase under STIRAP: the closed system case We consider the process of stimulated Raman adiabatic passage (STIRAP) [32] in a three-level system in the \\(\\Lambda\\) configuration, as shown in Fig. 1. In this process, the initial atomic population in level \\(|1\\rangle\\) is completely transferred to level \\(|2\\rangle\\), while the pulses are applied in a \"counterintuitive\" sequence. 
The intermediate level \\(|3\\rangle\\) does not become substantially populated. The interaction picture Hamiltonian in one-photon resonance can be written in the rotating-wave approximation as follows: \\[H=g_{1}(t)|3\\rangle\\langle 1|+g_{2}(t)|3\\rangle\\langle 2|+\\text{h.c.}, \\tag{1}\\] where the real functions \\(g_{i}\\) are the time-dependent Rabi frequencies of the two laser pulses, interacting respectively with the transitions \\(|i\\rangle\\leftrightarrow|3\\rangle\\) (\\(i\\in 1,2\\)). The eigenvalues of \\(H\\) are given by \\[E_{0}=0,E_{\\pm}=\\pm\\sqrt{g_{1}^{2}+g_{2}^{2}} \\tag{2}\\] and the respective eigenvectors are given by \\[|0\\rangle = \\cos(\\theta)|1\\rangle-\\sin(\\theta)|2\\rangle,\\] \\[|+\\rangle = \\sin(\\theta)\\sin(\\phi)|1\\rangle+\\cos(\\theta)\\sin(\\phi)|2\\rangle+ \\cos(\\phi)|3\\rangle,\\] \\[|-\\rangle = \\sin(\\theta)\\cos(\\phi)|1\\rangle+\\cos(\\theta)\\cos(\\phi)|2\\rangle- \\sin(\\phi)|3\\rangle,\\] where \\(\\tan(\\theta)=g_{1}/g_{2}\\). Thus the time-dependence of the eigenfunctions is parameterized by that of \\(\\theta\\). In principle the \\(g_{i}\\)'s can be complex valued, which gives rise to a controllable phase \\(\\phi\\)[6]. Here we work with real valued \\(g_{i}\\)'s and set \\(\\phi=\\pi/4\\) for the remainder of this work. The state \\(|0\\rangle\\) is a dark state, i.e., it has eigenvalue \\(0\\). We choose a Gaussian time-dependent profile for the control pulses: \\[g_{1}(t)=g_{01}e^{-(t-t_{0})^{2}/\\tau^{2}},\\;g_{2}(t)=g_{02}e^{-t^{2}/\\tau^{2 }}, \\tag{4}\\] where \\(g_{01}\\) and \\(g_{02}\\) are the pulse amplitudes, and \\(t_{0}\\) is the time-delay between the pulses, with pulse \\(g_{2}\\) preceding pulse \\(g_{1}\\). All time-scales are normalized in terms of the pulse-width \\(\\tau\\). The closed-system adiabaticity condition is satisfied provided \\(t_{0}\\sim\\tau\\) and [32] \\[\\frac{|\\frac{\\partial\\theta}{\\partial t}|}{\\sqrt{g_{1}(t)^{2}+g_{2}(t)^{2}}} \\ll 1\\quad\\forall t. \\tag{5}\\] In this limit, the evolution of the system strictly follows the evolution of either of the adiabatic states. Due to the ordering of the pulses as in (4), the atom initially in the level \\(|1\\rangle\\) is prepared in the adiabatic state \\(|0\\rangle\\). The population in level \\(|1\\rangle\\) is then completely transferred to level \\(|2\\rangle\\) adiabatically, following the evolution of state \\(|0\\rangle\\) under the action of the pulses (4). Note that as the system follows the evolution of the state \\(|0\\rangle\\) in the adiabatic limit and the excited level \\(|3\\rangle\\) does not contribute to \\(|0\\rangle\\), the traditional view of the process is that it remains unaffected by spontaneous emission. Below we will show how this view must be modified in a consistent treatment of the process as evolution of an open system. In addition, incoherent processes such as dephasing of the ground state levels will affect the population transfer process. The geometric phases acquired by each adiabatic state \\(|n\\rangle\\), as acquired during the evolution between \\(t_{0}\\) and \\(t\\) can be easily calculated from [1] \\[\\beta_{n}=i\\int_{t_{0}}^{t}dt^{\\prime}\\langle n|\\frac{d}{dt^{\\prime}}|n\\rangle. \\tag{6}\\] Figure 1: Three-level atomic configuration with degenerate ground state levels \\(|1\\rangle\\), \\(|2\\rangle\\), and excited state \\(|3\\rangle\\). The atom interacts with two resonant classical fields with time-dependent Rabi frequencies \\(g_{1}(t)\\) (probe laser) and \\(g_{2}(t)\\) (Stokes laser). 
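The statements above can be checked numerically with a short script (a sketch with illustrative parameter values, not the authors' code): it verifies that the dark state of Eq. (3) is a zero-eigenvalue eigenvector of the Hamiltonian (1), evaluates the Berry connection \(\langle 0|\frac{d}{d\theta}|0\rangle\) that enters the phase integral of Eq. (6) when written in terms of the mixing angle, and monitors the adiabaticity ratio of Eq. (5):

```python
# Sketch (illustrative parameters): the dark state of Eq. (3) has zero
# energy, its Berry connection <0|d/dtheta|0> vanishes, and the
# adiabaticity ratio of Eq. (5) stays small over the transfer window.
import numpy as np

g01 = g02 = 10.0            # pulse amplitudes, in units of 1/tau
t0 = tau = 1.0              # pulse delay and width

def pulses(t):
    g1 = g01 * np.exp(-((t - t0) / tau) ** 2)   # probe, Eq. (4)
    g2 = g02 * np.exp(-(t / tau) ** 2)          # Stokes, Eq. (4)
    return g1, g2

def hamiltonian(g1, g2):
    H = np.zeros((3, 3))
    H[2, 0] = H[0, 2] = g1                      # g1 (|3><1| + h.c.)
    H[2, 1] = H[1, 2] = g2                      # g2 (|3><2| + h.c.)
    return H

def dark_state(theta):                          # |0> of Eq. (3)
    return np.array([np.cos(theta), -np.sin(theta), 0.0])

# the dark state is a zero-eigenvalue eigenvector of H(t)
g1, g2 = pulses(0.5)
theta = np.arctan2(g1, g2)                      # tan(theta) = g1/g2
print(np.allclose(hamiltonian(g1, g2) @ dark_state(theta), 0.0))   # True

# Berry connection <0|d/dtheta|0> by central differences: ~0 at machine
# precision, so the geometric phase integral vanishes
th = np.linspace(1e-3, np.pi / 2 - 1e-3, 200)
dth = th[1] - th[0]
conn = [dark_state(x) @ (dark_state(x + dth) - dark_state(x - dth)) / (2 * dth) for x in th]
print(max(abs(c) for c in conn))

# adiabaticity ratio of Eq. (5), evaluated where the fields are appreciable
ts = np.linspace(-3.0, 4.0, 4000)
g1, g2 = pulses(ts)
theta = np.arctan2(g1, g2)
rms = np.sqrt(g1 ** 2 + g2 ** 2)
ratio = np.abs(np.gradient(theta, ts)) / rms
print(ratio[rms > 0.1 * g01].max())             # ~0.09 << 1: adiabatic transfer
```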
In terms of a vector \\(\\vec{R}(t)\\) in parameter space undergoing cyclic evolution, this phase can be rewritten as \\[\\beta_{n}=i\\oint\\langle n(\\vec{R})|\\frac{d}{d\\vec{R}}|n(\\vec{R})\\rangle\\cdot d \\vec{R}. \\tag{7}\\] In the case of the three-level system depicted in Fig. 1, the parameter space is defined by \\(g_{1}(t)\\) and \\(g_{2}(t)\\), i.e., \\[\\beta_{n}=i\\sum_{j=1}^{2}\\oint\\langle n(g_{1},g_{2})|\\frac{\\partial}{\\partial g _{j}}|n(g_{1},g_{2})\\rangle dg_{j}. \\tag{8}\\] We consider a cyclic evolution in this parameter space, which takes place as \\(t\\) varies from \\(-\\infty\\) to \\(+\\infty\\), i.e., \\(g_{1}(-\\infty)=g_{2}(-\\infty)=0\\), \\(g_{1}(\\infty)=g_{2}(\\infty)=0\\). This is shown in Fig. 2. One can also parametrize the time-dependence of the pulses in terms of an angle \\(\\theta\\), using (4), such that \\[\\tan\\theta(t)=\\frac{g_{1}(t)}{g_{2}(t)}=\\frac{g_{01}}{g_{02}}e^{(2tt_{0}-t_{0 }^{2})/\\tau^{2}}. \\tag{9}\\] Then as \\(t\\) varies from \\(-\\infty\\) to \\(+\\infty\\) we have that \\(\\tan\\theta(t)\\) varies from \\(0\\) to \\(\\infty\\), and hence \\(\\theta(t)\\) varies from \\(0\\) to \\(\\pi/2\\). Changing variables in Eq. (7), the geometric phase becomes in our case: \\[\\beta_{n}=i\\int_{0}^{\\pi/2}\\langle n(\\theta)|\\frac{d}{d\\theta}|n(\\theta) \\rangle d\\theta. \\tag{10}\\] Note that the relevant parameter space for our problem is that with coordinates \\((g_{1},g_{2})\\), not \\((\\theta,\\varphi)\\) of Eq. (3); indeed, Eq. (10) does not even describe a cycle in the \\((\\theta,\\varphi)\\) space, whereas the expression (8) along with Fig. 2 show clearly that there is a cycle in the \\((g_{1},g_{2})\\) space. Let us now show that the geometric phase vanishes for all three adiabatic eigenstates \\(|n\\rangle\\) of (3), because the integrand \\(\\langle n(\\theta)|\\frac{d}{d\\theta}|n(\\theta)\\rangle\\equiv 0\\). Indeed, consider the adiabatic eigenstates of Eq. (3). Then \\(\\langle+|\\frac{d}{d\\theta}|+\\rangle=\\frac{1}{2}(\\sin(\\theta)(1|+\\cos(\\theta) \\langle 2|+\\cos(\\phi)\\langle 3|)(\\cos(\\theta)|1)-\\sin(\\theta)|2))=0\\), and \\(\\langle-|\\frac{d}{d\\theta}|-\\rangle=\\frac{1}{2}(\\sin(\\theta)\\langle 1|+\\cos( \\theta)\\langle 2|-\\sin(\\phi)\\langle 3|)(\\cos(\\theta)|1)-\\sin(\\theta)|2))=0\\), irrespective of the value of \\(\\phi\\). Also, \\(\\langle 0|\\frac{d}{d\\theta}|0\\rangle=(\\cos(\\theta)\\langle 1|-\\sin(\\theta) \\langle 2|)(-\\sin(\\theta)|1)-\\cos(\\theta)|2))=0\\). Thus the STI-RAP process under consideration does not give rise to a closed-system geometric phase. We note that the analysis above is a special case of the four-level model considered in Ref. [33]. ## III Geometric phase under STIRP: the open system case ### The model We now analyze the effect on the geometric phase of interaction of the atomic system with a bath causing spontaneous emission and collisional relaxation. We describe these processes in the Markovian limit for the bath, using time-independent Lindblad operators and neglecting Lamb and Stark shift contributions [30]. Thus the time-dependence appears only in the control Hamiltonian \\(H\\) [Eq. 
(1)], and the evolution of the system density matrix \\(\\rho\\) is given by the Lindblad equation (in \\(\\hbar=1\\) units): \\[\\partial\\rho/\\partial t = L\\rho=-i[H,\\rho]+\\mathcal{L}\\rho,\\] \\[\\mathcal{L}\\rho = \\frac{1}{2}\\sum_{i=1}^{n}(2\\Gamma_{i}\\rho\\Gamma_{i}^{\\dagger}- \\rho\\Gamma_{i}^{\\dagger}\\Gamma_{i}-\\Gamma_{i}^{\\dagger}\\Gamma_{i}\\rho), \\tag{11}\\] where the dissipator \\(\\mathcal{L}\\) describes the incoherent processes, arising from system-bath interaction. We include spontaneous emission from level \\(|3\\rangle\\) at rates \\(\\gamma_{13}\\) and \\(\\gamma_{23}\\) via Lindblad operators \\[\\Gamma_{1}=\\gamma_{13}|1\\rangle\\langle 3|\\;,\\Gamma_{2}=\\gamma_{23}|2\\rangle \\langle 3|. \\tag{12}\\] We also include collisional relaxation between levels \\(|1\\rangle\\) and \\(|2\\rangle\\) at rates \\(\\gamma_{12}\\) and \\(\\gamma_{21}\\) via Lindblad operators \\[\\Gamma_{3}=\\gamma_{12}|1\\rangle\\langle 2|\\;,\\Gamma_{4}=\\gamma_{21}|2\\rangle \\langle 1|. \\tag{13}\\] ### Review of open systems geometric phase To see how a geometric phase can be associated with the master equation evolution, we follow Ref. [29] and write the master equation as \\[\\partial\\rho/\\partial t=L[\\vec{R}(t)]\\rho(t), \\tag{14}\\] where \\(L\\) depends on time only through a set of parameters \\(\\vec{R}(t)\\equiv\\vec{R}\\). These parameters will undergo adiabatic cyclic evolution in our problem. In the superoperator formalism, the density matrix for a quantum state in a \\(D\\)-dimensional Hilbert space is represented by a \\(D^{2}\\)-dimensional \"coherence vector\" Figure 2: A closed curve in the \\((g_{1},g_{2})\\) parameter space, for \\(t_{0}=\\tau=g_{01}=g_{02}=1\\). At \\(t=-\\infty\\) the curve is at the origin, then rises steeply, and eventually returns to the origin at \\(t=+\\infty\\). \\(|\\rho\\rangle)=(\\rho_{1},\\rho_{2},\\cdots,\\rho_{D^{2}})^{t}\\) (where \\(t\\) denotes the transpose) and the Lindblad superoperator \\(L\\) becomes a \\(D^{2}\\times D^{2}\\)-dimensional supermatrix [37], so that the master equation (14) can be written as linear vector equation in \\(D^{2}\\)-dimensional Hilbert-Schmidt space, in the form \\(\\partial|\\rho\\rangle)/\\partial t=L[\\vec{R}(t)|\\rho\\rangle)\\). Such a representation can be generated, e.g., by introducing a basis of Hermitian, trace-orthogonal, and traceless operators [e.g., the \\(D\\)-dimensional irreducible representation of the generators of \\(\\mathrm{su}(D)\\)], whence the \\(\\rho_{i}\\) are the expansion coefficients of \\(\\rho\\) in this basis [37], with \\(\\rho_{1}\\) the coefficient of \\(I\\) (the identity matrix). The master equation generates a non-unitary evolution since \\(L\\) is non-Hermitian. In fact, \\(L\\) need not even be a normal operator (\\(L^{\\dagger}L\ eq LL^{\\dagger}\\)). Therefore \\(L\\) is generally not diagonalizable, i.e., it does not possess a complete set of linearly independent eigenvectors. Equivalently, it cannot be put into diagonal form via a similarity transformation. However, one can always apply a similarity transformation \\(S\\) to \\(L\\) which puts it into the (block-diagonal) Jordan canonical form [38], namely, \\(L_{\\mathrm{J}}=S^{-1}LS\\). 
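To make the superoperator language concrete, the sketch below (an illustration, not the code behind the figures) column-stacks \(\rho\) into a nine-component vector and assembles the corresponding \(9\times 9\) supermatrix \(L\) of Eq. (11) from the Hamiltonian (1) and the Lindblad operators (12)-(13); the rates and Rabi frequencies are placeholder numbers. This raw \(|i\rangle\langle j|\) basis differs from the Gell-Mann basis used for Eq. (19) below, but the spectrum of \(L\) is basis independent.

```python
import numpy as np

def lindblad_supermatrix(H, lindblad_ops):
    """Return L such that d vec(rho)/dt = L vec(rho), with vec() the
    column-stacking map; implements the generator of Eq. (11)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))          # -i[H, rho]
    for G in lindblad_ops:
        GdG = G.conj().T @ G
        # G rho G^dag - (1/2){G^dag G, rho}
        L += np.kron(G.conj(), G) - 0.5 * (np.kron(I, GdG) + np.kron(GdG.T, I))
    return L

# Placeholder parameters for illustration only
g1, g2 = 10.0, 5.0
gam13, gam23, gam12, gam21 = 1.0, 0.5, 0.2, 0.2
ket = np.eye(3)
H = np.zeros((3, 3), dtype=complex)
H[2, 0] = H[0, 2] = g1
H[2, 1] = H[1, 2] = g2
ops = [gam13 * np.outer(ket[0], ket[2]), gam23 * np.outer(ket[1], ket[2]),   # Eq. (12)
       gam12 * np.outer(ket[0], ket[1]), gam21 * np.outer(ket[1], ket[0])]   # Eq. (13)
L = lindblad_supermatrix(H, ops)
print(L.shape, "normal operator?", np.allclose(L @ L.conj().T, L.conj().T @ L))
```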
The Jordan form \\(L_{\\mathrm{J}}\\) of a \\(D^{2}\\times D^{2}\\) matrix \\(L\\) is a direct sum of blocks of the form \\(L_{\\mathrm{J}}=\\oplus_{\\alpha=1}^{m}J_{\\alpha}\\) (\\(\\alpha\\) enumerates Jordan blocks), where \\(m\\leq D^{2}\\) is the number of linearly independent eigenvectors of \\(L,\\sum_{\\alpha=1}^{m}n_{\\alpha}=D^{2}\\) where \\(n_{\\alpha}\\equiv\\dim J_{\\alpha}\\) is the dimension of the \\(\\alpha\\)th Jordan block, and \\(J_{\\alpha}=\\lambda_{\\alpha}I_{n_{\\alpha}}+K_{a}\\) where \\(\\lambda_{\\alpha}\\) is the \\(\\alpha\\)th (generally complex-valued) Lindblad-Jordan (LJ) eigenvalue of \\(L\\) (obtained as roots of the characteristic polynomial), \\(I_{n_{\\alpha}}\\) is the \\(n_{\\alpha}\\times n_{\\alpha}\\) dimensional identity matrix, and \\(K_{a}\\) is a nilpotent matrix with elements \\((K_{a})_{ij}=\\delta_{i,j-1}\\) (\\(1\\)'s above the main diagonal), where \\(\\delta\\) is the Kronecker symbol. Since the sets of left and right eigenvectors of \\(L\\) are incomplete (they do not span the vector space), they must be completed to form a basis. Instantaneous right \\(\\{|\\mathcal{D}_{\\beta}^{(j)}[\\vec{R}(t)]\\rangle\\}\\) and left \\(\\{\\langle\\langle\\mathcal{E}_{\\alpha}^{(i)}[\\vec{R}(t)]|\\}\\) bi-orthonormal bases in Hilbert-Schmidt space can always be systematically constructed by adding \\(n_{\\alpha}-1\\) new orthonormal vectors to the \\(\\alpha\\)th left or right eigenvector, such that they obey the orthonormality condition \\(\\langle\\langle\\mathcal{E}_{\\alpha}^{(i)}|\\mathcal{D}_{\\beta}^{(j)}\\rangle \\rangle=\\delta_{\\alpha\\beta}\\delta^{ij}\\)[31]. Here superscripts enumerate basis states inside a given Jordan block (\\(i,j\\in\\{0, ,n_{\\alpha}-1\\}\\)). When \\(L\\) is diagonalizable, \\(\\{|\\mathcal{D}_{\\beta}^{(j)}[\\vec{R}(t)]\\rangle\\}\\) and \\(\\{\\langle\\langle\\mathcal{E}_{\\alpha}^{(i)}[\\vec{R}(t)]|\\}\\) are simply the bases of right and left eigenvectors of \\(L\\), respectively. If \\(L\\) is not diagonalizable, these right and left bases can be constructed by suitably completing the set of right and left eigenvectors of \\(L\\) (which can be identified with columns of \\(S\\) and \\(S^{T}\\), respectively, associated with distinct eigenvalues \\(\\lambda_{\\alpha}\\)). Then for all times \\(t\\) \\[L|\\mathcal{D}_{\\alpha}^{(j)}\\rangle\\rangle = |\\mathcal{D}_{\\alpha}^{(j-1)}\\rangle\\rangle+\\lambda_{\\alpha}| \\mathcal{D}_{\\alpha}^{(j)}\\rangle\\rangle,\\] \\[\\langle\\langle\\mathcal{E}_{\\alpha}^{(i)}|L = \\langle\\langle\\mathcal{E}_{\\alpha}^{(i+1)}|+\\lambda_{\\alpha} \\langle\\langle\\mathcal{E}_{\\alpha}^{(i)}|,\\] so that the \\(\\{|\\mathcal{D}_{\\alpha}^{(j)}\\rangle\\}\\) and \\(\\{\\langle\\langle\\mathcal{E}_{\\alpha}^{(i)}|\\}\\) preserve the Jordan block structure (see Appendix A of Ref. [29] for a detailed discussion of these issues). In order to define geometric phases in open systems, the coherence vector is expanded in the instantaneous right vector basis \\(\\{|\\mathcal{D}_{\\beta}^{(j)}[\\vec{R}(t)]\\rangle\\rangle\\}\\) as \\[|\\rho(t)\\rangle)=\\sum_{\\beta=1}^{m}\\sum_{j=0}^{n_{\\beta}-1}p_{\\beta}^{(j)}(t) \\,e^{\\int_{0}^{t}\\lambda_{\\beta}(t^{\\prime})dt^{\\prime}}\\,|\\mathcal{D}_{\\beta} ^{(j)}[\\vec{R}(t)]\\rangle\\rangle, \\tag{15}\\] where the dynamical phase \\(\\exp[\\int_{0}^{t}\\lambda_{\\beta}(t^{\\prime})dt^{\\prime}]\\) is explicitly factored out. The coefficients \\(\\{p_{\\beta}^{(j)}(t)\\}\\) play the role of \"geometric\" (non-dynamical) amplitudes. 
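For a diagonalizable \(L\), the instantaneous right and left bases above can be taken directly from a standard eigensolver, normalized so that \(\langle\langle\mathcal{E}_{\alpha}|\mathcal{D}_{\beta}\rangle\rangle=\delta_{\alpha\beta}\). The helper below is a minimal sketch of that step; it assumes non-degenerate eigenvalues and does not construct the extra vectors needed for genuine multi-dimensional Jordan blocks.

```python
import numpy as np
from scipy.linalg import eig

def biorthonormal_bases(L):
    """Eigenvalues, right vectors (columns of D) and left vectors (rows of E)
    of a diagonalizable L, normalized so that E @ D = identity."""
    evals, VL, VR = eig(L, left=True, right=True)
    E = VL.conj().T                       # rows satisfy  E[i] @ L = evals[i] * E[i]
    E = E / np.diag(E @ VR)[:, None]      # enforce <<E_a|D_b>> = delta_ab
    return evals, VR, E
```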
We assume that the open system is in the adiabatic regime, i.e., _Jordan blocks associated with distinct eigenvalues evolve in a decoupled manner_ [31]. Then: \[\dot{p}_{\alpha}^{(i)} = p_{\alpha}^{(i+1)}-\sum_{\beta\,|\,\lambda_{\beta}=\lambda_{\alpha}}\sum_{j=0}^{n_{\beta}-1}p_{\beta}^{(j)}\langle\langle\mathcal{E}_{\alpha}^{(i)}|\dot{\mathcal{D}}_{\beta}^{(j)}\rangle\rangle. \tag{16}\] Note that, due to the restriction \(\lambda_{\beta}=\lambda_{\alpha}\), the dynamical phase has disappeared. A condition on the total evolution time, which allows for the neglect of coupling between Jordan blocks used in deriving Eq. (16), was given in Ref. [31]. This condition generalizes the standard closed-system adiabaticity condition [26], from which Eq. (5) is derived. Nevertheless, we have used the simpler condition (5) in our simulations below, as it is rather accurate in the present open system case. For closed systems, Abelian geometric phases are associated with non-degenerate levels of the Hamiltonian, while non-Abelian phases appear in the case of degeneracy. In the latter case, a subspace of the Hilbert space acquires a geometric phase which is given by a matrix rather than a scalar. For open systems, one-dimensional Jordan blocks are associated with Abelian geometric phases in the absence of degeneracy, or with non-Abelian geometric phases in case of degeneracy. Multi-dimensional Jordan blocks are always tied to a non-Abelian phase [29].

#### iii.2.1 The Abelian case: generalized Berry phase

Consider the simple case of a non-degenerate one-dimensional Jordan block (a block that is a \(1\times 1\) submatrix containing an eigenvalue of \(L\)). In this case, the absence of degeneracy implies, in Eq. (16), that \(\lambda_{\beta}=\lambda_{\alpha}\Rightarrow\alpha=\beta\) (non-degenerate blocks). Moreover, since the blocks are assumed to be one-dimensional we have \(n_{\alpha}=1\), which allows for removal of the upper indices in Eq. (16), resulting in \(\dot{p}_{\alpha}=-p_{\alpha}\langle\langle\mathcal{E}_{\alpha}|\dot{\mathcal{D}}_{\alpha}\rangle\rangle\). The solution of this equation is \(p_{\alpha}(t)=p_{\alpha}(0)\exp{[i\beta_{\alpha}(t)]}\), with \(\beta_{\alpha}(t)=i\int_{0}^{t}\langle\langle\mathcal{E}_{\alpha}(t^{\prime})|\dot{\mathcal{D}}_{\alpha}(t^{\prime})\rangle\rangle dt^{\prime}\). For a cyclic evolution in parameter space along a closed curve \(C\), one then obtains the Abelian geometric phase associated with the Jordan block \(\alpha\) [29]: \[\beta_{\alpha}(C)=i\oint_{C}\langle\langle\mathcal{E}_{\alpha}(\vec{R})|\vec{\nabla}|\mathcal{D}_{\alpha}(\vec{R})\rangle\rangle\cdot d\vec{R}. \tag{17}\] This expression for the geometric phase bears clear similarity to the original Berry formula, Eq. (7). Note that in general \(\beta_{\alpha}(C)\) can be complex, since \(\langle\langle{\cal E}_{\alpha}|\) and \(|{\cal D}_{\alpha}\rangle\rangle\) are not related by transpose conjugation. Thus, the geometric phase may have real and imaginary contributions, the latter affecting the visibility of the phase. As shown in Ref.
[29], the expression above for \\(\\beta_{\\alpha}(C)\\) satisfies a number of desirable properties: it is _geometric_ (i.e., depends only on the path traversed in parameter space), it is _gauge invariant_ (i.e., one cannot modify the geometric phase by redefining \\(\\langle\\langle{\\cal E}_{\\alpha}|\\) or \\(|{\\cal D}_{\\alpha}\\rangle\\)) via multiplication of one of them by a complex factor); it has the proper _closed system limit_ (if the interaction with the bath vanishes, \\(\\beta_{\\alpha}(C)\\) reduces to the usual difference of geometric phases acquired by the density operator in the closed case). #### iii.2.2 The non-Abelian case Ref. [29] also derived the non-Abelian open systems geometric phase, for the case of degenerate one-dimensional Jordan blocks. A non-Abelian geometric phase in fact arises in our STIRAP model when the spontaneous emission rates are equal. However, we shall not treat this case in the present paper. ### Solution of the STIRAP model Returning to the STIRAP model, let us represent the density matrix \\(\\rho\\) in terms of the coherence vector \\(\\vec{v}\\) as \\[\\rho=\\frac{1}{N}\\left[{\\bf 1}+\\sqrt{\\frac{N(N-1)}{2}}\\sum_{\\alpha}v_{\\alpha} \\Omega_{\\alpha}\\right], \\tag{18}\\] where the \\(\\Omega_{\\alpha}\\) are the Gell-Mann matrices [39]. Writing the Lindblad equation \\(\\dot{\\rho}=L\\rho\\) in the \\(\\{\\Omega_{\\alpha}\\}\\) basis, we obtain \\(\\ddot{\\vec{v}}=L\\vec{v}\\), where \\(\\vec{v}=\\frac{1}{3}[1,\\sqrt{3}v_{1},\\cdots,\\sqrt{3}v_{8}]^{t}\\) is a nine-component coherence vector. In the same basis we can express the Liouville operator \\(L\\) in the following form: \\[L=\\left(\\begin{array}{cccccccc}0&0&0&0&0&0&0&0&0\\\\ 0&-\\gamma^{\\prime}_{+}&0&0&0&g_{2}&0&g_{1}&0\\\\ 0&0&-\\gamma^{\\prime}_{+}&0&-g_{2}&0&g_{1}&0&0\\\\ \\frac{\\gamma_{-}}{2}+\\gamma^{\\prime}_{-}&0&0&-2\\gamma^{\\prime}_{+}&0&g_{1}&0&- g_{2}&-\\frac{\\gamma_{-}-\\gamma^{\\prime}_{-}}{\\sqrt{3}}\\\\ 0&0&g_{2}&0&-\\gamma_{+}-\\frac{\\gamma^{2}_{+1}}{2}&0&0&0&0\\\\ 0&-g_{2}&0&-g_{1}&0&-\\gamma_{+}-\\frac{\\gamma^{2}_{+2}}{2}&0&0&-\\sqrt{3}g_{1}\\\\ 0&0&-g_{1}&0&0&0&-\\gamma_{+}-\\frac{\\gamma^{2}_{+1}}{2}&0&0\\\\ 0&-g_{1}&0&g_{2}&0&0&0&-\\gamma_{+}-\\frac{\\gamma^{2}_{+2}}{2}&-\\sqrt{3}g_{2}\\\\ \\sqrt{3}\\gamma_{+}&0&0&0&0&\\sqrt{3}g_{1}&0&\\sqrt{3}g_{2}&-2\\gamma_{+}\\\\ \\end{array}\\right)\\;, \\tag{19}\\] where we have used the Hamiltonian (1). Here \\[\\gamma_{+}=(\\gamma^{2}_{13}+\\gamma^{2}_{23})/2,\\quad\\gamma_{-}=(\\gamma^{2}_{13 }-\\gamma^{2}_{23}),\\quad\\gamma^{\\prime}_{\\pm}=(\\gamma^{2}_{12}\\pm\\gamma^{2}_ {21}), \\tag{20}\\] and \\(g_{1,2}\\) are given in Eq. (4). The eigenvalues and left and right eigenvectors of \\(L\\) can be found in terms of the parameters \\(\\gamma_{\\pm},\\gamma^{\\prime}_{\\pm}\\) and \\(g_{1,2}\\), but the expressions are very complicated. The analytic determination of the corresponding left and right eigenvectors is cumbersome, so instead we have used a numerical procedure, which is based on the discussion presented in subsection III.2 above [36]. To exhibit some of the analytic structure, we temporarily make the further simplification that the spontaneous emission rates are equal: \\(\\gamma_{13}=\\gamma_{23}\\equiv\\gamma\\). This is the case, e.g., for \\(D_{2}\\) transitions in \\({}^{23}\\)Na [40]. In addition we assume temporarily that the collisional relaxation rates vanish: \\(\\gamma_{12}=\\gamma_{21}=0\\). 
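The numerical procedure mentioned above is not spelled out in the text, so the following self-contained sketch shows one way such a computation could be organized: discretize \(\theta\in(0,\pi/2)\), rebuild \(L\) at each step, follow a chosen non-degenerate eigenvalue branch by overlap with the previous step, and accumulate the discretized integrand of Eq. (24) as \(i\sum_{k}\ln\langle\langle\mathcal{E}_{k}|\mathcal{D}_{k+1}\rangle\rangle\). For simplicity the path keeps \(\sqrt{g_{1}^{2}+g_{2}^{2}}\) fixed and endpoint gauge conventions are glossed over; this is an illustration of the bookkeeping, not the authors' implementation [36].

```python
import numpy as np
from scipy.linalg import eig

def supermatrix(g1, g2, gam13, gam23, gam12=0.0, gam21=0.0):
    """Lindblad supermatrix of Eq. (11) in the column-stacked |i><j| basis."""
    ket, I = np.eye(3), np.eye(3)
    H = np.zeros((3, 3), dtype=complex)
    H[2, 0] = H[0, 2] = g1
    H[2, 1] = H[1, 2] = g2
    ops = [gam13 * np.outer(ket[0], ket[2]), gam23 * np.outer(ket[1], ket[2]),
           gam12 * np.outer(ket[0], ket[1]), gam21 * np.outer(ket[1], ket[0])]
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for G in ops:
        GdG = G.conj().T @ G
        L += np.kron(G.conj(), G) - 0.5 * (np.kron(I, GdG) + np.kron(GdG.T, I))
    return L

def abelian_phase(gam13, gam23, g0=15.0, n=4000):
    """Discretized Eq. (24) for the branch with the largest positive imaginary
    eigenvalue near theta = 0 (the lambda_1-like branch)."""
    beta, E_prev, D_prev = 0.0 + 0.0j, None, None
    for th in np.linspace(1e-3, np.pi / 2 - 1e-3, n):
        g1, g2 = g0 * np.sin(th), g0 * np.cos(th)        # constant-amplitude path (illustrative)
        w, VL, VR = eig(supermatrix(g1, g2, gam13, gam23), left=True, right=True)
        if E_prev is None:
            idx = int(np.argmax(w.imag))                 # pick the lambda_1-like branch
        else:
            idx = int(np.argmax(np.abs(E_prev @ VR)))    # follow it by overlap
        D, E = VR[:, idx], VL[:, idx].conj()
        E = E / (E @ D)                                  # <<E|D>> = 1
        if D_prev is not None:
            beta += 1j * np.log(E_prev @ D)              # i <<E_k | D_{k+1}>> increment
        D_prev, E_prev = D, E
    return beta

print(abelian_phase(gam13=0.5, gam23=1.0))               # placeholder rates
```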
With these simplifications \(L\) has the following three sets of eigenvalues (the ordering of the subscripts is explained below): \[\{\lambda_{4},\lambda_{5},\lambda_{6}\}=\{0,\,-\gamma^{2},\,-\gamma^{2}-\frac{Q}{3P}+P\}\;,\] \[\lambda_{1}=\Big(-\gamma^{2}+\frac{Q}{6P}-\frac{P}{2}\Big)+\frac{i\sqrt{3}}{2}\Big(\frac{Q}{3P}+P\Big),\] \[\lambda_{9}=\Big(-\gamma^{2}+\frac{Q}{6P}-\frac{P}{2}\Big)-\frac{i\sqrt{3}}{2}\Big(\frac{Q}{3P}+P\Big),\] \[\lambda_{2}=\lambda_{3}=\frac{1}{2}(-\gamma^{2}+i\sqrt{Q});\;\lambda_{7}=\lambda_{8}=\frac{1}{2}(-\gamma^{2}-i\sqrt{Q}), \tag{21}\] where \[P = \left(x+\sqrt{x^{2}+(Q/3)^{3}}\right)^{1/3},\] \[Q = 4(g_{1}^{2}+g_{2}^{2})-\gamma^{4},\] \[x = \gamma^{2}(g_{1}^{2}+g_{2}^{2}), \tag{22}\] and the last set of four eigenvalues appears in two degenerate pairs. Because of this, the corresponding open-system geometric phase is non-Abelian (recall the discussion above), but we do not consider this case here. In the closed system limit \((\gamma\to 0)\), \(L\) becomes \(-i[H,\cdot]\) and its eigenvalues are \(\epsilon_{nm}=i(E_{n}-E_{m})\) with \(n,m\in\{0,+,-\}\), where \(E_{n}\) are the eigenvalues of the control Hamiltonian \(H\) as given in Eq. (2). The grouping in Eq. (21) represents this limit in the following sense: \[\lambda_{4},\lambda_{5},\lambda_{6}\rightarrow\epsilon_{nn}=0;\] \[\lambda_{1}\rightarrow\epsilon_{+-}=2i\sqrt{g_{1}^{2}+g_{2}^{2}};\] \[\lambda_{9}\rightarrow\epsilon_{-+}=-2i\sqrt{g_{1}^{2}+g_{2}^{2}};\] \[(\lambda_{2}\rightarrow\epsilon_{+0})=(\lambda_{3}\rightarrow\epsilon_{0-})=i\sqrt{g_{1}^{2}+g_{2}^{2}};\] \[(\lambda_{7}\rightarrow\epsilon_{0+})=(\lambda_{8}\rightarrow\epsilon_{-0})=-i\sqrt{g_{1}^{2}+g_{2}^{2}}. \tag{23}\] The corresponding eigenvectors of \(L\) reduce to \(|n\rangle\langle m|\). The subscripts of the \(\lambda_{\alpha}\) thus represent the ordering of the eigenvalues in the closed system limit. We find that the degeneracy leading to a non-Abelian open-system geometric phase appears only when \(\gamma_{13}=\gamma_{23}\) _and_ \(\gamma_{12}=\gamma_{21}=0\), or in the closed system limit. By a coordinate transformation from the control fields \(g_{1,2}\) to the angle \(\theta=\arctan(g_{1}/g_{2})\) we obtain, similarly to the closed system case, from the generalized geometric phase formula Eq. (17): \[\beta_{\alpha}=i\int_{0}^{\pi/2}d\theta\,\langle\langle{\cal E}_{\alpha}|\frac{d}{d\theta}|{\cal D}_{\alpha}\rangle\rangle\;. \tag{24}\] In the closed system limit, this expression for the phase associated with the \(\alpha\)th eigenvector of \(L\) yields not the absolute phase of each of the adiabatic eigenstates of the system Hamiltonian, but rather their phase _differences_. This is natural, as only a phase difference is an experimentally measurable quantity.

## IV Results and discussion

We plot the real part of the open system Abelian geometric phase, i.e., Eq. (24), for various combinations of the spontaneous emission and collisional relaxation rates in Figs. 3-10. _The main finding is that the answer to the question we posed in the introduction, "Is it possible for the environment to induce a geometric phase where there is none if the system is treated as closed?", is affirmative_. Indeed, a glance at Figs. 3-10 reveals that the geometric phase is non-zero, and in fact increases with the decay rates. Figures 3 and 4 correspond to the case with only spontaneous emission (\(\gamma_{13},\gamma_{23}\neq 0\)) and no collisional relaxation.
Clearly, the phases increase monotonically with the emission rates. It is interesting to note that (to within our numerical accuracy) \(\mathrm{Re}\beta_{1}=-\mathrm{Re}\beta_{9}\) when \(\gamma_{13}=\gamma_{23}\). Recalling that \(\lambda_{1}\to i(E_{+}-E_{-})\) and \(\lambda_{9}\to i(E_{-}-E_{+})\) [Eq. (23)], this symmetry can be traced back to the difference between the adiabatic eigenstates \(|+\rangle\) and \(|-\rangle\), which differ only in the sign of the coefficient in front of the excited state \(|3\rangle\) [recall Eq. (3) and that \(\phi=\pi/4\)]. When the spontaneous emission rates are equal, this difference in sign between the (\(|3\rangle\) component of the) states \(|+\rangle\) and \(|-\rangle\) generates only a difference in sign between the corresponding geometric phases, but not in magnitude, i.e., \(\beta_{1}=-\beta_{9}\). We also note that, in spite of the symmetry between the states \(|1\rangle\) and \(|2\rangle\) in our model, there is an asymmetry between the curves \(\gamma_{23}=2\gamma_{13}\) and \(\gamma_{23}=\frac{1}{2}\gamma_{13}\) in Fig. 3 for a given geometric phase, e.g., \(\beta_{1}\). Indeed, one might have expected a symmetry under interchange of the indices \(1\) and \(2\), in the sense that, e.g., the points \(\beta_{1}(\gamma_{23}=2)\) and \(\beta_{1}(\gamma_{23}=1)\) on the curves \(\gamma_{23}=2\gamma_{13}\) and \(\gamma_{23}=\frac{1}{2}\gamma_{13}\), respectively, should have overlapped. That this is not the case is because the order of the pulses \(g_{2}\) (first) and \(g_{1}\) (second) breaks the symmetry between states \(|1\rangle\) and \(|2\rangle\). Indeed, Fig. 4 shows the results for \(\beta_{1}\) when the pulse order is reversed (now \(g_{1}\) precedes \(g_{2}\)), and as a consequence the order of the curves \(\gamma_{23}=2\gamma_{13}\) and \(\gamma_{23}=\frac{1}{2}\gamma_{13}\) is now reversed as well. In other words, swapping the pulse order is equivalent to swapping the spontaneous emission rates \(\gamma_{23}\) and \(\gamma_{13}\). In Fig. 5 we show the real part of the open system geometric phases \(\beta_{1}\) and \(-\beta_{9}\), for the case when spontaneous emission vanishes (\(\gamma_{13}=\gamma_{23}=0\)) and there is only collisional relaxation (\(\gamma_{12},\gamma_{21}\neq 0\)). The results are qualitatively similar to those in Fig. 3, with the exception that now \(\mathrm{Re}\beta_{1}\neq-\mathrm{Re}\beta_{9}\) when \(\gamma_{12}=\gamma_{21}\). This symmetry breaking can be attributed to the fact that the collisional relaxation operators directly connect the states \(|1\rangle\) and \(|2\rangle\), whereas these states are only connected to second order under spontaneous emission and under the control Hamiltonian (1). The other interesting difference between Figs. 3 and 5 is that spontaneous emission alone leads to larger values of the geometric phase than collisional relaxation alone does.
Figure 6: Spontaneous emission without collisional relaxation: variation of the real part of the phases, in units of \(2\pi\), with respect to \(\gamma_{23}\tau\) for \(\gamma_{23}=2\gamma_{13}\). Other parameters: \(t_{0}=4\tau/3\), \(g_{01}\tau=g_{02}\tau=15\), and \(\tau=1\).
Figure 7: Spontaneous emission without collisional relaxation. Parameters the same as in Fig. 6, except that \(\gamma_{23}=\gamma_{13}/2\).
Figure 9: Collisional relaxation without spontaneous emission. Parameters the same as in Fig. 6, except that \(\gamma_{12}=\gamma_{21}/2\).

## V Conclusions

Our study of STIRAP in an open three-level quantum system reveals that the interaction with the environment can endow a system with a geometric phase where none exists in the absence of that interaction. Mathematically, the vanishing geometric phase in the closed system case is attributable to the vanishing integrand in the Berry formula. In a certain sense this is easily understood as the result of having a geometric phase determined by only a single parameter (\(\theta\)), whence no solid angle is traced out in parameter space. It would then be natural to conclude that, by including the interaction with the environment, a non-zero solid angle is created, implying that in the presence of decoherence motion along an orthogonal direction in parameter space must have taken place. However, one must be careful in accepting this explanation, since in fact the polar angles \(\theta\) and \(\phi\) do not properly describe the parameter space in our problem: indeed, \(\theta\) varies from \(0\) to \(\pi/2\) (while \(\phi\) is constant) and thus does not describe a closed path, while the correct parameter space is that defined by the pulse amplitudes \(g_{1}\) and \(g_{2}\) (see Fig. 2). Thus a proper explanation of the intriguing effect of an environmentally induced geometric phase is still lacking and will be undertaken in a future publication. Here we conjecture that the effect is due to the non-commutativity of the driving Hamiltonian and the decohering processes we have considered. It should be possible to test this by using the quantum trajectories approach to the open systems geometric phase [8]. Another interesting open question is to what extent the finding presented here can be made useful in the context of holonomic quantum computing [3], i.e., whether one can constructively exploit the environmentally induced geometric phase for the generation of quantum logic gates.

## References

* (1) M.V. Berry, Proc. Roy. Soc. London Ser. A **392**, 45 (1984).
* (2) _Geometric Phases in Physics_, edited by A. Shapere and F. Wilczek (World Scientific, Singapore, 1989).
* (3) P. Zanardi, Phys. Lett. A **264**, 94 (1999).
* (4) J. A. Jones, V. Vedral, A. Ekert, and G. Castagnoli, Nature **403**, 869 (2000).
* (5) P. Solinas, P. Zanardi, and N. Zanghì, Phys. Rev. A **70**, 042316 (2004).
* (6) L.-A. Wu, P. Zanardi, and D.A. Lidar, Phys. Rev. Lett. **95**, 130501 (2005).
* (7) S.-L. Zhu and P. Zanardi, Phys. Rev. A **72**, 020301(R) (2005).
* (8) I. Fuentes-Guridi, F. Girelli, and E. Livine, Phys. Rev. Lett. **94**, 020503 (2005).
* (9) J. C. Garrison and E. M. Wright, Phys. Lett. A **128**, 177 (1988).
* (10) G. Dattoli, R. Mignani, and A. Torre, J. Phys. A **23**, 5795 (1990).
* (11) C.W. Gardiner and P. Zoller, _Quantum Noise_, Vol. 56 of _Springer Series in Synergetics_ (Springer, Berlin, 2000).
* (12) D. Ellinas, S. M. Barnett, and M. A. Dupertuis, Phys. Rev. A **39**, 3228 (1989).
* (13) D. Gamliel and J. H. Freed, Phys. Rev. A **39**, 3238 (1989).
* (14) K. M. F. Romero, A. C. A. Pinto, and M. T. Thomaz, Physica A **307**, 142 (2002).
* (15) R. S. Whitney and Y. Gefen, Phys. Rev. Lett. **90**, 190402 (2003).
* (16) R. S. Whitney, Y. Makhlin, A. Shnirman, and Y. Gefen, Phys. Rev. Lett. **94**, 070407 (2005).
* (17) I. Kamleitner, J. D. Cresser, and B. C. Sanders, Phys. Rev. A **70**, 044103 (2004).
* (18) A. Carollo, I. Fuentes-Guridi, M. Franca Santos, and V. Vedral, Phys. Rev. Lett. **90**, 160402 (2003).
* (19) A. Bassi and E. Ippoliti, Phys. Rev. A **73**, 062104 (2006). * (20) D. M. Tong, L. C. Kwek, C. H. Oh, J.-L. Chen, and L. Ma, Phys. Rev. A **69**, 054102 (2004). * (21) A. T. Rezakhani and P. Zanardi, Phys. Rev. A **73**, 012107 (2006). * (22) K.-P. Marzlin, S. Ghose, and B. C. Sanders, Phys. Rev. Lett. **93**, 260402 (2004). * (23) E. Sjoqvist, A. K. Pati, A. Ekert, J. S. Anandan, M. Ericsson, D. K. L. Oi, and V. Vedral, Phys. Rev. Lett. **85**, 2845 (2000). * (24) J. G. P. de Faria, A. F. R. de Toledo Piza, and M. C. Nemes, Europhys. Lett. **62**, 782 (2003). * (25) M. Ericsson, E. Sjoqvist, J. Brannlund, D. K. L. Oi, A. K. Pati, Phys. Rev. A **67**, 020101(R) (2003). * (26) A. Messiah, _Quantum Mechanics_ (North-Holland, Amsterdam, 1962), Vol. 2. * (27) G. Florio, P. Facchi, R. Fazio, V. Giovannetti, and S. Pascazio, Phys. Rev. A **73**, 022327 (2006). * (28) A. Trullo. P. Facchi, R. Fazio, G. Florio, V. Giovannetti, S. Pascazio, eprint quant-ph/0604180. * (29) M.S. Sarandy and D.A. Lidar, Phys. Rev. A **73**, 062101 (2006). * (30) H.-P. Breuer and F. Petruccione, _The Theory of Open Quantum Systems_ (Oxford University Press, Oxford, 2002). * (31) M.S. Sarandy and D.A. Lidar, Phys. Rev. A **71**, 012331 (2005). * (32) K. Bergmann, H. Theuer, and B. W. Shore, Rev. Mod. Phys. **70**, 1003 (1998). * (33) R.G. Unanyan, B.W. Shore, and K. Bergmann, Phys. Rev. A **59**, 2910 (1999). * (34) M.B. Plenio, S.F. Huelga, A. Beige and P.L. Knight, Phys. Rev. A **59**, 2468 (1999). * (35) M.S. Kim, J. Lee, D. Ahn, and P.L. Knight, Phys. Rev. A **65**, 040101 (2002). * (36) Our Matlab code for the Jordan form of an arbitrary square matrix is available upon request. * (37) R. Alicki and K. Lendi, _Quantum Dynamical Semigroups and Applications_, No. 286 in _Lecture Notes in Physics_ (Springer-Verlag, Berlin, 1987). * (38) R.A. Horn and C.R. Johnson, _Matrix Analysis_ (Cambridge University Press, Cambridge, UK, 1999). * (39) M. Gell-Mann and Y. Ne'eman, _The Eightfold Way_ (Benjamin, New York, 1964). * (40) Details of atomic properties of Sodium\\(D\\) lines can be found in, e.g., george.ph.utexas.edu/~dsteck/alkalidata/sodiumnumbers.pdf.
We consider the STIRAP process in a three-level atom. Viewed as a closed system, no geometric phase is acquired. But in the presence of spontaneous emission and/or collisional relaxation we show numerically that a non-vanishing, purely real, geometric phase is acquired during STIRAP, whose magnitude grows with the decay rates. Rather than viewing this decoherence-induced geometric phase as a nuisance, it can be considered an example of \"beneficial decoherence\": the environment provides a mechanism for the generation of geometric phases which would otherwise require an extra experimental control knob.
Write a summary of the passage below.
arxiv-format/0612753v2.md
# Hard X-Ray Properties of Groups of Galaxies as Observed with ASCA

Kazuhiro Nakazawa Institute of Space and Astronautical Science, JAXA, 3-1-1 Yoshino-dai, Sagamihara, Kanagawa 229-8510 [email protected] Kazuo Makishima1 Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 Yasushi Fukazawa Department of Physical Science, Hiroshima University, 1-3-1 Kagamiyama, Higashi-hiroshima, Hiroshima 739-8526 Footnote 1: Also with RIKEN, Wakou-shi, Saitama 351-0198, Japan

## 1 Introduction

Clusters of galaxies show rich evidence for huge energy input within their vast inter-galactic space. Mega-parsec scale radio halos observed in many rich clusters provide direct evidence of GeV electrons accelerated within them (e.g. Feretti, Giovannini 1996). Significant temperature variations in the cluster hot gas, detected by ASCA (e.g. Furusho et al. 2001) as well as Chandra and XMM-Newton (e.g. Markevitch et al. 2003; Briel et al. 2004), can be interpreted as relics of energy dissipation associated with cluster mergers. Another striking feature is the so-called "cavities" in cluster centers (e.g. Birzan et al. 2004), which are suggestive of an energy input at a level as high as \(10^{58-60}\) erg. These new violent features of clusters of galaxies challenge the classical view of a single phase hot plasma hydrostatically filling the gravitational potential formed by a dark matter halo. The intra-cluster volume, for example, may harbor plasma components much hotter than the virial temperature and/or a significant amount of non-thermal particle population. The same story may apply also to groups of galaxies, the subject of the present paper. They host a fair amount of hot gas, called inter-galactic medium (IGM), with a temperature of about 1 keV (see e.g. Mulchaey et al. 1996). X-ray emission from the IGM is dominated by Fe-L shell lines which appear in the soft X-ray range around 0.6-1.4 keV. Since the emissivity of the IGM is very low in energies above 2 keV, we can search this "hard" energy region of the X-ray spectra of groups of galaxies for any additional harder emission component, such as thermal signals from hotter plasmas mixed in the IGM or non-thermal emission from accelerated particles. The ASCA mission (Tanaka et al. 1994), operated from 1993 to 2000, was equipped with four X-ray mirror optics covering the energy range up to \(\sim 10\) keV. Although their angular resolution was limited, the total effective area of ASCA at 6 keV is larger than that of Chandra. Furthermore, the GIS experiment (Makishima et al. 1996; Ohashi et al. 1996) onboard ASCA is characterized by its very low and stable background, together with a wide field of view (\(\sim 45^{\prime}\) in diameter) and a high quantum efficiency toward \(\sim 10\) keV. Thanks to these properties, the GIS background level normalized to the effective area and sky area, shown in figure 1, has been the lowest among the X-ray detectors with imaging spectroscopic capabilities up to 10 keV. Therefore, data from the GIS is least affected by background uncertainties (both statistical and systematic), which strongly limit the sensitivity to largely (\(\gtrsim 5^{\prime}\)) extended emission in the hard X-ray band above \(\sim 4\) keV. The power of the GIS detector is demonstrated, for example, by the detection of non-thermal inverse Compton emission from the radio lobe of Fornax A (Kaneda et al. 1995).
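As a back-of-the-envelope illustration of why the band above 2 keV is so clean for a \(\sim 1\) keV plasma, the toy calculation below integrates a purely exponential thermal continuum (a crude stand-in for the vMEKAL/vAPEC models used later; emission lines and the Gaunt factor are ignored) and compares the 4-8 keV energy flux with that in the full 0.5-10 keV band.

```python
import numpy as np

def band_flux(kT, e_lo, e_hi):
    """Energy flux (arbitrary normalization) of a toy thermal continuum whose
    photon spectrum is dN/dE ~ exp(-E/kT)/E, i.e. energy spectrum ~ exp(-E/kT)."""
    return kT * (np.exp(-e_lo / kT) - np.exp(-e_hi / kT))

for kT in (1.0, 5.0):   # keV; ~1 keV is typical of group IGM, ~5 keV of a rich cluster
    frac = band_flux(kT, 4.0, 8.0) / band_flux(kT, 0.5, 10.0)
    print(f"kT = {kT:3.1f} keV: 4-8 keV carries {100 * frac:.1f}% of the 0.5-10 keV flux")
```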
As a result, the ASCA GIS archive is expected, even today, to provide the best opportunity to search for excess hard X-ray signals from groups of galaxies. Its main drawback, namely the poor sensitivity for the exclusion of contaminating point sources, can be compensated for by referring to public data with higher angular resolution, such as those from ROSAT and Chandra. Based on this idea, Fukazawa et al. (2001) analyzed the ASCA data of the compact galaxy group HCG 62, and discovered evidence for a hard X-ray excess above the thermal IGM emission. Their approach is considered applicable not only to this particular group, but also to almost all near-by low-temperature groups. In this paper, we hence study 18 near-by low-temperature galaxy groups in the ASCA archive data. In section 2, we present the list of targets with their selection criteria. Section 3 describes the analysis method we employed. Results from the analysis are summarized in section 4. Discussion is presented in section 5, followed by a brief summary in the last section. Throughout the present paper, we assume the Hubble constant to be \(H_{0}=75\) km s\({}^{-1}\) Mpc\({}^{-1}\). All the errors refer to 90% confidence levels, unless otherwise noted.

## 2 Target selection

We surveyed all the ASCA archival data for groups of galaxies that are appropriate for our purpose. To separate any additional hard components from the IGM emission within the ASCA band-pass, we need targets with soft and bright IGM emission. In addition, targets having near-by hard X-ray sources must be avoided, because the wide wing of the point spread function (PSF) of the ASCA mirror degrades the sensitivity. We must avoid even hard sources outside the GIS field of view, because they produce stray light. We selected objects which have an IGM temperature lower than 1.7 keV and a 0.5-10 keV flux higher than \(\sim 1\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\). We excluded objects with a luminosity less than \(5\times 10^{41}\) erg s\({}^{-1}\), most of which are isolated elliptical galaxies rather than groups: this is to reduce the hard X-ray contribution from unresolved discrete sources, such as low mass X-ray binaries (LMXBs). We also examined the ROSAT all sky map and other literature for the absence of nearby (\(<2^{\circ}\)) active galactic nuclei (AGN). The sample thus selected consists of 18 groups, including HCG 62. We list them in table 1 together with their optical properties. Although the sample is far from being complete, it includes a variety of objects. There are three Hickson compact groups, another three (NGC 1132, NGC 1550 and NGC 6521) X-ray selected groups discovered by the ROSAT survey, and 11 relatively loose groups. Their velocity dispersion ranges from \(\sigma=169\) to 474 km s\({}^{-1}\). Table 2 summarizes the ASCA observation log of the objects in our sample. The 18 groups are covered by 39 observations in total, of which about one third are offset pointings. We analyzed the GIS data in all observations. In contrast, the SIS data are utilized only in the earliest observation of the relevant object, unless otherwise noted. This is to avoid the significant changes of the SIS response with time (e.g. Hwang et al. 1999).

## 3 Data screening and background estimation

Our strategy is to use the GIS and SIS in combination, making the best use of their characteristics. The GIS has a higher stopping power, together with a lower and stabler detector non X-ray background (NXB).
In addition, the background of the GIS has been extensively studied, including both the Cosmic X-ray background (CXB) and the NXB (Ishisaki 1997, Kushino et al. 2002). Thus, the GIS data provide the principal probe with which to search for hard emission. However, the GIS is not as good as the SIS in diagnostics of soft thermal X-ray spectra, because of a lower energy resolution and a poorer low-energy quantum efficiency than those of the SIS. Therefore, we employ the SIS data in order to accurately fix the thermal IGM emission. The data from SIS0 and SIS1 were screened with \"_rev.2 standard processing_\", which selects those data taken under the cut-off rigidity higher then 6 GV, and with the source angle above the bright and night earth rim higher than \\(20^{\\circ}\\) and \\(10^{\\circ}\\), respectively. We utilized the bright mode data and coadded the two (SIS0 and SIS1) spectra into a single SIS data. The background spectra were obtained from blank-sky observations, filtered through the same procedure. Figure 1: Background spectra of imaging X-ray detectors per sky area, normalized also to the mirror effective area. Results of the ASCA GIS and the SIS (this work) are compared with those of the Chandra ACIS I 3 and the XMM-Newton EPIC mos 1 (from template background files). Spectra are extracted from a central \\(15^{\\prime}\\) circular region for the GIS, one full chip for the SIS and ACIS I 3, and a central \\(8^{\\prime}.5\\) circular region for EPIC mos. The effective area is for a point source on the nominal aim-point of each detector. The cosmic X-ray background is included. Similarly, we added the data from the two GIS detectors (GIS2 and GIS3) of each observation, and further added (if multiple pointings) all observations to obtain an average GIS spectrum of each target. In order to subtract the background with the highest accuracy, in this paper we adopted the procedure used by Ishisaki (1997), rather than simply using the data from \"_rev.2 standard_\" screening. This method, called \"H02 method\", not only employs a tighter set of data screening conditions, but also models the GIS NXB according to the distribution of counting rates of the events rejected in the on-board anti-coincidence circuit. It allows us an NXB estimation with a 1\\(\\sigma\\) systematic error of 6% and 3%, for 10 ks and 40 ks observations, respectively. To estimate the NXB in each on-source GIS dataset, the H02 method needs NXB templates. For this purpose, we prepared a data base consisting of all the GIS events detected when the ASCA telescope was pointing to the night Earth, over a period from 1993 July through 2000 July. The total exposure amounts to 5.2 Ms. All the events in the data base were sorted into 8 subsets, according to instantaneous values of \"H02 scalar\" which counts the GIS anti-coincidence rate. Each subset then defines a GIS background template, and the actual template to be subtracted from a particular on-source spectrum is constructed as a weighted mean of these templates. Combined with this method, we estimated the CXB using data from 4 different blank sky regions at high galactic latitudes with \\(|b|\\) \\(>\\) 29\\({}^{\\circ}\\), after eliminating discrete sources brighter than \\(\\sim\\) \\(2\\times 10^{-13}\\) erg s\\({}^{-1}\\) cm\\({}^{-2}\\) in the 2-10 keV band. In our analysis, the NXB spectra around 4-8 keV is very important. 
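Schematically, the template synthesis described above boils down to a weighted average: each night-Earth spectrum sorted by the anti-coincidence ("H02") rate is weighted by the fraction of the on-source exposure spent at that rate. The function below is only a cartoon of that final weighting step, with invented array shapes; it is not a re-implementation of the method of Ishisaki (1997).

```python
import numpy as np

def synthesize_nxb(templates, exposure_fractions):
    """Weighted mean of night-Earth NXB templates.

    templates          : (8, n_chan) array, one spectrum per anti-coincidence-rate bin
    exposure_fractions : (8,) array, fraction of the on-source exposure spent in each bin
    """
    w = np.asarray(exposure_fractions, dtype=float)
    return (w / w.sum()) @ np.asarray(templates)

# Toy numbers: 8 rate bins, 1024 spectral channels
rng = np.random.default_rng(0)
templates = rng.random((8, 1024)) * 1e-3          # counts s^-1 channel^-1 (made up)
fractions = np.array([0.05, 0.10, 0.20, 0.25, 0.20, 0.10, 0.07, 0.03])
print(synthesize_nxb(templates, fractions).shape)
```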
In order to further improve the accuracy of the NXB estimation, we utilize the 5.9-10.6 keV counts in the GIS detector periphery (15\\({}^{\\prime}\\) - 22\\({}^{\\prime}\\) from the detector center), hereafter denoted \\(N_{\\rm out}^{\\rm hard}\\), to adjust residual systematic differences in the estimated and actual NXB. In calculating \\(N_{\\rm out}^{\\rm hard}\\), we excluded a region of 3\\({}^{\\prime}\\) radius around each bright point source, and corrected \\(N_{\\rm out}^{\\rm hard}\\) for the decrease of the detector area. About 80% of \\(N_{\\rm out}^{\\rm hard}\\) is due to the NXB, while the remaining 20% the CXB. Any IGM signal is considered to contribute \\(\\leq\\) 0.1 % to \\(N_{\\rm out}^{\\rm hard}\\) assuming typical spectral and spatial parameters of the IGM emission. Then, we calculated \"correction factor\", as a ratio of \\(N_{\\rm out}^{\\rm hard}\\) between the on-source data, and the background model in which the CXB contribution is fixed to those from the blank sky observations. Since a total of 80 ks exposure results in about \\(N_{\\rm out}^{\\rm hard}\\) = 2000 counts, we can estimate this factor with a statistical accuracy of \\(1/\\sqrt{\\big{(}2000\\big{)}/0.8}\\) = 2.8% (1\\(\\sigma\\)). Using the night-earth event data base utilized to construct the NXB templates, we evaluated the accuracy with which the NXB can be reproduced. The entire events in the same data base were again sorted into subsets, but this \\begin{table} \\begin{tabular}{l c c c c c c c} \\hline \\hline target & \\multicolumn{2}{c}{position & z & \\(L_{\\rm B}\\) & another name \\\\ & ( \\(\\alpha\\), \\(\\delta\\) ) & Mpc & & km s\\({}^{-1}\\) & \\(10^{11}L_{\\odot}\\) & \\(10^{20}\\) cm\\({}^{-2}\\) & \\\\ \\hline HCG 51 & 170.614, & 24.294 & 103 & 0.0258 & 240\\({}^{h}\\) & 1.05 & 1.27 & \\\\ HCG 62 & 193.277, & \\(-\\)9.209 & 58.4 & 0.0146 & 376\\({}^{+52}_{-46}\\) & 0.6 & 3.01 & \\\\ HCG 97 & 356.844, & \\(-\\)2.303 & 87.2 & 0.0218 & 372\\({}^{h}\\) & 0.62 & 3.65 & \\\\ NGC 507 & 20.901, & 33.257 & 65.8 & 0.0165 & 595\\({}^{w}\\) & 1.73 & 5.24 & \\\\ NGC 533 & 21.397, & 1.772 & 73.7 & 0.0184 & 464\\({}^{+58}_{-52}\\) & 0.86 & 3.10 & \\\\ NGC 1132 & 43.223, & \\(-\\)1.275 & 92.8 & 0.0232 & – & 0.47 & 5.17 & \\\\ NGC 1399 & 54.622, & \\(-\\)35.450 & 19.0 & 0.0048 & 374\\({}^{d}\\) & 0.45 & 1.34 & Fornax cluster \\\\ NGC 1550 & 64.908, & 2.410 & 49.6 & 0.0123 & – & 0.21 & 11.5 & RX J0419.6+0225 \\\\ NGC 2563 & 125.149, & 21.068 & 59.8 & 0.0149 & 336\\({}^{+44}_{-40}\\) & 0.55 & 4.23 & \\\\ NGC 4325 & 185.796, & 10.606 & 102 & 0.0257 & 265\\({}^{+50}_{-44}\\) & 0.48 & 2.22 & \\\\ NGC 5044 & 198.859, & \\(-\\)16.398 & 36.1 & 0.0090 & 474\\({}^{fs}\\) & 0.73 & 4.93 & WP 23 \\\\ NGC 5846 & 226.622, & 1.606 & 24.3 & 0.0061 & 368\\({}^{+72}_{-61}\\) & 0.57 & 4.26 & \\\\ NGC 6329 & 258.562, & 43.684 & 110 & 0.0276 & – & 1.03 & 2.12 & \\\\ NGC 6521 & 268.942, & 62.604 & 106 & 0.0266 & 387\\({}^{z}\\) & 0.67 & 3.39 & RX J1755.8+6236 \\\\ NGC 7619 & 350.060, & 8.206 & 50.1 & 0.0125 & 780\\({}^{f}\\) & 1.08 & 0.50 & Pegasus group \\\\ Pavo & 304.628, & \\(-\\)70.859 & 56.0 & 0.0137 & 169\\({}^{rc}\\) & 1.71 & 7.00 & \\\\ RGH 80 & 200.058, & 33.146 & 148 & 0.0370 & 467\\({}^{r}\\) & 0.32 & 1.05 & USGC U530 \\\\ S49-147 & 5.375, & 22.402 & 76.0 & 0.0190 & 464\\({}^{+59}_{-21}\\) & 1.13 & 4.06 & \\\\ \\hline \\end{tabular} \\end{table} Table 1: The sample objects selected for the present study. time, with each subset covering one month. 
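In code, the correction factor of the preceding description is just the ratio of observed to predicted 5.9-10.6 keV periphery counts, and its fractional statistical error is the Poisson error of those counts divided by the \(\sim\)80% NXB share, i.e. \((1/\sqrt{2000})/0.8\simeq 2.8\%\) for a typical 80 ks exposure. The sketch below only reproduces that arithmetic; it is not the actual pipeline.

```python
import numpy as np

def nxb_correction(n_obs, n_model, nxb_share=0.8):
    """Correction factor for the modeled NXB and its 1-sigma fractional accuracy.

    n_obs     : observed 5.9-10.6 keV counts at r = 15'-22' (bright sources excised)
    n_model   : counts predicted by the NXB model plus the blank-sky CXB
    nxb_share : fraction of N_out^hard attributable to the NXB (~0.8)
    """
    return n_obs / n_model, (1.0 / np.sqrt(n_obs)) / nxb_share

factor, frac_err = nxb_correction(2000, 1950)      # ~80 ks worth of counts (illustrative)
print(f"correction factor = {factor:.3f} +/- {100 * frac_err:.1f}%")
```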
The monthly exposure scatters between 18 ks and 150 ks, with an average of 85 ks; those months with the exposure less than 40 ks were discarded. From each monthly-accumulated night Earth spectrum, we subtracted the template background synthesized in the same way as before using the H02 method with the correction using \\(N_{\\rm out}^{\\rm hard}\\). (Each monthly spectrum and the template are both derived from the same data base, but using different sortings.) Figure 2 shows the ratio of the count rate of each monthly spectrum, to that dictated by the synthesized template. While the \\(N_{\\rm out}^{\\rm hard}\\) correction utilizes the 5.9-10.6 keV band, the comparison in figure 2 (left) is carried out in the 4-8 keV band, which is used in the following sections to search for excess hard X-ray emission. Including the statistical error of the 4-8 keV counts, our method provides an rms scatter of 3.3%, which is better than what is achieved by the original H02 method alone (4.0%). In figure 2 (left), the ratio exhibits a gradual decrease by about 1-2%, after the month 66 (1999 July); this is due to orbit decay of ASCA, as is clear from figure 2 (right). By representing this trend by two linear segments as shown by a dashed line in the figure, the reproducibility of our NXB estimation method was improved to 3.0%. In the following analysis, the systematic error in the NXB estimation in each target is defined as a quadrature sum of the statistical error associated with \\(N_{\\rm out}^{\\rm hard}\\), and a 1% systematic error representing any residual unknown factors. ## 4 Data Analysis and Results ### X-ray images Figure 3 and figure 4 show 0.5-10 keV GIS images of the 18 objects, obtained after subtracting the background (NXB and CXB) as derived above. Every object thus exhibits diffuse IGM emission with a roughly circular profile, which is detected up to a radius of \\(10^{\\prime}-25^{\\prime}\\). The X-ray centroid is generally coincident in position with a bright (often the brightest) elliptical galaxy in the system. For each object, we then defined a spectral integration region, as indicated in these figures (dashed circles) and listed in table 2. These regions are fully covered by the GIS, but only partially by the SIS. We eliminated regions around bright contaminating sources, such as the NGC 1404 galaxy close to NGC 1399 and the NGC 499 galaxy near NGC 507. Similarly, we removed regions of \\(3^{\\prime}\\) radius around point sources in the 2RXP catalog1, if their 0.1-2.5 keV count rates are higher than \\(4\\times 10^{-3}\\) counts s\\({}^{-1}\\). From independent analysis using our GIS data, the 2-10 keV fluxes of these sources are confirmed to be \\(\\lesssim 2\\times 10^{-13}\\) erg s\\({}^{-1}\\) cm\\({}^{-2}\\). Positions of all removed sources are also presented in figure 4. 
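The source removal used for the spectral integration regions reduces to a purely geometric cut: accept positions within the accumulation radius of the X-ray centroid, but reject anything within \(3^{\prime}\) of a cataloged point source above the count-rate threshold. A minimal flat-sky sketch (with placeholder coordinates, not those of any sample object) is given below.

```python
import numpy as np

def in_spectral_region(ra, dec, center, r_acc_arcmin, sources, r_excl_arcmin=3.0):
    """True if (ra, dec) lies inside the accumulation circle and outside every
    exclusion circle.  Degrees in, small-angle flat-sky approximation."""
    def sep_arcmin(a, b):
        dra = (a[0] - b[0]) * np.cos(np.radians(b[1]))
        return 60.0 * np.hypot(dra, a[1] - b[1])
    if sep_arcmin((ra, dec), center) > r_acc_arcmin:
        return False
    return all(sep_arcmin((ra, dec), s) > r_excl_arcmin for s in sources)

center = (150.00, 2.00)                       # placeholder group centroid
bright = [(150.10, 2.05)]                     # 2RXP sources above 4e-3 cts/s (hypothetical)
print(in_spectral_region(150.02, 2.01, center, r_acc_arcmin=15.0, sources=bright))
```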
Footnote 1: [http://wave.xray.mpe.mpg.de/rosat/rra/rospspc](http://wave.xray.mpe.mpg.de/rosat/rra/rospspc) ### Basic characteristics of the X-ray spectra In this section, we present the GIS and SIS spectra of our sample objects, and quantify them through spectral \\begin{table} \\begin{tabular}{l l l l l} \\hline \\hline target & r\\({}^{e}\\) & sequence ID\\({}^{\\dagger}\\) (year) & & exposure\\({}^{\\ddagger}\\) \\\\ & & & GIS & SIS \\\\ \\hline HCG 51 & \\(10^{\\prime}\\) & 82028000(’94)\\({}^{s2}\\) & 62 & 72 \\\\ HCG 62 & \\(15^{\\prime}\\) & 81012000(’94)\\({}^{s2}\\), 86008000(’98),86008010(’98), 86008020(’98),86008020(’98) & 121 & 29 \\\\ HCG 97 & \\(10^{\\prime}\\) & 84006000(’96)\\({}^{s1}\\) & 79 & 81 \\\\ NGC 507 & \\(15^{\\prime}\\) & 6100700(’94)\\({}^{s2}\\)\\({}^{s2}\\), 61007010(’95), 63026000(’95)\\({}^{o}\\) & 80 & 25 \\\\ NGC 533 & \\(10^{\\prime}\\) & 62009000(’94)\\({}^{\\parallel}\\)\\({}^{\\parallel}\\), 62009010(’96)\\({}^{s2}\\) & 35 & 18 \\\\ NGC 1132 & \\(10^{\\prime}\\) & 65021000(’97)\\({}^{s1}\\) & 27 & 20 \\\\ NGC 1550 & \\(15^{\\prime}\\) & 87005000(’99)\\({}^{s1}\\) & 70 & 23 \\\\ NGC 1399 & \\(20^{\\prime}\\) & 80038000(’93)\\({}^{s4}\\), 80039000(’93), 81021000(’94)\\({}^{o}\\), 87006000(’99)\\({}^{o}\\), & 145 & 17 \\\\ & 87006010(’99)\\({}^{o}\\), 8700620(’99)\\({}^{o}\\),87006030(’99)\\({}^{o}\\), 87006040(’99)\\({}^{o}\\) & & \\\\ NGC 2563 & \\(12^{\\prime}\\) & 63008000(’95)\\({}^{s1}\\) & 46 & 52 \\\\ NGC 4325 & \\(10^{\\prime}\\) & 85066000(’97)\\({}^{s2}\\) & 27 & 25 \\\\ NGC 5044 & \\(15^{\\prime}\\) & 80026000-10(’93)\\({}^{s4}\\), 87002000(’99)\\({}^{o}\\), 87002010(’99)\\({}^{o}\\), 87002020(’99)\\({}^{o}\\), 87002030(’99)\\({}^{o}\\) & 111 & 19 \\\\ NGC 5846 & \\(15^{\\prime}\\) & 61012000(’94)\\({}^{s4}\\) & 36 & 28 \\\\ NGC 6329 & \\(12^{\\prime}\\) & 84047000(’96)\\({}^{s2}\\) & 37 & 34 \\\\ NGC 6521 & \\(10^{\\prime}\\) & 85034000(’97)\\({}^{s1}\\) & 36 & 19 \\\\ NGC 7619 & \\(12^{\\prime}\\) & 63017000(’95)\\({}^{s2}\\) & 56 & 59 \\\\ Pavo & \\(10^{\\prime}\\) & 81020000(’94)\\({}^{s4}\\) & 29 & 26 \\\\ RGH 80 & \\(10^{\\prime}\\) & 830120000(’95)\\({}^{s2}\\), 93007040(’95)\\({}^{o}\\),93007080(’95)\\({}^{o}\\), 93007070(’95)\\({}^{o}\\) & 67 & 42 \\\\ S49-147 & \\(15^{\\prime}\\) & 81001000(’93)\\({}^{s4}\\) & 32 & 29 \\\\ \\hline \\end{tabular} \\({}^{\\ddagger}\\) Radius of the spectral integration region. \\({}^{\\ddagger}\\) Observation IDs. The SIS data are extracted only from the observations with “\\(s\\)”. Associated number represents the CCD mode, such as 1, 2 and 4 CCD modes. Offset pointing are labeled as “\\(o\\)”. \\({}^{\\ddagger}\\) The total effective exposure of the observation, including the offset pointings. \\({}^{\\lx@sectionsign}\\) The data only from the SISO is used. The shape of the SIS1 spectra was odd, and inconsistent with the short supplemental observation (ID=61007010). \\({}^{\\parallel}\\) The SIS data from the first observation is not used. The CCD temperature was too high for the 4 CCD mode operation and the shape of the spectra is severely distorted and unreliable (e.g. Finogenov et a. 2002). \\end{table} Table 2: Log of the ASCA observations utilized in the present work. Figure 3: Background-subtracted 0.5–10 keV GIS (GIS2 plus GIS3) mosaic images of the groups of galaxies in the present sample, observed under multiple pointings. Each image is corrected for overlapping exposure, and is smoothed by a Gaussian function with \\(\\sigma=1^{\\prime}\\). 
Contours are logarithmically spaced with a factor of 1.7, starting from \\(3\\times 10^{-5}\\) cts s\\({}^{-1}\\) cm\\({}^{-2}\\) arcmin\\({}^{-2}\\). In NGC 1399 and NGC 5044, the least significant contour is deleted for simplicity. The gray thin line indicates the combined GIS field of view. Dashed circle shows the region used for spectral accumulation. Boxes represent point sources eliminated in the spectral analysis. Figure 4: The same as figure 3, but for the groups with single pointing. model fittings. A particular care is needed here, because the soft part and hard part (typically below and above \\(\\sim 2.5\\) keV, respectively) of the spectra are subject to distinct sources of errors. Obviously, the hard-band data are strongly affected by the statistical and background uncertainties. In contrast, the soft-band spectra have such high signal statistics that their uncertainties are dominated by those in the instrumental responses rather than in the background. Given these, we first quantify the IGM emission using the soft-band spectra, and then examine whether the results can explain the hard-band data or not. #### 4.2.1 Fitting procedure Figure 5 shows background-subtracted GIS (black) and SIS (red) spectra of our 18 targets, derived from the spectral accumulation regions defined above. The GIS background has been subtracted as detailed in section 3, while the SIS background using blank-sky observations (also section 3). Errors associated with the GIS data points include the systematic background uncertainties noted in section 3. In contrast, those assigned to the SIS spectra are statistical only, because in this case Poisson errors of the signal and background counts are dominant, in softer and harder energies, respectively. The SIS spectra thus reveal Fe-L, Mg-K, and Si-K lines, indicating the dominance of thermal IGM emission, whereas the GIS signals are generally detectable to energies beyond \\(\\sim 5\\) keV. Below, we apply various spectral models simultaneously to the GIS and SIS spectra of each object, utilizing energies above 1.0 keV and 0.7 keV, respectively. To handle the difference between the GIS and SIS fields of view, the model normalization (of individual components if the model is composite) is allowed to differ between the two instruments. When dealing with optically-thin thermal emission models, we classify major heavy elements into two groups in view of their origin (e.g., Matsushita 1997); one group comprises O, Ne, Na, Mg, Al, Si, S, Ar and Ca, which are mainly so-called \\(\\alpha\\)-elements, while the other group consists of Fe and Ni. Abundances of the first and second metal groups are denoted as \\(Z_{\\alpha}\\) and \\(Z_{\\rm Fe}\\), respectively. For the IGM emission calculations, we used both the vMEKAL and vAPEC codes provided by the XSPEC package. Since the two models give only minor differences, below we refer only to the vMEKAL results. The absorption column density was fixed to the value derived from HI observations (Dickey, Lockman, 1990). These parameters are also listed in table 1. The SIS and GIS responses have some residual uncertainties, which are dominated by their gain calibration errors as a function of the detector position; we quote those of the SIS and GIS as \\(\\sim 0.5\\%\\) and \\(\\sim 1\\%\\), respectively2. 
In some of the brightest objects in our sample, however, these values turned out to be insufficient to fully represent the instrumental calibration uncertainties in the soft energy band where the signals have very high statistics. As a conventional way to solve this problem, we allowed the model red-shift parameter to vary independently within \(\pm\) 0.5% and \(\pm\) 1%, for the SIS and the GIS, respectively, from that taken from the NED data base3. When analyzing the SIS data taken in the 4-CCD mode, the SIS gain tolerance was slightly relaxed to \(\pm\) 1%, to incorporate additional gain uncertainties. Footnote 2: [http://heasarc.gsfc.nasa.gov/docs/asca/cal_probs.html](http://heasarc.gsfc.nasa.gov/docs/asca/cal_probs.html) Footnote 3: [http://nedwww.ipac.caltech.edu/](http://nedwww.ipac.caltech.edu/) For the NGC 1550 group, we set free the absorption column of the SIS data, because the earliest data of this group was obtained in 1999 when the SIS degradation had already been significant; the additional absorption is expected to emulate changes in the SIS response (e.g. Hwang et al. 1999). For the NGC 6329 group, we limited our analysis to energies above 0.85 keV for the SIS. This is because a significant soft hump is observed below this energy, which is not detected by, e.g., ROSAT (e.g. Mulchaey et al. 2003) and is supposed to be instrumental. Even if the hump is included in the fit, it requires a very soft (\(kT<0.1\) keV) component, which does not affect the results in the following sections.
Figure 2: (left) The 4-8 keV GIS counting rate in the \(r<15^{\prime}\) region averaged over each month, normalized to the background modeled with our method. See text for detail. The dashed line is an analytic model fit. (right) Changes of the apogee and perigee heights of ASCA.

#### 4.2.2 Single-temperature fits to the soft-band spectra

As the first attempt, we applied a common vMEKAL model to the GIS and SIS spectra of each object, over a limited soft energy range below 2.5 keV. The results are summarized in table 3, and the obtained best-fit models are shown as histograms in figure 5. There, the models determined in the energy range below 2.5 keV are extrapolated to higher energies, to be compared with the observed data. These soft-band fits gave the IGM temperature in the range \(kT_{\rm S}=0.7-1.7\) keV, together with sub-solar metal abundances. The 0.7-2.5 keV band flux, \(F_{\rm soft}\), was obtained in the range of 1 - 20 \(\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\). In nearly half the objects, the single-temperature fits are in fact formally unacceptable. In NGC 507, NGC 1399, NGC 1550, NGC 5044, NGC 5846, and Pavo, the fit leaves significant residuals around atomic emission lines. This suggests that their spatially-integrated IGM emission cannot be described adequately with single-temperature thermal models: later in section 4.2.4, we hence introduce two-temperature models. In figure 5, we also notice that the data at energies above \(\sim 3\) keV often exceed the model extrapolation, in at least 6 objects such as HCG 62, NGC 507, NGC 1399, and RGH 80. This suggests the presence of additional harder emission components in these objects. Actually, if we fit the GIS/SIS spectra of these objects over the whole energy band including data points above 2.5 keV, the reduced chi-squared further increases by \(\sim\)0.3 or more.
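The joint GIS+SIS fits of sections 4.2.1-4.2.2 amount to minimizing a single \(\chi^{2}\) in which the temperature and abundances are shared between the two instruments while each instrument keeps its own normalization. The toy least-squares below illustrates only that bookkeeping, with a generic stand-in for the folded vMEKAL model and fabricated data arrays; it is not an XSPEC script.

```python
import numpy as np
from scipy.optimize import minimize

def toy_model(E, kT, Z, norm):
    """Stand-in for a folded thermal spectrum: continuum plus a Z-scaled Fe-L-like bump."""
    return norm * (np.exp(-E / kT) + Z * np.exp(-0.5 * ((E - 1.0) / 0.1) ** 2))

def chi2(p, datasets):
    kT, Z, norm_gis, norm_sis = p
    return sum(np.sum(((rate - toy_model(E, kT, Z, n)) / err) ** 2)
               for (E, rate, err), n in zip(datasets, (norm_gis, norm_sis)))

# Fabricated spectra just to exercise the machinery (not real GIS/SIS rates)
rng = np.random.default_rng(1)
E = np.linspace(0.7, 2.5, 40)

def make_dataset(norm):
    rate = toy_model(E, 1.2, 0.4, norm) + rng.normal(0, 0.01, E.size)
    return E, rate, np.full(E.size, 0.01)

datasets = [make_dataset(1.0), make_dataset(0.7)]    # GIS-like and SIS-like sets
best = minimize(chi2, x0=[1.0, 0.3, 0.8, 0.8], args=(datasets,), method="Nelder-Mead")
print(best.x)   # shared kT and Z; two free normalizations absorb the field-of-view difference
```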
#### 4.2.3 Properties of the hard-band spectra

In order to characterize the spectra in the hard band, we next analyzed the 2.5-8 keV GIS data of our sample objects. The SIS data were not incorporated, since the SIS background systematics above 2.5 keV is not well understood. We employed the same single-temperature vMEKAL model, but fixed \(Z_{\alpha}\) and \(Z_{\rm Fe}\) both to 0.5 solar for simplicity, because this energy range is relatively devoid of strong metal lines when the plasma temperature is \(\lesssim 2\) keV. The red-shift is also fixed at the optical value. Then, the temperature in this energy range, denoted \(kT_{\rm H}\), is to be determined by the continuum shape. Since the GIS data above 2.5 keV are dominated by the background (NXB+CXB), errors associated with \(kT_{\rm H}\) in this case are expected to come from three major sources: photon counting statistics, systematic errors in the NXB subtraction, and those of the CXB. The formal "fitting error" takes into account the first two factors, because the NXB estimation errors are already included in the error bars of individual GIS data points (section 4.2.1). However, the third factor is not. Accordingly, we estimated the CXB contribution as described at the end of this subsubsection, and present the resulting errors in addition to the fitting errors for \(kT_{\rm H}\). The hard-band temperatures \(kT_{\rm H}\), obtained in this way, are listed in table 3, and are compared in figure 6 with the soft-band temperature \(kT_{\rm S}\). Although the errors (taking into account all the three sources) are large, we find that \(kT_{\rm H}\) is generally higher than \(kT_{\rm S}\), even reaching \(\sim 2kT_{\rm S}\) in some objects (e.g., HCG 62 and RGH 80). This result reinforces the positive data excess toward higher energies, which is suggested by figure 5. For reference, the obtained \(kT_{\rm H}\) does not change significantly even if the 2.5-4.0 keV SIS data are included in the fit. The third factor causing errors in \(kT_{\rm H}\), namely sky-to-sky fluctuations in the CXB surface brightness \(I_{\rm CXB}\), occurs as the number of faint objects that constitute the CXB fluctuates. Assuming a simple Euclidean log\(N\)-log\(S\) relation as \(N(>S)\propto S^{-1.5}\), where \(S\) is the source flux and \(N(>S)\) is the number of sources with fluxes higher than \(S\), the CXB brightness fluctuation \(\sigma_{\rm CXB}\) is described as \(\sigma_{\rm CXB}/I_{\rm CXB}\propto\Omega_{\rm e}^{-0.5}S_{\rm c}^{0.25}\). Here, \(S_{\rm c}\) is the upper flux bound of the individually eliminated sources, which is \(\sim 2\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) (2-10 keV) in this analysis, and \(\Omega_{\rm e}\) is the effective solid angle of the detector. For the ASCA GIS, \(\Omega_{\rm e}\) has been calculated by Ishisaki (1997), by taking into account the ASCA vignetting function. In the present case, a typical data accumulation radius of \(15^{\prime}\) gives \(\Omega_{\rm e}=0.1\) deg\({}^{2}\). By scaling the HEAO-1 A2 result (Shafer, Fabian 1998) with these values, we find that the 2.5-8 keV CXB typically fluctuates from field to field by 7.9% (1\(\sigma\)) in the present case. Then, by fitting the GIS spectra with the CXB level artificially changed (by a level corresponding to the 90% range of its fluctuation), we evaluated the error propagation from the CXB brightness to \(kT_{\rm H}\).
Figure 5: The GIS (black) and SIS (red) spectra of our sample objects, jointly fitted with a vMEKAL model in the energy below 2.5 keV. The models are extrapolated to higher energies. All spectra are plotted to the same scale.
Figure 6: The best fit temperature from the hard-band fitting (\(kT_{\rm H}\)), compared to that from the soft-band fitting (\(kT_{\rm S}\)). Statistical 90% errors are plotted. The solid line indicates \(kT_{\rm H}=kT_{\rm S}\), while the dashed line \(kT_{\rm H}=2kT_{\rm S}\).
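The fluctuation scaling just described is a one-line calculation once reference values are fixed. In the sketch below the reference fluctuation, beam size and excluded-source threshold are placeholders standing in for the adopted HEAO-1 A2 numbers; with the GIS value \(\Omega_{\rm e}\simeq 0.1\) deg\({}^{2}\) and \(S_{\rm c}\simeq 2\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) it should return a figure close to the \(\sim\)7.9% quoted above.

```python
def cxb_fluctuation(omega_e_deg2, s_cut, sigma_ref=0.028, omega_ref_deg2=15.8, s_ref=8e-11):
    """Fractional 1-sigma CXB fluctuation, scaled as Omega_e^-0.5 * S_c^0.25.

    sigma_ref at (omega_ref_deg2, s_ref) is the reference measurement being rescaled;
    the defaults are placeholders in the spirit of the HEAO-1 A2 result, not the
    exact numbers adopted in this work.
    """
    return sigma_ref * (omega_e_deg2 / omega_ref_deg2) ** -0.5 * (s_cut / s_ref) ** 0.25

# GIS accumulation region of ~15' radius -> Omega_e ~ 0.1 deg^2 (Ishisaki 1997)
print(cxb_fluctuation(omega_e_deg2=0.1, s_cut=2e-13))
```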
In the present case, a typical data accumulation radius of \\(15^{\\prime}\\) gives \\(\\Omega_{\\rm e}=0.1\\) deg\\({}^{2}\\). By scaling the HEAO-1 A2 result (Shafer, Fabian 1983) with these values, we find that the 2.5-8 keV CXB fluctuates from field to field by 7.9% (1\\(\\sigma\\)) in the present case. Then, by fitting the GIS spectra with the CXB level artificially changed (by an amount corresponding to the 90% range of its fluctuation), we evaluated the error propagation from the CXB brightness to \\(kT_{\\rm H}\\).

#### 4.2.4 Two-temperature IGM modeling

The single-temperature vMEKAL fit to the soft band spectra ended up unacceptable in 10 objects (section 4.2.2): before examining the hard excess suggested by the large difference between \\(kT_{\\rm S}\\) and \\(kT_{\\rm H}\\) in several objects, we need to arrive at an acceptable IGM modeling of all objects. These fit failures are most likely caused by slight deviations from the assumed isothermality, because temperature gradients in the X-ray emitting plasma are rather common among groups and clusters of galaxies. Even in such a case, two temperatures are generally known to be sufficient to describe the integrated X-ray spectrum (e.g., Fukazawa 1997). We applied a two-temperature vMEKAL model (2-vMEKAL model) to the GIS and SIS spectra of the 10 objects, again in the energy range below 2.5 keV. The two vMEKAL components were allowed to take free temperatures and normalizations, but were constrained to have the same abundance parameters. To avoid strong couplings between them, we constrained the hotter temperature (\\(kT_{2}\\)) to stay between 0.8 and 25 keV; the former is a typical cooler component temperature \\(kT_{1}\\), while the latter is 10 times the upper bound of the employed energy range.

Results of the 2-vMEKAL fits are summarized in table 4. Spectra of HCG 62, NGC 5846 and Pavo have been fitted successfully by the 2-vMEKAL model when the abundances are grouped as \\(Z_{\\alpha}\\) and \\(Z_{\\rm Fe}\\). When the abundances of Mg, Al, Si, and S are set free, S49-147 gave an acceptable fit. With the same model, HCG 51, NGC 507 and NGC 533 gave marginally (at 99% confidence) acceptable fits, although \\(kT_{2}\\) is not well determined in the latter two objects. The remaining 3 objects (NGC 1399, NGC 1550 and NGC 5044) have such high statistics around 1 keV that the fits remained unacceptable even when the Mg, Al, Si, and S abundances were set free. However, the fit itself improved greatly upon adding the second component, implying that their spectra clearly prefer the 2-vMEKAL model to the vMEKAL model. Adding a third emission component does not improve the fit significantly. The fit failure in these objects is partially due to a slight discrepancy between the SIS and the GIS data, at \\(\\sim 1.2\\) keV and \\(\\sim 2.1\\) keV, presumably due to calibration errors. (The same features are often present in other observations, but are usually negligible compared to data statistics.) We hence decided to simply ignore the energy bands of 1.15-1.25 keV and 2.1-2.2 keV. Then the fit improved significantly, and became marginally acceptable at the 99% level in NGC 1399 and NGC 1550, while remaining unacceptable at the 99.6% level in NGC 5044.

### Excess Hard X-ray Signals

Having fixed the IGM emission using the soft-band data, let us examine the significance of the suggested excess hard X-ray emission by counting excess photons, employing the energy band from 4 to 8 keV. 
In this range, the GIS still has an effective area of \\(40-80\\) cm\\({}^{2}\\), but contribution from the IGM emission is expected to be very small. In fact, 70-99 % of the raw count rate in this band is from the background, of which the CXB and the NXB have comparable shares. In table 5, the 3rd column represents the raw (background inclusive) 4-8 keV GIS count rate of each object accumulated over the specified data integration region (figure 3 and figure 4), together with the purely statistical \\(1\\sigma\\) errors. The 4th column shows the expected CXB contribution, of which the errors refer to \\(1\\sigma\\) sky-to-sky fluctuation calculated in the same way as in section 4.2.3 (e.g., 7.9% for a region of \\(15^{\\prime}\\) in radius). The 5th column represents the estimated NXB count rate, derived as described in section 3. Since the NXB templates have high photon \\begin{table} \\begin{tabular}{l|c c c c c c} \\hline \\hline target & \\(kT_{\\rm S}\\)* & Abundance \\({}^{\\dagger}\\) & \\(\\chi^{2}\\)/dof & \\(F_{\\rm soft}\\)\\({}^{\\ddagger}\\) & \\(kT_{\\rm H}\\)\\({}^{\\lx@sectionsign}\\) & note \\({}^{\\dagger}\\) \\\\ & (keV) & \\(Z_{\\alpha}(Z_{\\odot})\\) & \\(Z_{\\rm Fe}(Z_{\\odot})\\) & & & \\\\ \\hline HCG 51 & \\(1.37^{+0.03}_{-0.03}\\) & \\(0.44^{+0.13}_{-0.12}\\) & \\(0.36^{+0.07}_{-0.05}\\) & 72.0/52 & 1.9 & \\(0.96^{+0.50+0.74}_{-0.31-0.41}\\) & \\(\\star\\) \\\\ HCG 62 & \\(0.95^{+0.03}_{-0.04}\\) & \\(0.31^{+0.09}_{-0.07}\\) & \\(0.15^{+0.03}_{-0.03}\\) & 70.0/52 & 4.0 & \\(2.47^{+0.61+0.55}_{-0.46-0.57}\\) & \\(\\star\\) \\\\ HCG 97 & \\(1.03^{+0.04}_{-0.06}\\) & \\(0.31^{+0.16}_{-0.12}\\) & \\(0.19^{+0.05}_{-0.04}\\) & 59.5/52 & 0.9 & \\(1.88^{+1.74+1.25}_{-0.84-1.15}\\) & \\\\ NGC 507 & \\(1.35^{+0.03}_{-0.03}\\) & \\(0.62^{+0.12}_{-0.10}\\) & \\(0.41^{+0.06}_{-0.05}\\) & 128.8/96 & 7.8 & \\(1.86^{+0.19+0.17}_{-0.19}\\) & \\(\\star\\) \\\\ NGC 533 & \\(1.23^{+0.08}_{-0.10}\\) & \\(0.52^{+0.26}_{-0.18}\\) & \\(0.31^{+0.11}_{-0.08}\\) & 80.7/52 & 2.7 & \\(1.30^{+0.65+0.66}_{-0.46-0.57}\\) & \\(\\star\\)\\(\\star\\) \\\\ NGC 1132 & \\(1.08^{+0.04}_{-0.04}\\) & \\(0.33^{+0.15}_{-0.11}\\) & \\(0.27^{+0.06}_{-0.05}\\) & 53.9/52 & 2.7 & \\(1.09^{+0.51+0.48}_{-0.36-0.37}\\) & \\\\ NGC 1399 & \\(1.31^{+0.02}_{-0.04}\\) & \\(0.56^{+0.03}_{-0.04}\\) & \\(0.35^{+0.03}_{-0.04}\\) & 239.9/104 & 14.3 & \\(1.84^{+0.09+0.09}_{-0.11-0.11}\\) & \\(\\star\\)\\(\\star\\) \\\\ NGC 1550 & \\(1.38^{+0.02}_{-0.02}\\) & \\(0.58^{+0.07}_{-0.06}\\) & \\(0.42^{+0.04}_{-0.03}\\) & 191.9/103 & 19.1 & \\(1.52^{+0.07+0.09}_{-0.06-0.08}\\) & \\(\\star\\)\\(\\star\\) \\\\ NGC 2563 & \\(1.32^{+0.10}_{-0.07}\\) & \\(0.64^{+0.31}_{-0.21}\\) & \\(0.28^{+0.14}_{-0.07}\\) & 45.7/52 & 1.6 & \\(1.73^{+0.94+0.84}_{-0.57-0.80}\\) & \\\\ NGC 4325 & \\(1.02^{+0.02}_{-0.02}\\) & \\(0.42^{+0.16}_{-0.12}\\) & \\(0.32^{+0.07}_{-0.05}\\) & 61.0/52 & 4.6 & \\(1.20^{+0.60+0.59}_{-0.41-0.43}\\) & \\\\ NGC 5044 & \\(0.99^{+0.01}_{-0.01}\\) & \\(0.39^{+0.04}_{-0.04}\\) & \\(0.27^{+0.02}_{-0.02}\\) & 261.3/104 & 23.4 & \\(1.36^{+0.11+0.12}_{-0.11-0.12}\\) & \\(\\star\\)\\(\\star\\) \\\\ NGC 5846 & \\(0.67^{+0.02}_{-0.02}\\) & \\(0.40^{+2.00}_{-0.11}\\) & \\(0.18^{+0.05}_{-0.03}\\) & 136.8/98 & 7.0 & \\(2.18^{+2.34+1.24}_{-0.89-1.21}\\) & \\(\\star\\)\\(\\star\\) \\\\ NGC 6329 & \\(1.56^{+0.15}_{-0.15}\\) & \\(0.63^{+0.43}_{-0.29}\\) & \\(0.39^{+0.22}_{-0.17}\\) & 38.2/47 & 1.6 & \\(2.42^{+1.56+0.92}_{-0.86-1.00}\\) & \\\\ NGC 6521 & \\(1.74^{+0.35}_{-0.29}\\) & \\(0.69^{+0.45}_{-0.33}\\) & \\(0.38^{+0.24}_{-0.18}\\) & 
36.5/52 & 1.6 & \\(1.85^{+0.42+0.50}_{-0.45}\\) & \\\\ NGC 7619 & \\(0.95^{+0.04}_{-0.04}\\) & \\(0.42^{+0.15}_{-0.13}\\) & \\(0.23^{+0.05}_{-0.05}\\) & 55.9/52 & 2.7 & \\(1.69^{+1.06+0.96}_{-0.63-0.92}\\) & \\\\ Pavo & \\(0.54^{+0.06}_{-0.05}\\) & \\(0.17^{+0.18}_{-0.10}\\) & \\(0.06^{+0.02}_{-0.03}\\) & 70.1/52 & 1.9 & \\(0.73^{+1.13+1.30}_{-0.43-0.36}\\) & \\(\\star\\) \\\\ RGH 80 & \\(1.09^{+0.08}_{-0.03}\\) & \\(0.50^{+0.17}_{-0.14}\\) & \\(0.18^{+0.04}_{-0.03}\\) & 50.9/52 & 1.3 & \\(2.62^{+1.09
statistics (section 3), the quoted errors are 1\\(\\sigma\\) systematic, which is typically 3.2% and 2.0% for 40 ks and 120 ks observations, respectively. The 6th column gives the IGM contribution to the 4-8 keV band, calculated by extrapolating the soft-band determined IGM model. We used the 1-vMEKAL fit if it is acceptable, and the 2-vMEKAL fit otherwise. Although uncertainties in the IGM contribution have a composite origin, they are dominated by statistical errors in the soft-band temperature determination, because the error is amplified as the model prediction is extrapolated toward the higher 4-8 keV range. We hence calculated how the errors associated with the soft-band temperature determinations propagate into those in the 4-8 keV counts. The original 90% confidence errors were converted into 1\\(\\sigma\\) values assuming a Gaussian distribution.

The 7th column in table 5 lists the 4-8 keV excess count rates of the 18 objects, derived by subtracting the CXB, NXB, and IGM from the raw data. There, the statistical and systematic errors are given separately; the former comes from that of the raw data, while the latter is the quadrature sum of those associated with the three components subtracted. The last column represents the significance of the excess counts, calculated against the overall uncertainty, which is the quadrature sum of the statistical and systematic errors. These values are also presented in figure 7.

Thus, the hard-band excess above the soft-band determined IGM model is insignificant (\\(\\lesssim 1\\sigma\\) level) in 10 out of the 18 objects, and is marginal (\\(\\sim 1.5\\sigma\\) level) in 4 objects (NGC 1550, NGC 5846, NGC 6329 and S49-147). In contrast, the significance is higher than 2\\(\\sigma\\) in 3 groups of galaxies, namely HCG 62, NGC 1399, and RGH 80. Among them, the soft band IGM modeling of the first and the last objects is statistically acceptable. In the case of NGC 1399, the somewhat poor IGM modeling (acceptable at 98%) prevents us from drawing a firm conclusion, but the hard excess could also be present because its nominal significance is rather high (4\\(\\sigma\\)). Properties of these 3 objects are discussed in detail in the next sub-section. The NGC 5044 group also shows an excess significance of 2.6\\(\\sigma\\). The IGM modeling in this source, however, is unacceptable even at 99% confidence (with \\(\\chi^{2}/{\\rm dof}=116.7/79\\)), so that the errors associated with the IGM contribution could be underestimated. In addition, the excess hard emission itself is weak. We estimated the 2-10 keV hard band flux \\(F_{\\rm hard}\\), assuming a power-law model with photon index \\(\\Gamma\\) fixed at 2.0, from the excess hard counts in table 5. The value of \\(F_{\\rm hard}\\) thus obtained amounts to only 4% of \\(F_{\\rm soft}\\). Thus, we conclude that the hard excess signal suggested in NGC 5044 is not significant enough in this study. 
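The bookkeeping behind table 5 and figure 7 can be reproduced with a few lines of Python. The sketch below is only an illustration of the error treatment described above; the HCG 62 rates are taken from table 5 (in the units used there), with the statistical error attached to the raw rate and the systematic errors to the three subtracted components.

```python
import math

def excess_significance(raw, cxb, nxb, igm):
    """Each argument is (rate, error); raw carries the statistical error,
    cxb/nxb/igm carry the systematic ones (table 5 convention)."""
    excess = raw[0] - cxb[0] - nxb[0] - igm[0]
    stat = raw[1]
    syst = math.sqrt(cxb[1] ** 2 + nxb[1] ** 2 + igm[1] ** 2)
    return excess, stat, syst, excess / math.sqrt(stat ** 2 + syst ** 2)

# HCG 62, values from table 5:
print(excess_significance((20.26, 0.29), (8.24, 0.66), (9.21, 0.18), (0.81, 0.15)))
# -> excess = 2.00 +/- 0.29 (stat) +/- 0.70 (sys), i.e. a ~2.6 sigma detection
```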
### Detailed analysis of selected objects

In this section, we analyze in detail the 3 objects, HCG 62, NGC 1399 and RGH 80, which have been found in section 4.3.2 to require a significant hard X-ray component, in addition to the thermal IGM model (either vMEKAL or 2-vMEKAL). As already reported by Fukazawa et al. (2001), the compact group HCG 62 shows a strong excess signal, and the IGM contribution to its 4-8 keV excess counts is only \\(\\sim\\) 25%. Therefore, this object provides an ideal benchmark test for our work, particularly in comparison with Fukazawa et al. (2001). We study RGH 80 as well, because it also shows evidence of a strong hard excess signal, including the high value of \\(kT_{\\rm H}=2.62\\) keV. In the case of NGC 1399, the soft band IGM modeling is only marginally acceptable and also the value of \\(kT_{\\rm H}\\) is less than 2 keV, while the excess is still significant in the previous analysis. Thus, this object also requires a detailed analysis.

\\begin{table} \\begin{tabular}{l c c c c c c c} \\hline \\hline target & Model\\({}^{\\star}\\) & \\(kT_{1}\\)\\({}^{\\dagger}\\) & \\(kT_{2}\\)\\({}^{\\ddagger}\\) & Abundance & \\(\\chi^{2}/{\\rm dof}\\) & note\\({}^{\\lx@sectionsign}\\) & \\(-\\Delta\\chi^{2}\\|\\) \\\\ & & (keV) & (keV) & \\(Z_{\\rm Fe}(Z_{\\odot})\\) & & & \\\\ \\hline
HCG 51 & 6Z & \\(0.98^{+0.15}_{-0.23}\\) & \\(1.76^{+0.31}_{-0.38}\\) & \\(0.70^{+0.60}_{-0.33}\\) & 59.7/45 & \\(\\star\\) & (12.3) \\\\
HCG 62 & 2Z & \\(0.78^{+0.07}_{-0.23}\\) & \\(1.27^{+0.33}_{-0.80}\\) & \\(0.24^{+0.08}_{-0.07}\\) & 50.6/49 & & 19.4 \\\\
NGC 507 & 6Z & \\(0.71^{+0.70}_{-0.29}\\) & \\(1.36^{+2.50}_{-0.07}\\) & \\(0.46^{+0.33}_{-0.11}\\) & 109.7/86 & \\(\\star\\) & (19.1) \\\\
NGC 533 & 6Z & \\(1.12^{+0.11}_{-0.78}\\) & \\(1.42^{+1.52}_{-0.80}\\) & \\(0.34^{+0.16}_{-0.08}\\) & 68.1/45 & \\(\\star\\) & (12.6) \\\\
NGC 1399 & 6Z- & \\(0.82^{+0.07}_{-0.14}\\) & \\(1.43^{+0.17}_{-0.10}\\) & \\(0.44^{+0.14}_{-0.05}\\) & 107.4/79 & \\(\\star\\) & 33.5 \\\\
NGC 1550 & 6Z- & \\(0.95^{+0.33}_{-0.35}\\) & \\(1.44^{+0.14}_{-0.10}\\) & \\(0.52^{+0.04}_{-0.14}\\) & 105.9/78 & \\(\\star\\) & (15.9) \\\\
NGC 5044 & 6Z- & \\(0.67^{+0.05}_{-0.11}\\) & \\(1.11^{+0.05}_{-0.05}\\) & \\(0.35^{+0.05}_{-0.02}\\) & 116.7/79 & \\(\\star\\star\\) & 61.0 \\\\
NGC 5846 & 2Z & \\(0.63^{+0.02}_{-0.02}\\) & \\(1.28^{+0.07}_{-0.45}\\) & \\(0.36^{+0.04}_{-0.07}\\) & 92.3/92 & & 44.5 \\\\
Pavo & 2Z & \\(0.39^{+0.05}_{-0.07}\\) & \\(1.56^{+0.84}_{-0.24}\\) & \\(0.58^{+8.35}_{-0.37}\\) & 40.9/49 & & 29.2 \\\\
S49-147 & 6Z & \\(0.60^{+0.19}_{-0.60}\\) & \\(1.15^{+0.21}_{-0.80}\\) & \\(0.09^{+0.06}_{-0.05}\\) & 54.3/45 & & 27.4 \\\\ \\hline \\end{tabular}
\\({}^{\\star}\\) Model ID. “2Z” means that the abundances are grouped into two (\\(Z_{\\alpha}\\) and \\(Z_{\\rm Fe}\\)). “6Z” means models with Mg, Al, Si and S set free, in addition to “2Z”. “6Z\\(-\\)” means the same as “6Z”, but with the two narrow energy bands ignored. See text. \\({}^{\\dagger}\\) Temperature of the cooler component. \\({}^{\\ddagger}\\) Temperature of the hotter component. Parameters hitting the limit (0.8–25 keV) are presented with \\(>\\) or \\(<\\). \\({}^{\\lx@sectionsign}\\) Targets in which the soft-band fitting is not statistically acceptable at 90% and 99% confidence are indicated with “\\(\\star\\)” and “\\(\\star\\star\\)”, respectively. \\({}^{\\|}\\) Improvement in \\(\\chi^{2}\\) by introducing the second thermal component. Presented with parentheses, if the improvement is not significant with the F-test at 1%. 
\\end{table} Table 4: Results of the 2-vMEKAL model fitting in energies below 2.5 keV. #### 4.4.1 The HCG 62 group This is a bright group of galaxies, emitting a 0.7-2.5 keV flux of \\(4.7\\times 10^{-12}\\) erg s\\({}^{-1}\\) cm\\({}^{-2}\\). To investigate the nature of the excess hard emission in this source, we analyzed the full band spectra. The GIS spectra were fitted over the 1-8 keV range, while those of the SIS were used up to 4.5 keV determined from the source brightness and background uncertainty. The 2-vMEKAL model provided unacceptable fit to the full band data with \\(\\chi^{2}/\\)dof = 84.4/66. By adding a power-law component with a photon index \\(\\Gamma\\) fixed at 2.0 (2-vMEKAL+PL model), an acceptable fit was obtained with \\(\\chi^{2}/\\)dof = 61.3/64. According to an \\(F\\)-test, the probability of this improvement being by chance is less than \\(10^{-4}\\). The results of both fits are presented in table 6, and the fit with the latter model is presented in figure 8. The power-law flux in the 2-10 keV band, \\(F_{\\rm hard}\\), amounts to \\(\\sim 30\\%\\) of the IGM component flux in the 0.7-2.5 keV band, \\(F_{\\rm soft}^{\\rm IGM}\\). These results generally agree with Fukazawa et al. (2001). The excess above the 2-vMEKAL fit can also be reproduced by a bremsstrahlung model instead of the power-law, although the bremsstrahlung temperature becomes rather high as \\(4.0^{+8.3}_{-1.3}\\) keV. Thus, the data clearly requires a non-thermal, or less likely, very hot thermal \\begin{table} \\begin{tabular}{l l l l l l l l l} \\hline \\hline target & Model\\({}^{\\star}\\) & \\(kT_{1}\\)\\({}^{\\dagger}\\) & \\(kT_{2}\\)\\({}^{\\ddagger}\\) & Abundance & \\(F_{\\rm hard}\\)\\({}^{\\lx@sectionsign}\\) & \\(F_{\\rm soft}^{\\rm IGM}\\) & \\(\\chi^{2}/\\)dof & \\(\\Delta\\chi^{2}\\)\\({}^{\\#}\\) \\\\ & & (keV) & (keV) & \\(Z_{\\rm Fe}(Z_{\\odot})\\) & & & & \\\\ \\hline HCG 62 & 2T/2Z & \\(0.94^{+0.04}_{-0.04}\\) & \\(17.5^{+0.05}_{-11.8}\\) & \\(0.17^{+0.04}_{-0.01}\\) & - & 4.7 & 84.4/66 & - \\\\ & 2T/2Z+PL & \\(0.71^{+0.12}_{-0.08}\\) & \\(1.20^{+0.13}_{-0.14}\\) & \\(0.30^{+0.15}_{-0.07}\\) & \\(1.34^{+0.25}_{-0.26}\\pm 0.53\\) & 4.4 & 61.3/64 & 23.1 \\\\ NGC 1399 & 2T/6Z & \\(0.88^{+0.19}_{-0.03}\\) & \\(1.79^{+0.21}_{-0.09}\\) & \\(0.78^{+0.08}_{-0.15}\\) & - & 17.3 & 224.8/173 & - \\\\ & 2T/6Z+PL & \\(0.84^{+0.04}_{-0.06}\\) & \\(1.58^{+0.11}_{-0.13}\\) & \\(0.63^{+0.12}_{-0.11}\\) & \\(1.90^{+0.64}_{-0.61}\\pm 0.45\\) & 16.5 & 195.2/171 & 29.6 \\\\ RGH 80 & 1T/2Z & \\(1.26^{+0.05}_{-0.05}\\) & - & \\(0.22^{+0.06}_{-0.04}\\) & - & 1.8 & 111.5/66 & - \\\\ & 1T/2Z+PL & \\(1.08^{+0.03}_{-0.04}\\) & - & \\(0.32^{+0.21}_{-0.10}\\) & \\(0.69^{+0.13}_{-0.14}\\pm 0.37\\) & 1.3 & 58.2/64 & 53.3 \\\\ \\hline \\end{tabular} \\({}^{\\star}\\) Model ID. “1T” means vMEKAL, and “2T” means 2-vMEKAL models. “2Z” means that the metal abudances are grouped into two (\\(Z_{\\alpha}\\) and \\(Z_{\\rm Fe}\\)). “6Z” means models with Mg, Al, Si and S set free, in addition to ”2Z”. “PL” mean the \\(\\Gamma=2.0\\) fixed power-law component. \\({}^{\\dagger}\\) Temperature of the cooler component. \\({}^{\\ddagger}\\) Temperature of the hotter component, when fitted with the 2-vMEKAL model. \\({}^{\\lx@sectionsign}\\) 2-10 keV flux of the power-law component, in \\(10^{-12}\\) erg s\\({}^{-1}\\) cm\\({}^{-2}\\). The second error represents the value of 90% CXB fluctuation. \\({}^{\\dagger}\\) 0.7-2.5 keV flux of the IGM component. 
(\\(10^{-12}\\) erg s\\({}^{-1}\\) cm\\({}^{-2}\\)) \\({}^{\\#}\\) Improvement of \\(\\chi^{2}\\) by adding a power-law component (addition of 2 parameters). \\end{table} Table 6: Results from fitting to the full band spectra with a model with and without a power-law. \\begin{table} \\begin{tabular}{l l l l l l l l} \\hline \\hline target & Model\\({}^{\\star}\\) & Data\\({}^{\\dagger}\\) & CXB \\({}^{\\ddagger}\\) & NXB \\({}^{\\lx@sectionsign}\\) & IGM \\({}^{\\dagger}\\) & Excess \\({}^{\\#}\\) & Sigma\\({}^{\\star\\star}\\) \\\\ \\hline HCG 51 & 2T & \\(10.96\\pm 0.30\\) & \\(6.12\\pm 0.73\\) & \\(4.76\\pm 0.12\\) & \\(1.15\\pm 0.45\\) & \\(1.07\\pm 0.30\\pm 0.87\\) & \\(-1.17\\) \\\\ HCG 62 & 2T & \\(20.26\\pm 0.29\\) & \\(8.24\\pm 0.66\\) & \\(9.21\\pm 0.18\\) & \\(0.81\\pm 0.15\\) & \\(2.00\\pm 0.29\\pm 0.70\\) & 2.64 \\\\ HCG 97 & 1T & \\(11.46\\pm 0.27\\) & \\(5.50\\pm 0.66\\) & \\(5.48\\pm 0.13\\) & \\(0.17\\pm 0.01\\) & \\(0.32\\pm 0.27\\pm 0.67\\) & 0.44 \\\\ NGC 507 & 2T & \\(20.91\\pm 0.36\\) & \\(7.42\\pm 0.59\\) & \\(8.97\\pm 0.21\\) & \\(3.07\\pm 1.54\\) & \\(1.45\\pm 0.36\\pm 1.66\\) & 0.85 \\\\ NGC 533 & 2T & \\(12.01\\pm 0.42\\) & \\(5.76\\pm 0.69\\) & \\(5.28\\pm 0.17\\) & \\(0.55\\pm 0.09\\) & \\(0.42\\pm 0.42\\pm 0.72\\) & 0.51 \\\\ NGC 1132 & 1T & \\(10.21\\pm 0.43\\) & \\(5.64\\pm 0.68\\) & \\(4.53\\pm 0.17\\) & \\(0.52\\pm 0.03\\) & \\(-0.48\\pm 0.43\\pm 0.70\\) & \\(-0.58\\) \\\\ NGC 1399 & 2T & \\(26.75\\pm 0.30\\) & \\(8.68\\pm 0.52\\) & \\(9.84\\pm 0.18\\) & \\(4.40\\pm 0.74\\) & \\(3.83\\pm 0.30\\pm 0.92\\) & 3.95 \\\\ NGC 1550 & 2T & \\(33.85\\pm 0.49\\) & \\(11.61\\pm 0.93\\) & \\(10.63\\pm 0.27\\) & \\(9.38\\pm 1.28\\) & \\(2.24\\pm 0.49\\pm 1.61\\) & 1.33 \\\\ NGC 2563 & 1T & \\(17.13\\pm 0.43\\) & \\(7.85\\pm 0.79\\) & \\(7.70\\pm 0.22\\) & \\(0.75\\pm 0.06\\) & \\(0.82\\pm 0.43\\pm 0.82\\) & 0.89 \\\\ NGC 4325 & 1T & \\(13.07\\pm 0.49\\) & \\(6.62\\pm 0.79\\) & \\(5.69\\pm 0.21\\) & \\(0.71\\pm 0.03\\) & \\(0.04\\pm 0.49\\pm 0.82\\) & 0.04 \\\\ NGC 5044 & 2T & \\(22.40\\pm 0.33\\) & \\(8.10\\pm 0.65\\) & \\(9.01\\pm 0.19\\) & \\(3.33\\pm 0.11\\) & \\(1.96\\pm 0.33\\pm 0.68\\) & (2.58) \\\\ NGC 5846 & 2T & \\(25.15\\pm 0.59\\) & \\(12.59\\pm 1.01\\) & \\(10.47\\pm 0.34\\) & \\(0.35\\pm 0.03\\) & \\(1.74\\pm 0.59\\pm 1.06\\) & 1.43 \\\\ NGC 6329 & 1T & \\(17.47\\pm 0.49\\) & \\(8.09\\pm 0.81\\) & \\(7.02\\pm 0.23\\) & \\(1.14\\pm 0.17\\) & \\(1.22\\component. If \\(\\Gamma\\) is set free in the above 2-vMEKAL+PL fit, we obtain \\(\\Gamma=2.17^{+0.28}_{-0.53}\\) with \\(\\chi^{2}/{\\rm dof}=60.8/63\\), implying that the improvement is insignificant. For reference, the 90% fluctuation level of the CXB brightness is separately presented in the error column of \\(F_{\\rm hard}\\) in table 6. Again, the CXB fluctuation is insufficient to explain the hard excess, confirming the results presented in section 4.3. Figure 9 shows the azimuthally-averaged radial profile of the 4.0-8.0 keV GIS count rate from HCG 62, compared with the background profile obtained as described in section 3. Regions around the five point sources in the HCG 62 field are excluded. Thus, the comparison reconfirms the highly significant hard X-ray signal, of which only \\(\\sim 25\\)% can be accounted for by the thermal IGM emission as revealed by figure 8. The emission is clearly extended and detectable up to \\(\\sim 15^{\\prime}\\), beyond which it vanishes in agreement with the background scaling correction employed in section 3. 
To further examine the consistency between the spectral (figure 8) and spatial (figure 9) results, we sorted the spectra into three concentric annuli, and performed the 2-vMEKAL+PL fit to each. As described in table 7, the power-law component is required in all annuli. The central \\(3^{\\prime}\\) of the group is well fitted with a vMEKAL+PL model, but 2-vMEKAL+PL fit provides significantly improved results in view of an \\(F\\)-test. The outer regions can be fitted with a single vMEKAL model on condition that the \\(\\Gamma=2.0\\) power-law is added. Therefore, the extended nature of the excess hard X-ray emission, indicated by figure 9, is supported by the spectral analysis as well. Buote (2000) analyzed the same ASCA data accumulated from the central \\(\\sim 3^{\\prime}\\) region, and fitted the spectra with a combination of 0.7 keV and 1.4 keV thermal components with \\(\\sim 1\\) solar metal abundances. While the reported temperatures agree with our measurements, he did not mention the excess hard component in his paper, possibly because he used the energy range below \\(\\sim 5\\) keV. This limited energy range, in turn, was required presumably by a more conventional background subtraction method employed. Actually, we can reproduce his results if we ignore the energy range above 4 keV and set the absorption column free, as he did. Recently, Morita et al. (2006) analyzed the XMM-Newton and Chandra data of HCG 62, and obtained a radial temperature profile which is \\(0.7-0.8\\) keV within \\(1^{\\prime}\\) and is increasing to 1.4 keV at \\(2^{\\prime}-4^{\\prime}\\). Again, the measured temperatures are consistent with our 2-vMEKAL+PL results from the \\(r<3^{\\prime}\\) spectra. They did not detect the excess hard X-rays in their data. Instead, they included the emission by scaling the parameters of Fukazawa et al. (2000) as a background in their analysis. They argue that their results did not significantly change with and without this component. Non detection by Chandra and XMM is not surprising, since their 4-8 keV background count rate, normalized to the effective area and the sky area, is 8 and 4 times higher than that of the GIS, respectively (figure 1). Specifically, the hard excess emission from HCG 62 amounts to 12% of the 4-8 keV GIS background (CXB+NXB), whereas it is only \\(\\sim 3\\)% of the total background of XMM; this is smaller than the background modeling uncertainty of \\(\\sim 5\\)%, obtained after sophisticated screening (e.g. Nevalainen et al. 2005). #### 4.4.2 The NGC 1399 group Since the NGC 1399 group (the Fornax cluster) is one of the X-ray brightest groups in the sky, the data have very high statistics, requiring careful analyses. The soft-band spectra can be approximated by two temperatures of 0.8 keV and 1.4 keV (table 4), which are consistent with the virial temperature of \\(\\sim 1\\) keV implied by the galaxy velocity dispersion of 374 km s\\({}^{-1}\\) (Drinkwater et al. 2001). Figure 7: Significance of the excess count rate in the GIS 4–8 keV spectra, above the IGM model determined in the energy range below 2.5 keV. Open boxes show those objects of which the soft-band data are reproduced with the vMEKAL model, while stars those which require 2-vMEKAL modeling. Error bars refer to the quadrature sum of statistical and systematic 1-sigma uncertainties. Result of NGC 5044, in which the soft-band fit is not acceptable in 99% confidence, is shown with a diamond. 
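As a rough cross-check of the quoted virial temperatures, the simple kinetic estimate \\(kT\\simeq\\mu m_{\\rm p}\\sigma^{2}\\) (with an assumed mean molecular weight \\(\\mu\\approx 0.6\\)) can be evaluated in a few lines of Python; this is only an order-of-magnitude sketch and not the \\(\\sigma\\)-\\(T\\) relation actually relied on in the text.

```python
M_P_KEV = 938272.0      # proton rest mass in keV/c^2
C_KMS = 299792.458      # speed of light in km/s
MU = 0.6                # mean molecular weight of a fully ionized plasma (assumed)

def kT_virial_keV(sigma_km_s):
    """kT ~ mu * m_p * sigma^2 for a line-of-sight velocity dispersion sigma."""
    return MU * M_P_KEV * (sigma_km_s / C_KMS) ** 2

print(kT_virial_keV(374.0))   # NGC 1399 (Drinkwater et al. 2001): ~0.9 keV
print(kT_virial_keV(376.0))   # HCG 62 (Zabludoff, Mulchaey 1998): ~0.9 keV
```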
Figure 9: The 4.0–8.0 keV GIS radial count-rate profile, shown as a function of two-dimensional radius from the group center. The on-source data are shown in red, and the background in black. Abscissa is in arcmin, while ordinate is in cts s\\({}^{-1}\\) arcmin\\({}^{-2}\\).

Compared to these, the hard-band derived temperature, \\(kT_{\\rm H}\\) = 1.84\\({}^{+0.09}_{-0.11}\\) keV (table 3), is significantly higher, suggesting the presence of a harder emission component. As an attempt to further examine the issue, we started by fitting the 2-vMEKAL model to the full band spectra. The abundances of Mg, Al, Si, and S were set free in addition to \\(Z_{\\alpha}\\) and \\(Z_{\\rm Fe}\\). The fit was far from acceptable (with \\(\\chi^{2}/{\\rm dof}\\) = 292.1/191), due to the possible calibration discrepancies around 1.2 and 2.1 keV mentioned in section 4.2.4. By ignoring the energy bands of 1.15-1.25 keV and 2.1-2.2 keV, the 2-vMEKAL model fit improved to \\(\\chi^{2}/{\\rm dof}\\)=224.8/173, but was still unacceptable at the 99.5% level. By further adding a power-law with fixed \\(\\Gamma\\) = 2.0 (2-vMEKAL+PL model), we finally obtained a fit acceptable at the 90% level (with \\(\\chi^{2}/{\\rm dof}\\)=195.2/171). The results are presented in table 6 and figure 8. (In the figure, the two ignored energy bands are restored for clarity.) The power-law flux \\(F_{\\rm hard}\\) is 12% of \\(F_{\\rm soft}^{\\rm IGM}\\).

Figure 8: Results of the joint 2-vMEKAL+PL model fitting to the GIS and SIS spectra of HCG 62 and NGC 1399. That of RGH 80, fitted with the vMEKAL+PL model, is also presented. See text for detail.

\\begin{table} \\begin{tabular}{l l l l l l l l} \\hline \\hline Region & \\(kT_{1}\\) & \\(kT_{2}\\) & Abundance & \\(F_{\\rm hard}\\) & \\(\\chi^{2}/{\\rm dof}\\) & \\(\\Delta\\chi^{2}\\) \\\\ & (keV) & (keV) & \\(Z_{\\alpha}\\) & \\(Z_{\\rm Fe}\\) & & & \\\\ \\hline
\\(0^{\\prime}<r<3^{\\prime}\\) & \\(0.87^{+0.03}_{-0.01}\\) & - & \\(0.69^{+0.25}_{-0.24}\\) & \\(0.34^{+0.19}_{-0.10}\\) & \\(0.26^{+0.04}_{-0.04}\\) & 1.11 & 77.3/66 & 79.3 \\\\
 & \\(0.77^{+0.05}_{-0.14}\\) & \\(1.42^{+0.27}_{-0.34}\\) & \\(1.26^{+2.43}_{-0.63}\\) & \\(0.69^{+1.11}_{-0.27}\\) & \\(0.25^{+0.07}_{-0.04}\\) & 1.24 & 55.8/63 & 23.9 \\\\
\\(3^{\\prime}<r<7^{\\prime}.5\\) & \\(1.10^{+0.18}_{-0.05}\\) & - & \\(0.26^{+0.13}_{-0.11}\\) & \\(0.16^{+0.04}_{-0.08}\\) & \\(0.30^{+0.08}_{-0.08}\\) & 1.06 & 58.6/64 & 33.2 \\\\
\\(7^{\\prime}.5<r<15^{\\prime}\\) & \\(0.84^{+0.16}_{-0.09}\\) & - & \\(0.13^{+0.26}_{-0.13}\\) & \\(0.15^{+0.11}_{-0.06}\\) & \\(0.66^{+0.14}_{-0.16}\\) & 1.72 & 59.8/58 & 37.6 \\\\ \\hline
\\(3^{\\prime}<r<15^{\\prime}\\) & \\(0.97^{+0.06}_{-0.09}\\) & - & \\(0.19^{+0.15}_{-0.11}\\) & \\(0.17^{+0.06}_{-0.04}\\) & \\(0.92^{+0.18}_{-0.17}\\) & 2.71 & 73.2/65 & 67.8 \\\\ \\hline \\end{tabular} \\end{table} Table 7: Fit results to the radially sorted SIS+GIS spectra of HCG 62, with the model including a power-law.

In the 2-vMEKAL fit after ignoring the two energy bands, the best fit value of the hotter component temperature is derived as \\(kT_{2}=1.79\\) keV. Since it is close to \\(kT_{\\rm H}\\) = 1.84 keV, and since the 2-vMEKAL fit is almost successful, a careful examination is needed to judge whether the data really require the hard component. The improvement in \\(\\chi^{2}\\) by adding a power-law component is \\(\\Delta\\chi^{2}=29.6\\) (for two additional parameters), which is significant with a chance probability of less than \\(10^{-5}\\) in terms of an \\(F\\)-test. 
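The quoted chance probabilities can be reproduced with the standard \\(F\\)-test for an additional model component (the same test applied to HCG 62 in section 4.4.1); the short Python sketch below, using scipy, only illustrates that arithmetic and does not claim to be the exact implementation used by the authors.

```python
from scipy.stats import f

def ftest_added_component(chi2_old, dof_old, chi2_new, dof_new):
    """F-test for the significance of extra model parameters."""
    dnu = dof_old - dof_new                              # number of added parameters
    fstat = ((chi2_old - chi2_new) / dnu) / (chi2_new / dof_new)
    return fstat, f.sf(fstat, dnu, dof_new)              # (F, chance probability)

# NGC 1399: 2-vMEKAL (224.8/173) vs 2-vMEKAL+PL (195.2/171)
print(ftest_added_component(224.8, 173, 195.2, 171))     # p ~ 6e-6  (< 1e-5)
# HCG 62: 2-vMEKAL (84.4/66) vs 2-vMEKAL+PL (61.3/64), table 6
print(ftest_added_component(84.4, 66, 61.3, 64))         # p ~ 4e-5  (< 1e-4)
```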
Furthermore, the resultant fit with the 2-vMEKAL+PL model is statistically acceptable. Therefore, the excess hard X-ray emission is likely to be present in the NGC 1399 group as well. If we replace the power-law with a third thermal component, a similarly good (\\(\\chi^{2}\\)/dof=189.6/170) full-band fit can be obtained. In this case, however, the temperature becomes as high as \\(3.2^{+2.6}_{-0.9}\\) keV. Thus, the excess hard component in this object can be explained by a \\(\\Gamma=2.0\\) power-law emission which is possibly of non-thermal nature, or by a very hot thermal emission with a temperature of \\(\\gtrsim 3\\) keV.

#### 4.4.3 The RGH 80 group

The last of the 3 selected objects, RGH 80, is the most distant one in our sample, yet it shows rather strong hard X-ray signals above the extrapolated IGM contribution. The full band fit with a single vMEKAL model was not acceptable (with \\(\\chi^{2}/\\mathrm{dof}=111.5/66\\)). When we added a power-law component with \\(\\Gamma\\) again fixed at 2.0, the fit greatly improved and became acceptable with \\(\\chi^{2}/\\mathrm{dof}=58.2/64\\) (see table 6 and figure 8). Although the absolute value of \\(F_{\\mathrm{hard}}\\) itself is rather low, it amounts to \\(\\sim 50\\%\\) of \\(F_{\\mathrm{soft}}^{\\mathrm{IGM}}\\). When we added a second thermal component in place of the power-law (2-vMEKAL model), a similarly good fit was obtained with \\(\\chi^{2}/\\mathrm{dof}=58.7/63\\). However, the obtained hotter temperature is \\(kT_{2}=11.0^{+\\infty}_{-8.3}\\) keV, that is, higher than 2.7 keV at its lower bound. Since this value is considerably higher than the cooler one (\\(kT_{1}=1.09^{+0.07}_{-0.04}\\) keV) and much exceeds typical virial temperatures of galaxy groups, the excess hard signals are likely to be of non-thermal origin. These properties make this object resemble HCG 62, but with poorer statistics. Buote (2000) analyzed the ASCA spectra of RGH 80 derived from the central \\(\\sim 3\\arcmin.6\\), also using a two-temperature thermal emission model, and obtained a hotter temperature of \\(kT_{2}=1.64^{+0.21}_{-0.17}\\) keV. Although this is apparently inconsistent with our results, we confirmed that we can reproduce Buote's result by extracting the spectra from a similar region. Therefore, the hard emission is inferred to be stronger in the \\(3\\arcmin<r<10\\arcmin\\) region.

## 5 Discussion

### Summary of analysis results

Through ASCA observations of 18 nearby low-temperature galaxy groups, an indication of an excess hard component was obtained for 3 objects. The excess first manifested itself as large differences between the temperatures inferred in the soft band below 2.5 keV (\\(kT_{\\mathrm{S}}\\)) and in the hard band above 2.5 keV (\\(kT_{\\mathrm{H}}\\)), which are determined mainly by the Fe-L emission lines and the bremsstrahlung continuum, respectively. In order to quantify the suggested hard-band excess, we represented the IGM emission by a vMEKAL or 2-vMEKAL model determined in the energy range below 2.5 keV, and extrapolated it to the 4-8 keV GIS range. Then, even when fully taking into account the CXB fluctuation and the NXB estimation error, three objects (HCG 62, NGC 1399, and RGH 80) showed significant (\\(>2\\sigma\\)) excess counts above the expected IGM contribution (section 4.3). From the full band fitting to these objects (section 4.4), we found that the excess can be successfully represented by a \\(\\Gamma=2.0\\) power-law model, and hence it is likely to be of non-thermal origin, particularly in HCG 62 and RGH 80. 
The spectra of NGC 1399 can be explained either by adding a non-thermal emission with a flux of about 10% of that of the IGM, or the third thermal component having a temperature \\(\\gtrsim 3\\) keV. ### Hard X-ray excess compared to other parameters In section 4.4, we derived the value of \\(F_{\\mathrm{hard}}\\) and the associated errors for HCG 62, NGC 1399 and RGH 80. In this section, we briefly compare it to other parameters of these objects. The calculated 2-10 keV luminosity of the power-law component, \\(L_{\\mathrm{hard}}\\), is presented in table 8, together with the 0.7-2.5 keV luminosity of the IGM component (\\(L_{\\mathrm{soft}}^{\\mathrm{IGM}}\\)). Both the values of \\(F_{\\mathrm{soft}}\\) and \\(kT_{\\mathrm{S}}\\) of the three objects are typical in our sample. In addition, some other objects in our sample have similar temperatures and IGM luminosities to the three object, yet without excess hard signals. The lack of correlations to these parameters suggests that the phenomenon we have been studying is not likely to be artifacts caused by wrong background subtraction, or incorrect modeling of the IGM contribution. An elliptical galaxy is known to emit an X-ray component with a rather hard spectrum, as a sum of discrete sources in it, such as LMXBs in particular (Canizares et al. 1987; Matsushita et al. 1994; Matsushita et al. 2000). This component could provide a possible explanation to the hard excess emission, because LMXB spectra, approximated by a thermal bremsstrahlung with \\(kT\\sim 10\\) keV, are generally consistent with what has been observed in Figure 10: Excess hard luminosity, \\(L_{\\mathrm{hard}}\\), compared with \\(L_{\\mathrm{LMXB}}\\) which is derived from the X-ray to optical flux ratios of elliptical galaxies. Solid, dashed and dotted lines represent, 1, 10 and 100 times of \\(L_{\\mathrm{LMXB}}\\), respectively. the present study. The integrated 2-10 keV discrete-source luminosity, \\(L_{\\rm LMXB}\\), of each elliptical galaxy is known to be approximately proportional to its B-band luminosity, \\(L_{B}\\). We hence estimated \\(L_{\\rm LMXB}\\) in our sample objects, using \\(L_{B}\\) given in table 1 and the relation of \\(L_{\\rm LMXB}=4\\times 10^{39}(L_{B}/10^{10}L_{\\odot})\\) erg s\\({}^{-1}\\) (converted from Matsushita et al. 2000). The result, presented in figure 10, shows that \\(L_{\\rm hard}\\) of HCG 62, NGC 1399 and RGH 80 is 20, 5 and 140 times higher than the estimated \\(L_{\\rm LMXB}\\), respectively. Thus, the discrete-source contribution cannot explain away the hard-excess phenomenon. Similar excess hard X-ray fluxes from a few Virgo elliptical galaxies were reported by Loewenstein et al. (2001). ### Possible Emission Mechanisms Even in the case of HCG 62 which shows the most significant hard X-ray excess, the spectral shape of the excess emission is not well constrained; \\(\\Gamma=2.17^{+0.28}_{-0.53}\\). Therefore, it is rather difficult to tell whether the emission is of thermal or non-thermal origin. In this section, we briefly discuss both scenarios, using HCG 62 as a representative case. #### 5.3.1 Non-thermal interpretation In the non-thermal scenario, two possibilities are generally discussed: inverse Compton (IC) emission from GeV electrons as they scatter off the cosmic microwave background photons, and non-thermal bremsstrahlung from sub-relativistic particles interacting with ambient plasmas (see e.g. Sarazin 1999 and Sarazin, Kempner 2000). However, the latter is unlikely, because of too low an efficiency (e.g. 
Petrosian 2001); sub-relativistic electrons suffer from 4 to 5 orders of magnitude larger energy loss in their Coulomb interactions with ambient ions, than their bremsstrahlung loss, making the energetics unrealistic (e.g. Fukazawa et al. 2001). Therefore, below we consider only the IC interpretation. In the IC scenario, the postulated GeV electrons should also emit synchrotron photons in the radio band. There is however no reported radio halo detection of HCG 62, and the 365 MHz Texas catalog (Douglas et al. 1996) gives an upper limit of \\(\\sim 0.4\\) Jy on the radio flux density from HCG 62. The comparison of this upper limit with our \\(F_{\\rm hard}\\sim 1.3\\times 10^{-12}\\) erg s\\({}^{-1}\\) cm\\({}^{-2}\\) in the 2-10 keV band yields an upper limit on the volume-averaged magnetic field as \\(B\\sim 0.1\\)\\(\\mu\\)G, assuming that a single population of electrons with an energy index of 3.0 are emitting both IC and synchrotron components under a uniform magnetic field. As already mentioned in Fukazawa et al. (2001), this limit appears too low for intra-group magnetic fields. Introduction of non-uniform magnetic fields and/or time evolution of the electron energy distribution (e.g. high-energy cutoff) may solve this discrepancy. For example, Brunetti et al. (2001) proposed a model explaining the non-thermal signature of the Coma cluster. In this cluster, observed radio halo flux and the hard X-ray flux suggested by Beppo-SAX (e.g. Fuso-Femiano et al. 1999) leads to a similarly low magnetic field (\\(\\sim 0.1\\)\\(\\mu\\)G) when a simple model for electron population is employed. By incorporating the radial dependence of magnetic field and assumed (re)-acceleration power, as well as the time evolution, in particular introducing the re-acceleration phase which modifies the electron spectra flatter with distinctive cut-off, they successfully reproduced the Coma results. Similar models may be able to explain the HCG 62 results. When \\(B<3\\)\\(\\mu\\)G, the electrons are expected to lose their energies predominantly in the IC channel. Therefore, to sustain \\(L_{\\rm hard}\\sim 4\\times 10^{41}\\) erg s\\({}^{-1}\\) under a steady-state condition, a comparable energy input should be supplied to the electrons. The recent Chandra detection of a pair of \"X-ray cavities\" near the central galaxy (NGC 4761) of HCG 62 (Vrtilek 2000 4, Morita et al. 2006) suggests a past AGN activity, which may well have supplied the needed energy input. Although NGC 4761 currently shows little evidence of AGN activity, the scenario remains intact if the putative AGN activity continued till 1 Gyr ago or later, because the cooling time of a 1 GeV electron due to the IC process is \\(\\sim 1\\) Gyr. Footnote 4: [http://chandra.harvard.edu/photo/2001/hcg62/](http://chandra.harvard.edu/photo/2001/hcg62/) Since HCG 62 is a compact group with a high galaxy density and a rather large velocity dispersion of \\(376^{+52}_{-46}\\) km s\\({}^{-1}\\) (Zabludoff, Mulchaey 1998), energy inputs may also be possible from magneto-hydrodynamic interactions of the member galaxies with the IGM (Makishima et al. 2001). Thus, the non-thermal interpretation is promising from the energetics view point. #### 5.3.2 Thermal interpretation If the hard excess in our sample objects are interpreted as thermal emission, the inferred temperature ranges from \\(\\sim 3\\) keV or higher. 
In order to explain an \\(L_{\\rm hard}\\) that amounts to up to 25% of \\(L_{\\rm soft}\\), the putative hotter gas must fill \\(>70\\%\\) of the group volume, assuming that it is in a pressure balance with the \\(\\sim 1\\) keV IGM. That is, the hotter gas is implied to be energetically dominant in the intragroup space. Since the velocity dispersion of HCG 62 as quoted above translates to a virial temperature of only \\(\\sim\\!1\\) keV (e.g. Xue, Wu, 2000), the postulated gas is concluded to be significantly hotter than implied by the gravitational potential felt by the galaxies. However, if the gas were gravitationally unbound and freely escaping with sound velocity, the necessary energy input would become enormous, because the escape time of a 2 keV gas is two orders of magnitude shorter than its radiative cooling time, assuming a representative density of \\(\\sim 3\\times 10^{-4}\\) cm\\({}^{-3}\\). Another possibility is that such an object is surrounded by a much deeper gravitational potential halo with a considerably larger scale. Actually, the presence of such a large-scale halo has been suggested by Matsushita et al. (1998) around the elliptical galaxy NGC 4636. In any case, the presence of such a hot gaseous component would have a profound impact on the structure and formation of galaxy groups.

\\begin{table} \\begin{tabular}{l l l l} \\hline \\hline target & \\(L_{\\rm soft}^{\\rm IGM}\\)\\({}^{*}\\) & \\(L_{\\rm hard}\\)\\({}^{\\dagger}\\) & \\(L_{\\rm LMXB}\\)\\({}^{\\ddagger}\\) \\\\ \\hline HCG 62 & 181.2 & \\(55.3\\pm 14.6\\) & 2.5 \\\\ NGC 1399 & 73.7 & \\(8.5\\pm 2.1\\) & 1.8 \\\\ RGH 80 & 341.1 & \\(182.4\\pm 63.7\\) & 1.3 \\\\ \\hline \\multicolumn{4}{l}{\\({}^{*}\\) Luminosity of the IGM emission in 0.7-2.5 keV, in unit of \\(10^{40}\\) erg s\\({}^{-1}\\).} \\\\ \\multicolumn{4}{l}{\\({}^{\\dagger}\\) Luminosity of the excess hard emission in 2-10 keV, in unit of \\(10^{40}\\) erg s\\({}^{-1}\\). Errors are 1\\(\\sigma\\) including both the statistical and systematic origins.} \\\\ \\multicolumn{4}{l}{\\({}^{\\ddagger}\\) Luminosity of the expected LMXB emission in 0.7-2.5 keV, in unit of \\(10^{40}\\) erg s\\({}^{-1}\\).} \\\\ \\end{tabular} \\end{table} Table 8: Luminosity of the IGM and excess emission of HCG 62, NGC 1399 and RGH 80. Estimated contribution from LMXBs in the member galaxies is also shown.

## 6 Conclusion and future prospect

From the detailed analysis of the hard (\\(>2.5\\) keV) band X-ray spectra of groups of galaxies obtained with ASCA, evidence of excess hard X-ray emission has been suggested for 3 out of the 18 objects investigated. They are HCG 62, NGC 1399 and RGH 80. The emission cannot be explained by fluctuations in the CXB brightness, the NXB estimation error, or contributions of point sources in the member galaxies. The excess cannot be explained away by assuming a moderate temperature gradient in the IGM, either. At least in HCG 62, the hard X-ray emission is as extended as the thermal IGM emission. The observed excess hard X-ray emission can be modeled by a power law with photon index fixed at 2.0, or by a high-temperature thermal component, although it is difficult to distinguish between the two. If considered to be of non-thermal origin, the observed hard X-ray emission can be most reasonably interpreted as inverse-Compton emission by GeV electrons accelerated in these systems. 
In contrast, thermal interpretation of the phenomenon leads to an inference that some of groups of galaxies are surrounded by a much deeper (and probably of larger-scale) gravitational halo than those felt by the member galaxies. In order to further promote the study, we may utilize the Suzaku mission, launched into orbit on 10th July, 2005. Actually, the X-ray Imaging Spectrometer onboard the satellite has a 6 times larger effective area than the ASCA GIS at 7 keV in total, and its background, when normalized to the solid angle and effective area, is only slightly higher than that of the GIS. The authors a greatly thankful to the anonymous referee for critical reading and providing fruitful comments on this work. ## References * () Birzan, L., Rafferty, D. A., McNamara, B. R., Wise, M. W., & Nulsen, P. E. J. 2004, ApJ, 607, 800 * () Briel, U. G., Finoguenov, A., & Henry, J. P. 2004, A&A, 426, 1 * () Brunetti, G., Setti, G., Feretti, L., & Giovannini, G. 2001, MNRAS, 320, 365 * () Buote, D. A. 2000, MNRAS, 311, 176 * () Drinkwater, M. J., Gregg, M. D.& Colless, M. 2001, ApJ, 548, L139 * () Canizares, C. R., Fabbiano, G., & Trinchieri, G. 1987, ApJ, 312, 503 * () de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G., Jr., Buta, R. J., Paturel, G., & Fouque, P. 1991, Third Reference Catalogue of Bright Galaxies (Springer-Verlag Berlin Heidelberg New York) * () Dickey, J. M. & Lockman, F. J. 1990, ARA&A, 28, 215 * () Douglas, J. N., Bash, F. N., Bozyan, F. A., Torrence, G. W., & Wolfe, C. 1996, AJ, 111, 1945 * () Fadda, D., Girardi, M., Giuricin, G., Mardirossian, F., & Mezzetti, M. 1996, ApJ, 473, 670 * () Feretti, L., & Giovannini, G. 1996, in Extragalactic radio sources, IAUS, ed. Ron D. Ekers, C. Fanti, & L. Padrielli (Kluwer Academic Publishers), 333 * () Ferguson, H. C., & Sandage, A. 1990, ApJ, 100, 1 * () Finoguenov, A., Jones, C., Bohringer, H., & Ponman, T. J. 2002, ApJ, 578, 74 * () Fukazawa, Y. 1997, Ph.D. thesis, University of Tokyo * () Fukazawa, Y., Nakazawa, K., Isobe, N., Makishima, K., Matsushita, K., Ohashi, T., & Kamae, T. 2001, ApJ, 546, L87 * () Furusho, T., Yamasaki, N. Y., Ohashi, T., Shibata, R., & Ezawa, H. 2001, ApJ, 561, L165 * () Fusco-Femiano, R., Fiume, D. D., Feretti, L., Giovannini, G., Grandi, P., Matt, G., Molendi, S. & Santangelo, A. ApJ, 513, L21 * () Hickson, P. 1982, ApJ, 255, 382 * () Hwang, U., Mushotzky, R. F., Burns, J. O., Fukazawa, Y., & White, R. A. 1999, ApJ, 516, 604 * () Ishisaki, Y. 1997, Ph.D. thesis, University of Tokyo * () Kaneda, H. et al. 1995, ApJ, 453, L13 * () Kushino, A., Ishisaki, Y., Morita, U., Yamasaki, N. Y., Ishida, M., Ohashi, T., & Ueda, Y. 2002, PASJ, 54, 327 * () Ledlow, M. J., Loken, C., Burns, J. O., Hill, J. M., & White, R. A. 1996, AJ, 112, 388 * () Loewenstein, M., Valinia, A. & Mushotzky, R. F. 2001,ApJ, 547, 722 * () Makishima, K., et al. 1996, PASJ, 48, 171 * () Makishima, K. et al. 2001, PASJ, 53, 401 * () Markevitch, M., et al. 2003, ApJ, 586, L19 * () Matsushita, K. et al. 1994, ApJ, 436, 41 * () Matsushita, K., Makishima, K., Rokutanda, E., Yamasaki, N. Y., & Ohashi, T. 1997, ApJ, 488, L125 * () Matsushita, K. 1997, Ph. D. thesis, University of Tokyo * () Matsushita, K., Makishima, K., Ikebe, Y., Rokutanda, E., Yamasaki, N. Y. & Ohashi, T. 1998 ApJ, 499, L13 * () Matsushita, K., Ohashi, T., & Makishima, K. 2000, PASJ, 52, 685 * () Morita, U., Ishisaki, Y., Yamasaki, N. Y., Ota, N., Kawano, N., Fukazawa, Y., & Ohashi, T. 2006, PASJ, 58, 719 * () Mulchaey, J. S., Davis, D. S., Mushotzky, R. F., & Burstein, D. 
1996, ApJ, 456, 80 * () Mulchaey, J. S., Davis, D. S., Mushotzky, R. F., & Burstein, D. 2003, ApJS, 145, 39 * () Nevalainen, J., Markevitch, M., & Lumb, D. 2005, ApJ, 629, 172 * () Ohashi, T. et al. 1996, PASJ, 48, 157 * () Petrosian, V. 2001, ApJ, 557, 560 * () Ramella, M., Geller, M. J., Huchra, J. P., & Thorstensen, J. R. 1995, AJ, 109, 145 * () Sarazin, C. L. 1999, ApJ, 520, 529 * () Sarazin, C. L., & Kempner, J. C. 2000, ApJ, 533, 73 * () Shafer, R. A., & Fabian, A. C. 1983, in Early evolution of the universe and its present structure, IAUS, (Dordrecht and Boston, D. Reidel Publishing Co.), 333 * () Tanaka, Y., Inoue, H., & Holt, S. S. 1994, PASJ, 46, L37 * () Wegner, G., Haynes, M. P., & Giovanelli, R. 1993, AJ, 105, 1251 * () Zabludoff, A. I., & Mulchaey, J. S. 1998, ApJ, 496, 39
X-ray spectra of groups of galaxies, obtained with the GIS instrument onboard ASCA, were investigated for diffuse hard X-rays in excess of the soft thermal emission from their inter-galactic medium (IGM). In total, 18 objects with IGM temperatures of 0.7-1.7 keV were studied, including HCG 62 in particular. Non-X-ray backgrounds in the GIS spectra were carefully estimated and subtracted. The IGM emission was represented by up to two-temperature thermal models, which were determined in a soft energy band below 2.5 keV mainly by the SIS data. When extrapolated to a higher energy range of 4-8 keV, this thermal model under-predicted the background-subtracted GIS counts in HCG 62 and RGH 80 at \\(>2\\sigma\\) significance, even though the background uncertainties and the IGM modeling errors were carefully accounted for. A hard excess could also be present in NGC 1399. The excess was successfully explained by a power-law model with a photon index \\(\\sim 2\\), or by a thermal emission with a temperature exceeding \\(\\sim 3\\) keV. In HCG 62, the 2-10 keV luminosity of the excess hard component was found to be \\(5.5\\times 10^{41}\\) erg s\\({}^{-1}\\), which is \\(\\sim 30\\) percent of the thermal IGM luminosity in 0.7-2.5 keV. Non-thermal and thermal interpretations of this excess component are discussed. Kazuhiro Nakazawa, Makishima galaxies: intergalactic medium -- galaxies: clusters: general -- X-rays: galaxies: clusters
arxiv-format/0612783v1.md
# Compact star constraints on the high-density EoS H. Grigorian 1Institut fur Physik, Universitat Rostock, 18051 Rostock, Germany 12Department of Physics, Yerevan State University, 375047 Yerevan, Armenia 23Laboratory for Information Technologies, JINR Dubna, 141980 Dubna, Russia 3 D. Blaschke 1Institut fur Physik, Universitat Rostock, 18051 Rostock, Germany 14Bogoliubov Laboratory for Theoretical Physics, JINR Dubna, 141980 Dubna, Russia 45Instytut Fizyki Teoretycznej, Uniwersyt Wroclawski, 50-204 Wroclaw, Poland 5 T. Klahn 1Institut fur Physik, Universitat Rostock, 18051 Rostock, Germany 16Gesellschaft fur Schwerionenforschung mbH (GSI), 64291 Darmstadt, Germany 6 ## 1 Introduction Recently, new observational limits for the mass and the mass-radius relationship of CSs have been obtained which provide stringent constraints on the equation of state of strongly interacting matter at high densities, see Klahn et al. (2006) and references therein. In this latter work several modern nuclear EsoS have been tested regarding their compatibility with phenomenology. It turned out that none of these nuclear EsoS meets all constraints whereas every constraint could have been fulfilled by some EsoS. As we will point out in this contribution, a phase transition to quark matter in the interior of CSs might resolve this problem. In the following we will apply an exemplary EoS for NM obtained from the ab-initio relativistic Dirac-Brueckner-Hartree-Fock (DBHF) approach using the Bonn A potential (van Dalen et al., 2005). There is not yet an ab-initio approach to the high-density EoS formulated in quark and gluon degrees of freedom, since it would require an essentially nonperturbative treatment of QCD at finite chemical potentials. For some promising steps in the direction of a unified QM-NM description on the quark level, we refer to the nonrelativistic potential model approach by Ropke et al. (1986) and the NJL model one by Lawley et al. (2006). Simulations of QCD on the Lattice meet serious problems in the low-temperature finite-density domain of the QCD phase diagram relevant for CS studies. However, there are modern effective approaches to high-density QM which, albeit still simplified, focus on specific nonperturbative aspects of QCD. They differ from the traditional bag model approach and allow for CS configurations with sufficiently large masses, see Alford et al. (2006). For our QM description we employ a three-flavor chiral quark model of the NJL type with selfconsistent mean fields in the scalar meson (coupling \\(G_{S}\\)) and scalar diquark (coupling \\(G_{D}=\\eta_{D}\\)\\(G_{S}\\)) channels (Blaschke et al., 2005), generalized by including a vector meson mean field (coupling \\(G_{V}=\\eta_{V}\\)\\(G_{S}\\)), see Klahn et al. (2006a). We show that the presence of a QM core in the interior of CSs does not contradict any of the discussed constraints. Moreover, CSs with a QM interior would be assigned to the fast coolers in the CS temperature-age diagram. Another interesting outcome of our investigations is the prediction of a small latent heat for the deconfinement phase transition in both, symmetric and asymmetric NM. Such a behavior leads to hybrid stars that \"masquerade\" as neutron stars and has been discussed earlier by Alford et al. (2005) for a different EoS. This finding is of relevance for future heavy-ion collision programs at FAIR Darmstadt. ## 2 The flow constraint from HICs The behaviour of elliptic flow in heavy-ion collisions is related to the EoS of isospin symmetric matter. 
The upper and lower limits for the stiffness deduced from such analyses (Danielewicz et al., 2002) are indicated in Fig. 1 as a shaded region. The nuclear DBHF EoS is soft at moderate densities with a compressibility \\(K=230\\) MeV (van Dalen et al. 2004, Gross-Boelting et al. 1999), but tends to violate the flow constraint for densities above 2-3 times nuclear saturation. As a possible solution to this problem we adopt a phase transition to QM with an EoS fixed to sketch the upper boundary of the flow constraint. In order to obtain an EoS as stiff as possible we use a vector coupling of \\(\\eta_{V}=0.50\\) and a diquark coupling of \\(\\eta_{D}=1.03\\). Herewith the EoS is completely fixed.

Figure 1: Constraint on the high-density behavior of the EoS from simulations of flow data from heavy-ion collision experiments (shaded area from Danielewicz et al. 2002) compared to the nuclear matter and hybrid EsoS discussed in the text.

## 3 Constraints from astrophysics

### Maximum mass and mass-radius constraints

These most severe constraints come in particular from the mass measurement for PSR J0751+1807 (Nice et al. 2005) giving a lower limit for the maximum mass \\(\\approx 1.9~{}M_{\\odot}\\) at the \\(1\\sigma\\) level, and from the thermal emission of RX J1856-3754 (Trumper et al. 2004) providing a lower limit in the mass-radius plane with minimal radii \\(R>12\\) km. These constraints can only be fulfilled by a rather stiff EoS. The stiffest quark matter contribution to the EoS which still fulfills the flow constraint in symmetric matter corresponds to \\(\\eta_{V}=0.5\\) with a maximum mass for hybrid stars \\(\\approx 2.1~{}M_{\\odot}\\), rather independent of the choice of \\(\\eta_{D}\\) which fixes the critical mass for the onset of deconfinement, see Figs. 2, 3. For a more detailed discussion, see Klahn et al. (2006), Klahn et al. (2006a).

Figure 2: Stable CS configurations for neutron stars (DBHF) and hybrid stars, characterized by the parameters \\(\\eta_{D}\\) and \\(\\eta_{V}\\) of the quark matter EoS.

Figure 3: Mass-radius relations for CSs with possible phase transition to deconfined quark matter, see Klahn et al. (2006a).

### Cooling constraints

_Direct Urca (DU) processes_ are flavor-changing processes with the prototype being \\(n\\to p+e^{-}+\\bar{\\nu}_{e}\\) (Gamow and Schoenberg 1941), providing the most effective cooling mechanism in the hadronic layer of compact stars. It acts if the proton fraction \\(x\\) exceeds the DU threshold \\(x_{DU}\\), \\(x=n_{p}/(n_{n}+n_{p})\\geq x_{DU}\\). The threshold is given by \\(x_{DU}=0.11\\) (Lattimer et al. 1991) and rises up to \\(x_{DU}=0.14\\) upon inclusion of muons. Although the onset of the DU process entails a sensitive dependence of the cooling curves on the star masses, hadronic cooling with realistic pairing gaps is not sufficient to explain young, nearby X-ray dim objects, like Vela, with typical CS masses not exceeding 1.5 \\(M_{\\odot}\\) (Blaschke et al. 2004, Grigorian et al. 2005). The point on the stability curve in Fig. 2 marks the DU threshold density for the DBHF EoS. Quark matter DU processes provide enhanced cooling, characterized by the diquark pairing gaps (Blaschke et al. 2000, Page et al. 2000) and their density dependence (Grigorian et al. 2005, Popov et al. 2006a). For a recent review, see Sedrakian (2007). To verify this rather heuristic approach we apply explicit calculations of the cooling of hybrid configurations which shall describe present data of the _temperature-age_ distribution of CSs. 
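The quoted threshold values can be recovered from momentum conservation, \\(p_{F_n}\\leq p_{F_p}+p_{F_e}\\), together with charge neutrality; the following Python sketch is only a back-of-the-envelope check (massless leptons assumed, so the muon case is an upper estimate), not the calculation of Lattimer et al. (1991).

```python
def x_du(electrons_per_proton=1.0):
    """DU threshold proton fraction; electrons_per_proton = n_e/n_p
    (1.0 for npe matter, 0.5 when muons carry half of the lepton density)."""
    # Fermi momenta scale as n^(1/3); the threshold condition is p_Fn = p_Fp + p_Fe.
    nn_over_np = (1.0 + electrons_per_proton ** (1.0 / 3.0)) ** 3
    return 1.0 / (1.0 + nn_over_np)

print(x_du(1.0))   # ~0.111 -> x_DU = 0.11 for npe matter
print(x_du(0.5))   # ~0.148 in the massless-muon limit, consistent with ~0.14
```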
The main processes in nuclear matter that we accounted for are the direct Urca, the medium modified Urca and the pair breaking and formation processes. Furthermore, we accounted for the \\(1S_{0}\\) neutron and proton gaps and the suppression of the \\(3P_{2}\\) neutron gap. For the calculation of the cooling of the quark core we incorporated the most efficient processes, namely the quark modified Urca process, the quark bremsstrahlung, the electron bremsstrahlung and the massive gluon-photon decay. In the 2-flavor superconducting phase one color of quarks remains unpaired. Here we assume a small residual pairing (\\(\\Delta_{X}\\)) of the hitherto unpaired quarks. For detailed discussions of cooling calculations and the required ingredients see Blaschke et al. 2004, Popov et al. 2006a, 1 and references therein. The resulting temperature-age relations for the introduced hybrid EoS are shown in Fig. 4. The critical density for the transition from nuclear to quark matter has been set to a corresponding CS mass of \\(M_{\\rm crit}=1.22~{}M_{\\odot}\\). All cooling data points are covered and correspond to CS configurations with reasonable masses. In this picture slow coolers correspond to light, pure neutron stars (\\(M<M_{\\rm crit}\\)), whereas fast coolers are rather massive CSs (\\(M>M_{\\rm crit}\\)) with a QM core. Another constraint on the temperature-age relation is given by the _maximum brightness_ of CSs, as discussed by Grigorian (2006). It is based on the fact that despite many observational efforts one has not observed very hot NSs (\\(\\log\\)T \\(>6.3-6.4\\) K) with ages of \\(10^{3}\\) - \\(10^{4.5}\\) years. Since it would be very easy to find them - if they exist in the galaxy - one has to conclude that at least their fraction is very small. Therefore a realistic model should not predict CSs with typical masses at temperatures noticeably higher than the observed ones. The region of avoidance is the hatched trapezoidal region in Fig. 4. The final CS cooling constraint in our scheme is given by the _Log N-Log S_ distribution, where \\(N\\) is the number of sources with observed fluxes larger than \\(S\\). This integral distribution grows towards lower fluxes and is inferred, e.g., from the ROSAT all-sky survey (Neuhauser and Trumper 1999). The observed _Log N-Log S_ distribution is compared with the ones calculated in the framework of a population synthesis approach in Fig. 5. A detailed discussion of merits and drawbacks can be found in Popov et al. (2006). Altogether, the hybrid star cooling behavior obtained for our EoS fits all of the sketched constraints under the assumption of the existence of a 2SC phase with X-gaps.

## 4 Outlook: The QCD phase-diagram

Within the previous sections we exemplified how to apply the testing scheme introduced in Klahn et al. (2006) to the modeling of a reliable hybrid EoS with a NM-QM phase transition that fulfills a wide range of constraints from HICs and astrophysics. In a next step we extend the description to finite temperatures, focusing on the behaviour at the transition line. For this purpose we apply a relativistic mean-field model with density-dependent masses and couplings (Typel 2005) adapted such as to mimic the DBHF-EoS and generalize to finite temperatures (DD-F4).

Figure 4: Cooling evolution for hybrid stars of different masses given in units of \\(M_{\\odot}\\). Note that Vela is described with a typical CS mass not exceeding 1.45 \\(M_{\\odot}\\). From Popov et al. 2006a. 
Figure 5: Comparison of observational data for the LogN-LogS distribution with results from population synthesis using hybrid star cooling according to Popov et al. 2006a.

Fig. 6 shows the resulting phase diagram including the transition from nuclear to quark matter (\\(\\eta_{D}=1.030,\\eta_{V}=0.50\\)), which exhibits almost a crossover transition with a negligibly small coexistence region and a tiny density jump. At temperatures beyond \\(T\\sim 45\\) MeV our NM description is not reliable any more since contributions from mesons, hyperons and nuclear resonances are missing. This will be amended in future studies.

## 5 Conclusions

We have presented a new scheme for testing nuclear matter equations of state at supernuclear densities using constraints from neutron star and HIC phenomenology. Modern constraints from the mass and mass-radius-relation measurements require stiff EoSs at high densities, whereas flow data from heavy-ion collisions seem to disfavor too stiff a behavior of the EoS. As a compromise we have presented a hybrid EoS with a phase transition to color superconducting quark matter which, due to a vector meson mean field, is stiff enough at high densities to allow compact stars with a mass of 2 \\(M_{\\odot}\\). Such a hybrid EoS could be superior to a purely hadronic one as it allows a faster cooling of objects within the typical CS mass region. This way, young nearby X-ray dim objects such as Vela could be explained with masses not exceeding 1.5 \\(M_{\\odot}\\). The present hybrid EoS predicts hybrid stars that "masquerade" as neutron stars, suggesting only a tiny density jump at the phase transition. This characteristic is also present for the symmetric matter case and persists at higher temperatures in the QCD phase diagram. It is suggested that the CBM experiment at FAIR might softly enter the quark matter domain without extraordinary hydrodynamical effects from the deconfinement transition.

###### Acknowledgements. We thank all our collaborators who have contributed to these results, in particular D. Aguilera, J. Berdermann, C. Fuchs, S. Popov, F. Sandin, S. Typel, and D.N. Voskresensky. The work is supported by DFG under grant 436 ARM 17/4/05 and by the Virtual Institute VH-VI-041 of the Helmholtz Association. We also gratefully acknowledge the support by J. E. Trumper and the organizers of the 363\\({}^{\\rm rd}\\) Heraeus seminar on "Neutron Stars and Pulsars".

## References

* (1) Alford M., Blaschke D., Drago A., Klahn T., Pagliara G., and Schaffner-Bielich J., 2006, arXiv:astro-ph/0606524. * (2) Alford M., Braby M., Paris M. W., and Reddy S., 2005, ApJ 629, 969 * (3) Blaschke D., Klahn T. and Voskresensky D. N., 2000, ApJ 533, 406 * (4) Blaschke D., Grigorian H., and Voskresensky D.N., 2004, A&A 424, 979 * (5) Blaschke D., Fredriksson S., Grigorian H., Oztas A.M. and Sandin F., 2005, PRD 72, 065020 * (6) Blaschke D., 2006, PoS JHW2005, 003 * (7) Danielewicz P., Lacey R., and Lynch W. G., 2002, Science 298, 1592 * (8) Gamow G., and Schoenberg M., 1941, Phys. Rev. 59, 539 * (9) Grigorian H., Blaschke D., and Aguilera D.N., 2004, PRC 69, 065802 * (10) Grigorian H., Blaschke D., and Voskresensky D.N., 2005, PRC 71, 045801 * (11) Grigorian H., 2006, PRC 74, 025801 * (12) Grigorian H., 2006a, Phys. Part. Nucl. Lett. 3, in press; arXiv:hep-ph/0602238. * (13) Gross-Boelting T., Fuchs C., and Faessler A., 1999, Nucl. Phys. A 648, 105 * (14) Klahn T. et al., 2006, PRC 74, 035802 * (15) Klahn T. et al., 2006a, arXiv:nucl-th/0609067 * (16) Lattimer J. M.
et al., 1991, Phys. Rev. Lett. 66, 2701 * (17) Lawley S., Bentz W., and Thomas A. W., 2006, J. Phys. G 32, 667 * (18) Neuhauser R., and Trumper J., 1999, A&A 343, 151 * (19) Nice D.J., et al., 2005, AJ 634, 1242 * (20) Popov S., Grigorian H., Turolla R., and Blaschke D., 2006, A&A 448, 327 * (21) Popov S., Grigorian H. and Blaschke D., 2006a, PRC 74, 025803 * (22) Page D., Prakash M., Lattimer J. M., and Steiner A., PRL 85, 2048 * (23) Ropke G., Blaschke D., and Schulz H., 1986, PRD 34, 3499. * (24) Sedrakian A., 2007, Prog. Part. Nucl. Phys. 58, 168 * (25) Trumper J.E., Burwitz V., Haberl F., and Zavlin V.E., 2004, Nucl. Phys. Proc. Suppl. 132, 560 * (26) Typel S., 2005, PRC 71, 064301 * (27) van Dalen E.N.E., Fuchs C., and Faessler A., 2004, Nucl. Phys. A 744, 227; 2005, PRC 72, 065803 * (28) van Dalen E.N.E., Fuchs C., and Faessler A., 2005, PRL 95, 022302 Figure 6: Phase diagram for isospin symmetry using the most favorable hybrid EoS of the present study. The NM-2SC phase transition is almost a crossover. The model DD-F4 is used as a finite-temperature extension of DBHF. For the parameter set (\\(\\eta_{D}=0.75\\), \\(\\eta_{V}=0.0\\)) the flow constraint is fulfilled but no stable hybrid stars are obtained.
A new scheme for testing the nuclear matter (NM) equation of state (EoS) at high densities using constraints from compact star (CS) phenomenology is applied to neutron stars with a core of deconfined quark matter (QM). An acceptable EoS shall not be in conflict with the mass measurement of 2.1 \\(\\pm\\) 0.2 M\\({}_{\\odot}\\) (1 \\(\\sigma\\) level) for PSR J0751+1807 and the mass-radius relation deduced from the thermal emission of RX J1856-3754. Further constraints for the state of matter in CS interiors come from temperature-age data for young, nearby objects. The CS cooling theory shall agree not only with these data, but also with the mass distribution inferred via population synthesis models as well as with LogN-LogS data. The scheme is applied to a set of hybrid EoSs with a phase transition to stiff, color superconducting QM which fulfills all of the above constraints and is constrained otherwise by NM saturation properties and flow data from heavy-ion collisions. We extrapolate our description to finite temperatures and draw conclusions for the QCD phase diagram to be explored in heavy-ion collision experiments.
Give a concise overview of the text below.
arxiv-format/0701065v1.md
# Vacuum Energy, EoS, and the Gluon Condensate at Finite Baryon Density in QCD

Ariel R. Zhitnitsky

## 1 Introduction

This talk is based on a few recent publications with Max Metlitski [1]. Neutron stars represent one of the densest concentrations of matter in our universe. The properties of super dense matter are fundamental to our understanding of the nature of nuclear forces as well as the underlying theory of strong interactions, QCD. Unfortunately, at the present time, we are not in a position to answer many important questions starting from the fundamental QCD Lagrangian. Instead, this problem is usually attacked by using some phenomenological models such as the MIT Bag model or the NJL model. Dimensional parameters (e.g. the vacuum energy) for these models are typically fixed by using available experimental data at zero baryon density. Once the parameters are fixed, the analysis of the EoS or other quantities is typically performed by assuming that the parameters of the models (e.g. the bag constant) at nonzero \\(\\mu\\) are the same as at \\(\\mu=0\\). The main lesson to be learned from the calculations presented below can be formulated as follows: the standard assumption (fixing the parameters of a model at \\(\\mu=0\\) while calculating the observables at nonzero \\(\\mu\\)) may be badly violated in QCD. The problem of the density dependence of the chiral and gluon condensates in QCD has been addressed long ago in [2]. The main motivation of ref. [2] was the application of the QCD sum rules technique to study some hadronic properties in the nuclear matter environment. The main result of those studies is that the effect is small. More precisely, at nuclear matter saturation density the change of the gluon condensate is only about 5%. Indeed, in the chiral limit the variation of the gluon condensate with density can be expressed as follows [2], \\[\\langle\\frac{bg^{2}}{32\\pi^{2}}G^{a}_{\\mu\\nu}G^{\\mu\\nu a}\\rangle_{\\rho_{B}}-\\langle\\frac{bg^{2}}{32\\pi^{2}}G^{a}_{\\mu\\nu}G^{\\mu\\nu a}\\rangle_{0}=-m_{N}\\rho_{B}+O(\\rho_{B}^{2}),\\ \\ b=\\frac{11N_{c}-2N_{f}}{3}, \\tag{1}\\] where the standard expression for the conformal anomaly is used, \\(\\Theta^{\\mu}_{\\mu}=-\\frac{bg^{2}}{32\\pi^{2}}G^{a}_{\\mu\\nu}G^{a\\mu\\nu}\\). We should note here that the variation of the gluon condensate is a well defined observable (in contrast with the gluon condensate itself) because the perturbative (divergent) contribution cancels in eq. (1). The most important consequences of this formula are: a) the variation of the gluon condensate is small numerically, and b) the absolute value of the condensate decreases when the baryon density increases. Such a behavior can be interpreted as due to the suppression of the non-perturbative QCD fluctuations with increasing baryon density. Our ultimate goal here is to understand the behavior of the vacuum energy (gluon condensate) as a function of \\(\\mu\\) for color superconducting (CS) phases [3], [4]. It is clear that the problem in this case is drastically different from the nuclear matter analysis [2] because the system becomes relativistic and the binding energy (\\(\\sim\\Delta\\)) per baryon charge is of order \\(\\Lambda_{QCD}\\), in contrast with \\(\\leq 2\\%\\) of the nucleon mass at nuclear saturation density. The quark-quark interaction also becomes essential in CS phases, such that the small density expansion (valid for dilute noninteracting nuclear matter) used to derive (1) cannot be justified any more.
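The few-per-cent scale quoted above can be checked directly from eq. (1). The sketch below evaluates \\(-m_{N}\\rho_{B}\\) at nuclear saturation density in units of GeV\\({}^{4}\\) and compares it with an assumed vacuum value of the condensate; the adopted vacuum value (of order \\(10^{-2}\\) GeV\\({}^{4}\\)) is a typical phenomenological number and is not taken from the text.

```python
# Rough numerical check of eq. (1): change of the gluon condensate at
# nuclear saturation density, Delta = -m_N * rho_B (chiral limit, leading order).

hbar_c = 0.1973          # GeV*fm, unit conversion factor
m_N = 0.939              # GeV, nucleon mass
rho_0 = 0.16             # fm^-3, nuclear saturation density

rho_0_gev3 = rho_0 * hbar_c**3           # saturation density in GeV^3
delta_condensate = -m_N * rho_0_gev3     # GeV^4

# Assumed vacuum value of <b g^2/(32 pi^2) G^2>; typical phenomenological
# estimates are of order 10^-2 GeV^4 (this number is an assumption, not
# taken from the text).
vacuum_condensate = 0.02                 # GeV^4

print(f"Delta<G^2> at rho_0 : {delta_condensate:.2e} GeV^4")
print(f"relative change     : {abs(delta_condensate) / vacuum_condensate:.1%}")
```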
Unfortunately, we cannot answer the questions about the \\(\\mu\\) dependence of the vacuum energy in real \\(QCD(N_{c}=3)\\). However, these questions can be formulated and answered in the simpler model \\(QCD(N_{c}=2)\\) due to the extended symmetry of this model. Some lessons for the real-life case \\(N_{c}=3\\) can be learned from our analysis; see below.

## 2 Gluon Condensate for \\(QCD(N_{c}=2)\\)

We start from the equation for the conformal anomaly, \\[\\Theta^{\\mu}_{\\mu}=-\\frac{bg^{2}}{32\\pi^{2}}G^{a}_{\\mu\\nu}G^{a\\mu\\nu}+\\bar{\\psi}M\\psi,\\ \\ \\ \\ \\ \\ b=\\frac{11}{3}N_{c}-\\frac{2}{3}N_{f}=6. \\tag{2}\\] For massless quarks and in the absence of a chemical potential, eq. (2) implies that the QCD vacuum carries a negative non-perturbative vacuum energy due to the gluon condensate. Now, we can use the effective Lagrangian [5] \\[{\\cal L}=\\frac{F^{2}}{2}Tr\\,\\nabla_{\\nu}\\Sigma\\nabla_{\\nu}\\Sigma^{\\dagger},\\ \\ \\nabla_{0}\\Sigma=\\partial_{0}\\Sigma-\\mu\\left[B\\Sigma+\\Sigma B^{T}\\right] \\tag{3}\\] to calculate the change in the trace of the energy-momentum tensor \\(\\langle\\Theta^{\\mu}_{\\mu}\\rangle\\) due to a finite chemical potential \\(\\mu\\ll\\Lambda_{QCD}\\). The energy density \\(\\epsilon\\) and pressure \\(p\\) are obtained from the free energy density \\({\\cal F}\\), \\[\\epsilon={\\cal F}+\\mu n_{B},\\ \\ \\ p=-{\\cal F}. \\tag{4}\\] Therefore, the conformal anomaly implies \\[\\langle\\frac{bg^{2}}{32\\pi^{2}}G^{a}_{\\mu\\nu}G^{\\mu\\nu a}\\rangle_{\\mu,m}-\\langle\\frac{bg^{2}}{32\\pi^{2}}G^{a}_{\\mu\\nu}G^{\\mu\\nu a}\\rangle_{0}=-4\\left({\\cal F}(\\mu,m)-{\\cal F}_{0}\\right)-\\mu n_{B}(\\mu,m)+\\langle\\bar{\\psi}M\\psi\\rangle_{\\mu,m}, \\tag{5}\\] where the subscript 0 on an expectation value means that it is evaluated at \\(\\mu=m=0\\). Now we notice that all quantities on the right hand side are known from previous calculations [5]; therefore the variation of \\(\\langle G_{\\mu\\nu}^{2}\\rangle\\) with \\(\\mu\\) can be explicitly calculated. As expected, \\(\\langle G_{\\mu\\nu}^{2}\\rangle\\) does not depend on \\(\\mu\\) in the normal phase \\(\\mu<m_{\\pi}\\), while in the superfluid phase \\(\\mu>m_{\\pi}\\) this dependence can be represented as follows [1], \\[\\langle\\frac{bg^{2}}{32\\pi^{2}}G_{\\mu\\nu}^{a}G^{\\mu\\nu a}\\rangle_{\\mu,m}-\\langle\\frac{bg^{2}}{32\\pi^{2}}G_{\\mu\\nu}^{a}G^{\\mu\\nu a}\\rangle_{\\mu=0,m}=4F^{2}(\\mu^{2}-m_{\\pi}^{2})\\left(1-2\\frac{m_{\\pi}^{2}}{\\mu^{2}}\\right). \\tag{6}\\] The behavior of the condensate is quite interesting: it decreases with \\(\\mu\\) for \\(m_{\\pi}<\\mu<2^{1/4}m_{\\pi}\\) and increases afterwards. The qualitative difference in the behaviour of the gluon condensate for \\(\\mu\\approx m_{\\pi}\\) and for \\(m_{\\pi}\\ll\\mu\\ll\\Lambda_{QCD}\\) can be explained as follows. Right after the normal to superfluid phase transition occurs, the baryon density \\(n_{B}\\) is small and our system can be understood as a weakly interacting gas of diquarks. The pressure of such a gas is negligible compared to the energy density, which comes mostly from the diquark rest mass. Thus, \\(\\langle\\Theta_{\\mu}^{\\mu}\\rangle\\) increases with \\(n_{B}\\), in precise correspondence with the "dilute" nuclear matter case (1). On the other hand, for \\(\\mu\\gg m_{\\pi}\\), the energy density is approximately equal to the pressure, and both are mostly due to self-interactions of the diquark condensate.
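As a quick check of eq. (6), the turning point quoted above follows from setting the derivative of \\((\\mu^{2}-m_{\\pi}^{2})(1-2m_{\\pi}^{2}/\\mu^{2})\\) with respect to \\(\\mu\\) to zero, which gives \\(\\mu=2^{1/4}m_{\\pi}\\). A short numerical verification (with an arbitrary illustrative value of \\(m_{\\pi}\\); \\(F\\) only sets the overall scale) is:

```python
import numpy as np

m_pi = 0.1  # illustrative pion mass in arbitrary energy units

def condensate_shift(mu, F=1.0, m_pi=m_pi):
    """Right-hand side of eq. (6)."""
    return 4 * F**2 * (mu**2 - m_pi**2) * (1 - 2 * m_pi**2 / mu**2)

mu = np.linspace(1.001 * m_pi, 3 * m_pi, 20001)
shift = condensate_shift(mu)

mu_min = mu[np.argmin(shift)]
print(f"numerical minimum at mu/m_pi = {mu_min / m_pi:.4f}")
print(f"analytic prediction 2**0.25  = {2**0.25:.4f}")
```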
Luckily, the effective Chiral Lagrangian (3) gives us control over these self-interactions as long as \\(\\mu\\ll\\Lambda_{QCD}\\). The main lesson to be learned for real \\(QCD(N_{c}=3)\\) from the exact results discussed above is as follows. The transition to the CS phases is expected to occur\\({}^{1}\\) at \\(\\mu_{c}\\simeq 2.3\\cdot\\Lambda_{QCD}\\) [6],[7], in contrast with \\(\\mu_{c}=m_{\\pi}\\) for the transition to the superfluid phase in the \\(N_{c}=2\\) case. The binding energy, the gap and the quasi-particle masses are also expected to be of the same order of magnitude, \\(\\sim\\mu_{c}\\). This is in drastic contrast with the nuclear matter case, where the binding energy is very small. At the same time, \\(QCD(N_{c}=2)\\) represents a nice model where the binding energy, the gap and the masses of the quasi-particles carrying the baryon charge are of the same order of magnitude. This model explicitly shows that the gluon condensate can experience extremely nontrivial behavior as a function of \\(\\mu\\). We expect a similar behavior for \\(QCD(N_{c}=3)\\) in the CS phases, with the function of \\(m_{\\pi}^{2}/\\mu^{2}\\) in (6) replaced by some function of \\(\\mu_{c}^{2}/\\mu^{2}\\) for \\(N_{c}=3\\). We should note in conclusion that the recent lattice calculations [8],[9] are consistent with our prediction (6).

Footnote 1: \\(\\mu\\) here for \\(QCD(N_{c}=3)\\) is normalized as the quark (rather than the baryon) chemical potential.

## References

* (1) M. A. Metlitski and A. R. Zhitnitsky, Nucl. Phys. B **731**, 309 (2005); Phys. Lett. B **633**, 721 (2006). * (2) T. D. Cohen, R. J. Furnstahl and D. K. Griegel, Phys. Rev. C **45**, 1881 (1992). * (3) M. Alford, K. Rajagopal, and F. Wilczek, Phys. Lett. **B 422** (1998) 247. * (4) R. Rapp, T. Schafer, E. V. Shuryak, and M. Velkovsky, Phys. Rev. Lett. **81** (1998) 53. * (5) J.B. Kogut et al., Nucl. Phys. **B 582** (2000) 477-513. * (6) D. Toublan and A. R. Zhitnitsky, Phys. Rev. D **73**, 034009 (2006). * (7) A. R. Zhitnitsky, arXiv:hep-ph/0601057, Proceedings, Light-Cone QCD, Australia, 7-15 Jul 2005. * (8) S. Hands, S. Kim and J. I. Skullerud, arXiv:hep-lat/0604004. * (9) B. Alles, M. D'Elia and M. P. Lombardo, Nucl. Phys. B **752**, 124 (2006).
The Equation of State (EoS) plays a crucial role in all studies of neutron star properties. Still, a microscopic understanding of the EoS remains largely an unresolved problem. We use 2-color QCD as a model to study the dependence of the vacuum energy (the gluon condensate in QCD) as a function of the chemical potential \\(\\mu\\ll\\Lambda_{QCD}\\), where we find a very strong and unexpected dependence on \\(\\mu\\). We present arguments suggesting that similar behavior may occur in 3-color QCD in the color superconducting phases. Such a study may be of importance for the analysis of the EoS when phenomenologically relevant parameters (within models such as the MIT Bag model or the NJL model) are fixed at zero density while the region of study lies at much higher densities not available for terrestrial tests.
Provide a brief summary of the text.
arxiv-format/0701111v1.md
# Street centrality vs. commerce and service locations in cities: a Kernel Density Correlation case study in Bologna, Italy.

Emanuele Strano, Human Space Lab, DIAP, Politecnico di Milano, Via Bonardi 3, 20133 Milano, Italy. Alessio Cardillo, Dipartimento di Fisica e Astronomia, Università di Catania, and INFN Sezione di Catania, Via S. Sofia 64, 95123 Catania, Italy. Valentino Iacoviello, Human Space Lab, DIAP, Politecnico di Milano, Via Bonardi 3, 20133 Milano, Italy. Vito Latora. Roberto Messora, Scuola Superiore di Catania, Via San Nullo, 5/i, 95123 Catania, Italy. Sergio Porta, Human Space Lab, DIAP, Politecnico di Milano, Via Bonardi 3, 20133 Milano, Italy. Salvatore Scellato, Human Space Lab, DIAP, Politecnico di Milano, Via Bonardi 3, 20133 Milano, Italy.

## 1 Introduction: understanding the grocer's mantra

_"There are three factors that count for the success of a grocery: location, location and location"_. So the grocer's mantra goes, almost an axiom for all those who are in business with anything that relies on some sort of exchange with the customer/public. If we may easily agree on the principle that location matters, but are not done with just doing well with our business, the next question is: why does location matter? What is the property of a good location that makes it that good? Again, the grocer's answer would be loud and clear: centrality. Everyone knows that a place which is central has some special features to offer in many ways to those who live or work in cities: it is more visible, more accessible from the immediate surroundings as well as from far away, it is more popular in terms of people walking around and potential customers, it has a greater probability to develop as an urban landmark and a social catalyst, or to offer first-level functions like theatres or office headquarters as well as a larger diversity of opportunities and goods. That's why central locations are more expensive in terms of real estate values and tend to be socially selective: such special features make them capable of providing a reasonable trade-off to larger investments than in less central spots of the same urban area. From this point of view, centrality is not exactly just a problem for grocers: since the potential of an urban area to sustain community retail and services is a key factor in achieving a number of relevant urban sustainability goals in the sub-centers of the nodal information city of the future (Newman and Kenworthy, 1999) - from social cohesion to self-surveillance, from liveability to local economy vibrancy, from cyclist-pedestrian friendliness to visual landscape diversity - centrality emerges as one of the most powerful determinants in the hands of urban planners and designers in order to understand how an urban area works and eventually where to address policies of renovation and redevelopment. But there is more. If one looks at where a city centre is located, she or he will mostly - if not always - find that it sprouts from the intersection of two main routes, where some special configuration of the terrain or some particular shape of the river system or the waterfront makes that place compulsory to pass through. That's where cities begin.
Then, departing from such central locations, they grow up in time adding here and there buildings and activities, firstly along the main routes, then filling the in-between areas, then adding streets that provide loop routes and points of return, then, as the structure becomes more complex, forming new central streets and places and keeping buildings around them again. It is an evolutionary process that has been driving the formation of our urban fabrics, the heart of human civilization, through most of the seven millenniums of city history until the dawn of modernity and the very beginning of the industrial age. In short, centrality appears to be somehow at the heart of that kind of \"marvellous\" hidden order that supports the formation of \"spontaneous\", organic cities (Jacobs, 1961), which again is an issue of crucial importance in the contemporary debate on the search for more bottom-up, \"natural\" strategies of urban planning beyond the modernistic heritage. ## 2 Mapping centrality in urban networks: Multiple Centrality Assessment Despite the evident relevance of centrality in city life, this issue has been rarely raised to the forefront of urban studies as a comprehensive approach to the subject. Geographers and transport planners have used centrality as a means to understand the location of uses and activities at the regional scale or the level of convenience to reach a place from all others in an urban system (Wilson, 2000): in so doing, they have built their understanding on some hidden assumptions, particularly on a notion of centrality that, putting it shortly, is limited to the following: the more a place is close to all others, the more central. Since the early eighties \"Space Syntax\", a methodology of spatial analysis based on visibility and integration (Hillier and Hanson, 1984; Hillier, 1996), opened to urban designers a whole range of opportunities to develop a deeper understanding of some structural properties of city spaces, but such opportunities have been seldom understood, often perceived as a quantitative threat to the creativity embedded in the art of city design. Basically, the ones who have addressed a specific work on centrality in itself are scientists in fields not related with space, particularly structural sociologists and - more recently - physicists in the \"new sciences\" of complex networks (Boccaletti et al, 2006). Drawing mainly from those streams, we have recently proposed a whole set of procedures and techniques named Multiple Centrality Assessment (MCA) aimed at the spatial analysis of centralities in urban networks constituted by streets as links or \"edges\", and intersections as \"nodes\" (Porta et al, 2006a,b, 2007; Cardillo et al, 2006; Crucitti et al, 2006a,b; Scellato et al, 2006; Scheurer and Porta, 2006). MCA's main characteristics are the utilization of a standard \"primal\" format of street network's representation, the definition of centrality as a multiple concept described by a set of different peer indices, and the anchoring of all measures on a metric computation of spatial distances along the real street footprint (graph's edge). While we forward the reader to quoted bibliography for a deeper discussion of such characteristics, it is important here to highlight that MCA's final results are maps of street networks based on the attribution of a centrality \"weight\" to every segment of the system, which means every street space between one couple of intersections. 
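As an illustration of the kind of measures MCA attaches to a primal street graph, the sketch below computes closeness and betweenness weighted by metric street length, plus a simple straightness index, with networkx on a toy graph. This is not the authors' implementation: MCA attributes centrality values to street segments, whereas for brevity the sketch works on intersections (nodes), and the coordinates and edge lengths are invented.

```python
import math
import networkx as nx

# Toy primal graph: nodes are intersections (with coordinates in metres),
# edges are street segments carrying their metric length.
G = nx.Graph()
coords = {0: (0, 0), 1: (100, 0), 2: (100, 80), 3: (0, 80), 4: (200, 40)}
for n, xy in coords.items():
    G.add_node(n, pos=xy)
for u, v, length in [(0, 1, 100), (1, 2, 80), (2, 3, 100), (3, 0, 80), (1, 4, 110), (2, 4, 110)]:
    G.add_edge(u, v, length=length)

# Closeness and betweenness centralities, using street length as the distance.
closeness = nx.closeness_centrality(G, distance="length")
betweenness = nx.betweenness_centrality(G, weight="length", normalized=True)

# Straightness: average ratio of Euclidean to shortest network distance.
def straightness(G, coords):
    out = {}
    for i in G.nodes:
        spl = nx.single_source_dijkstra_path_length(G, i, weight="length")
        ratios = [math.dist(coords[i], coords[j]) / d for j, d in spl.items() if j != i and d > 0]
        out[i] = sum(ratios) / len(ratios)
    return out

print(closeness, betweenness, straightness(G, coords), sep="\n")
```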
## 3 Correlating densities: Kernel Density Estimators in urban analysis

After several pioneering studies in mathematical statistics (Akaike, 1954; Rosenblatt, 1956; Parzen, 1962), Kernel Density Estimators (KDEs) have been applied to obtain a smooth estimate of a univariate or multivariate probability density from an observed sample of events (Bailey and Gatrell, 1995). Used in many fields that deal with space-related problems, from geography to epidemiology, criminology, demography, ethology, hydrology and urban analysis, KDEs are basically a technique aimed at creating a smooth map of density values in which the density at each location reflects the concentration of points in the surrounding area. In spatial analysis, KDEs are therefore a form of surface modelling that attributes values to any location between observed data points (Silverman, 1986). Such probability techniques of analysis have proven effective in revealing spatial patterns that would not emerge otherwise from the sole distribution of point events, therefore proving supportive of decision making in public policies related to issues anchored in space: in particular, KDEs are often used for comparative studies of the evolution of spatial events over time or for visually correlating the location of spatial events with other background environmental features, like for instance the presence of front doors (background) with the density of burglaries (events, foreground) in a neighbourhood. A weak point in KDE is the subjectivity in the definition of the bandwidth factor \\(h\\), i.e. the radius of the circular region around each point event (or the width of the buffer around linear events) covered by the density surfaces eventually overlaid on the given cell, which deeply affects the smoothness of the interpolation between observed "event" data (Cao et al, 1994), while both theory and practice suggest that the choice among the various kernel functions does not significantly affect the statistical results (Epanechnikov, 1969). KDEs have been mostly used in spatial analysis for revealing patterns of clustering of events belonging to the same category, which means to the same layer in a GIS environment. For instance, crime occurrences (Anselin et al, 2000) or street intersections (Borruso, 2003) in urban environments have been investigated by deepening their spatial clustering by means of density estimations. On the other hand, the correlation between phenomena belonging to different categories has mostly been investigated on the condition that they were structurally cross-referenced, sharing a field in their respective databases: that was the case of Space Syntax studies on the correlation between street integration, a property of the spatial configuration of urban streets, and the most diverse socio-economic and environmental indicators like pedestrian flows, crime events, retail commerce vitality and pollution (Penn and Turner, 2003). However, a unique potential of KDEs when performed in a Geographic Information System (GIS) environment for urban analysis purposes lies in the opportunity they give to overcome the need for cross-referenced fields for the correlation analysis of \\(n\\) different phenomena by simply correlating their proximity in space.
Most municipalities in the advanced world have in fact developed a massive amount of information, ranging from environmental to economic, demographic or sociological data, which do not share anything but the same geo-referential system: they are, in fact, represented in the same space. Another unique feature of KDEs in the correlation analysis of space-related variables is the more realistic interpretation of the graduality of spatial influence that they provide, due to their smoothing behaviour: in fact, that seems to capture the experiential notion that, for example, the "effect" of a central street space is not just limited to the curtain facades but instead "spreads" to adjacent spaces and streets to a certain extent, with a decrease in intensity which is a function of distance; or, again, that the "prominence" of one crossing is higher than that of any single converging street: coherently, cells in the proximity of a crossing are assigned a kernel density value which multiplies that of the converging streets because of the overlaying of the correspondent density surfaces. This feature, which turns out to be essential in the analysis of urban streets, implies the direct implementation of KDE on linear rather than just punctual events: in so doing, the property of a street (centrality in the present research) acts like a "weight" in the computation of that property's density in space.

## 4 Kernel Density in Bologna: distributions of street centrality and commerce/service activities

Bologna is an important urban centre of less than half a million inhabitants located in the middle of the river Po plain at the convergence of the main (historical) routes that connect Florence and southern Italy with the northern part of the Country. It is a wealthy city, a relevant political laboratory, the main national transportation hub and the location of the "Alma Mater", the most ancient university in the world. The aim of the present study on the city of Bologna is to shed light on the correlation between the centrality of streets and the presence of retail commerce and community services. Data were provided by the municipality of Bologna in two separate datasets. The first dataset included all ground floor retail commercial and service activities for the whole urban area, stored in an ESRI point shapefile format. For each of the \\(n\\) activities, we were given the location in two-dimensional geographic space, which will be indicated in the following as \\(X_{i}\\), \\(i=1,2,\\ldots,n\\) (each \\(X_{i}\\) is to be understood here as a vector in a two-dimensional space). For the purpose of this study we then split this dataset into two separate layers, one for retail commercial activities alone (\\(n_{comm}\\)=7,257 points), and one for commercial and service activities altogether (\\(n_{comm-serv}\\)=9,676 points). The second dataset included the "primal" graph of all streets in the same area, stored in an ESRI polyline shapefile format: the dataset counted 7,191 street segments (edges) and 5,448 intersections (nodes). The two datasets were not structurally correlated, but they were coherently geo-referenced such that they resulted perfectly overlayable. The Kernel Density Correlation (KDC) study proceeded by firstly setting a rectangular region \\(R\\) that encompassed the whole extension of both datasets, and secondly dividing the region into 2,771,956 square cells (edge = 10 m).
Density has then been calculated, for each of the cells, with reference to events falling within a bandwidth of 100, 200 and 300 meters from the centre of the cell. In the case of activities, which were represented as points defined in geographic space, a smoothly curved surface was fitted over each point. The surface value was highest at the location of the point and diminished with increasing distance from the point, reaching 0 at the circumference of the circular bandwidth around the point; differently from streets, activities were non-weighted entities and, as such, the volume under the surface equalled 1 in all cases. The density of activity at each cell was then calculated by adding the values of all the kernel surfaces where they overlaid the cell's center, following: \\[\\hat{f}_{h}\\left(x\\right)=\\frac{1}{nh}\\sum\\limits_{i=1}^{n}K\\left(\\frac{x-X_{i}}{h}\\right) \\tag{1}\\] In this formula, \\(K\\) denotes the kernel function and \\(h\\) its bandwidth, \\(x\\) represents the position of the centre of each cell, \\(X_{i}\\) is the position of the _i_-th activity, and \\(n\\) is the total number of activities (\\(n\\) will coincide, in turn, with \\(n_{comm}\\) or with \\(n_{comm-serv}\\)). The kernel function \\(K(y)\\) is defined for the two-dimensional vector \\(y\\) and satisfies the normalization \\(\\int\\limits_{R^{2}}K\\left(y\\right)dy=1\\). Usually, it is a radially symmetric decreasing function of its argument. The most commonly adopted kernel is the exponential function \\(K\\left(y\\right)=\\left(2\\pi\\right)^{-1/2}\\exp\\left(-\\frac{1}{2}y^{2}\\right)\\). In our computation we make use, instead, of the kernel function described in Silverman (1986, p. 76, equation 4.5). In the case of streets a smoothly curved surface was analogously fitted over each graph edge. The surface was defined so that the volume under the surface equalled the product of _the portion of line length_ included in the bandwidth region and the _centrality value_ associated to that edge, on the basis of three different indices of centrality discussed in previous works (Porta et al, 2006a,b; Crucitti et al, 2006a,b). As for the activity case, the density of centrality at each cell was then calculated by adding the values of all the kernel surfaces where they overlay the cell's center. The use of the kernel function for lines is adapted from the same function for point densities quoted above in formula (1) and formula (2). The output of this first stage of the Kernel Density Correlation process consisted of 21 identical raster layers (tab.1), separately representing the kernel density estimation of the three centrality indices (calculated globally and locally by MCA, excluding betweenness centrality, which was calculated only globally) and the two activity categories (commerce, and commerce+services together).

Table 1 (only partially recoverable here): for each of the 21 layers the table lists an order number, the centrality or activity index with its description and MCA distance factor \\(d\\), and the KDE bandwidth in meters; its first rows are 1 - global betweenness centrality, all \\(d\\), 300 m, and 2 - global closeness centrality, all \\(d\\), 300 m.

The choice of the bandwidth in any KDE application is a well known statistical key issue (Williamson et al, 1998; Levine, 2004).
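Formula (1) and the role of the bandwidth \\(h\\) can be made concrete with a few lines of code. Since the Silverman (1986, eq. 4.5) kernel is not reproduced above, the sketch below assumes the quartic kernel commonly associated with that reference; the point coordinates and grid are made up for illustration, and the normalisation follows formula (1) as written, since only relative densities matter for the later correlation analysis.

```python
import numpy as np

def quartic_kernel(u):
    """Radially symmetric quartic kernel on the unit disc (assumed form of
    Silverman 1986, eq. 4.5): K(u) = 3/pi * (1 - |u|^2)^2 for |u| < 1."""
    r2 = np.sum(u * u, axis=-1)
    return np.where(r2 < 1.0, 3.0 / np.pi * (1.0 - r2) ** 2, 0.0)

def kde_grid(points, cell_centres, h):
    """Density at each cell centre from all points, bandwidth h (formula (1))."""
    diff = (cell_centres[:, None, :] - points[None, :, :]) / h
    return quartic_kernel(diff).sum(axis=1) / (len(points) * h)

# Hypothetical activity locations (metres) and a coarse 10 m cell grid.
rng = np.random.default_rng(0)
points = rng.uniform(0, 500, size=(50, 2))
xs, ys = np.meshgrid(np.arange(5, 500, 10), np.arange(5, 500, 10))
cells = np.column_stack([xs.ravel(), ys.ravel()])

density = kde_grid(points, cells, h=300.0)
print(density.shape, float(density.max()))
```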
In this case, all layers have been calculated at three bandwidths (\\(h\\)=100, 200 and 300 meters) in order to test both the smoothness performance of KDE at that urban scale and the substantive relevance of such metric thresholds, which hold a clear significance in urban studies, being related to the concepts of street, block and neighbourhood pedestrian sheds. It is worth noting that areas covered by circles of 300 to 400 meters radius are usually taken as a reference for a good five minutes walk coverage and, as such, define the extension of neighbourhood centres, the building blocks of the "transit-oriented" hierarchical community structure that shapes the sustainable city of the future (Frey, 1999; Calthorpe and Fulton, 2001; Cervero, 2004). In this section the focus will be on the statistical distribution of density of both activities and centralities among all the cells in region \\(R\\), while the study of the centrality-activity correlations, the Kernel Density Correlation (KDC), will be discussed in the next section. In order to mathematically quantify the geographic distributions, such as those depicted in panels (b) and (d) of fig.1, we have computed the number of cells with a given density of activities \\(a\\), and the number of cells with a given density of centrality \\(c\\), respectively as a function of \\(a\\) and \\(c\\). The results are illustrated in fig.2 and fig.3. In fig.2 we report the number of cells with a density of commerce and service activities in the range \\([a,a+\\Delta a]\\), as a function of the density value \\(a\\). We have used a bandwidth \\(h\\)=300 m, and a binning with \\(\\Delta a=3.396\\times 10^{-5}\\). The log-log scale used in the plot indicates that the distribution is extremely heterogeneous, and can be rather well approximated by a power-law behaviour (a straight line in the log-log plot) spanning up to two orders of magnitude.

Figure 2: Distribution of commerce and service activities in Bologna. We report the histogram of the number of cells with a given density of activities. Densities of activities are evaluated through the Kernel Density method of formula 1, with a bandwidth \\(h\\)=300 meters.

This means that, although most of the cells have a small density of activities (of the order of 0.00001-0.0001), there are a few cells with an extremely (even 100 times) larger density of activities. Or, in other words, the activities are distributed in Bologna in such a way that the average activity density is equal to 0.33332\\(\\times\\)10\\({}^{-4}\\), while the standard deviation is equal to 1.4732\\(\\times\\)10\\({}^{-4}\\), which is much larger than the average. Therefore, it appears that even the larger bandwidth does not "flatten" the descriptive potential of the process while, quite on the contrary, it still captures striking differences in the territorial distribution of activities. Such a range of difference is certainly an outcome of the typical concentric shape of the medieval street structure, which favours the concentration of spatial values in a fraction of the regional space; however, we should highlight that such a distribution parallels that exhibited by betweenness centrality (fig.3, panel a). Concentration of spatial centrality as well as accessibility to the functional backbone of community life, therefore, both appear "natural" outcomes of an historical evolution rather than the manifestation of a relatively recent market "distortion".
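The heterogeneity described above can be quantified by histogramming the cell densities in logarithmic bins and fitting a straight line in log-log space. The sketch below does this for a synthetic heavy-tailed sample standing in for the per-cell densities (the real values come from the KDE layers); the fitted slope is only indicative, as the text does not report an exponent.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for the per-cell activity densities (heavy-tailed).
densities = 1e-5 * (1 + rng.pareto(1.5, size=100_000))

# Histogram with logarithmically spaced bins, as appropriate for a log-log plot.
bins = np.logspace(np.log10(densities.min()), np.log10(densities.max()), 30)
counts, edges = np.histogram(densities, bins=bins)
centres = np.sqrt(edges[:-1] * edges[1:])

# Fit a straight line to the non-empty bins in log-log space.
mask = counts > 0
slope, intercept = np.polyfit(np.log10(centres[mask]), np.log10(counts[mask]), 1)
print(f"approximate power-law slope: {slope:.2f}")
```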
Figure 3: Distribution of centralities in Bologna. We report the histogram of the number of cells with a given density of centrality. Densities are evaluated through the Kernel Density method and using a bandwidth h=300 meters. Four measures of centralities from MCA are reported: (a) global betweenness, (b) global straightness, (c) global closeness, (d) local closeness evaluated at a distance of 800 meters.

In fig.3, in fact, we focus on how the centrality indices are distributed. We show only four centrality indices of the 15 calculated by MCA and reported in table 1, namely betweenness (a), straightness (b), closeness evaluated globally (c), and closeness evaluated locally on a scale of 800 meters (d). In all four cases, the bandwidth adopted in computing the kernel density was \\(h\\)=300 meters, and we plotted the number of cells with a centrality density in a range \\([c,c+\\Delta c]\\) as a function of \\(c\\). As for \\(\\Delta c\\), we have used the values 45,310 for global betweenness, 476,902 for the straightness and 1,767 for all the closeness. Betweenness is the only one of the centrality measures that exhibits a heterogeneous density distribution, such as that found for the density of activities. In fact, the curve in panel (a) can be fitted by a power law. The betweenness density in a cell spans values from 10 to larger than 1000, with over 100,000 cells having a density equal to 10, and only a few hundred cells with a density larger than 1000. The other three centralities, namely \\(C^{S}_{\\mbox{\\tiny glob}}\\), \\(C^{C}_{\\mbox{\\tiny glob}}\\) and \\(C^{C}_{\\mbox{\\tiny 800}}\\), are more uniformly distributed and characterized by rapidly decreasing tails. This is confirmed by the fact that average and standard deviation are of the same order of magnitude. In particular, the distribution of \\(C^{C}_{\\mbox{\\tiny glob}}\\) is very well approximated by a decreasing exponential curve, represented as a straight line with negative slope in the lin-log plot of panel (c). Conversely, both \\(C^{S}_{\\mbox{\\tiny glob}}\\) and \\(C^{C}_{\\mbox{\\tiny 800}}\\) are characterized by distributions having two peaks at two different values of centrality density. This means that most of the cells preferentially exhibit a centrality value close to one of such two values. A similar behaviour with the presence of two peaks has also been found for straightness and closeness evaluated locally but on different scales.

## 5 Kernel Density in Bologna: KDC, correlating street centrality and commerce/service activities

The main idea that underpinned our study was that _centrality_ acts as a driving force in the formation and constitution of urban structure, interacting with the inner laws that link street geography with several _key land uses_ like commerce and service activities at the neighbourhood level. The correlation between centrality and such urban activities should therefore be investigated: we then chose to correlate the density of centrality with the density of activities, in so doing overcoming the lack of cross-referenced information and reaching a more accurate interpretation of the smooth dispersion of relationships in cities by spatial distance. The resulting methodology, which we named Kernel Density Correlation (KDC), turns a well established _combination_ of density values on a cell-by-cell basis (Thurstain-Goodwin and Unwin, 2000) into a _correlation_ of the same factors. Therefore, a correlation table between density of centralities and density of activities was produced by means of a dedicated GIS extension.
What we did in this phase was to extract from each layer the values of each cell and to build a correlation table where to each cell (record) are attributed the values of that cell in every layer (field) under comparison (fig.4). We then investigated the table in search of linear and non linear statistical correlations coupling centrality (of the kind of _KD_1_ in fig.4) and activity (of the kind of _KD_2_, _KD_3_, _KD_n_ in fig.4) layers that shared the same bandwidth \\(h\\), which means 30 couples of layers. We have excluded cells that took zero values in both the elements of the couple of layers under scrutiny (the actual universe of cells considered spanned eventually between some 1,500,000 and some 1,800,000 cells). For each couple we have computed the Pearson correlation index. Such index expresses how much two quantities are linearly correlated by giving a number between -1 (most negative correlation) and 1 (most positive); however, the value of the Pearson correlation decreases as the dataset size increases due to statistical fluctuations (Taylor, 1982). The results emerging from the correlation study on the city of Bologna fully confirms the assumption that structural centrality, quantified by the adopted indices, acts as a driving force in the formation and constitution of urban structure, positively influencing the emergence of commerce and services activities at the neighbourhood level. In fact, we found that the first half of the 30 linear correlations hereby investigated between street centrality indices and commerce-service locations (tab.2) takes a Pearson value beyond 0.5, which means, given the large size of correlated datasets, a pretty high positive result. In particular, all the five \\(C^{B}_{\\text{ glob}}\\) correlations as well as all the five \\(C^{C}_{\\text{ glob}}\\) are included in the first fifteen ranked scores, while \\(C^{S}_{\\text{ glob}}\\) is present just twice: that indicates a lower coherence between straightness and activity location when the centrality index is computed at the global level. Global betweenness \\(C^{B}_{\\text{ glob}}\\) emerges by far as the highest statistical determinant of commerce and service locations at all scales, with Pearson scores climbing to values well beyond 0.7. This is not unexpected, since betweenness centrality measures the structural centrality of a place by counting the number of times that such place is traversed by the shortest paths connecting couples of places chosen at random on the urban network. In short, the betweenness of a cell is an interpretation of the \"traffic\" which is present in that cell even if not finding in that cell origin nor destination: it is a property of the space that tells a lot of that \"informal\" economy based on a widespread presence of secondary functions, i.e. those functions that do not have the power to attract people in themselves, but rather take advantage by the presence of people who are there for other purposes: the core of a community life. Hence a high value of betweenness density in a cell often implies a high value of commerce-service density. Figure 4: A schematic example of the translation of all values that one cell (in this case, cell # 234) takes in a number of KDE layers (produced in stage 1 of Kernel Density Correlation process) into a correlation table. 
By means of this procedure all kinds of information that are coherently geo-referenced can be correlated statistically with no need of cross-referenced internal structure between different categories (layers) of data. In fig.5 we have extended this idea by plotting the density of commerce and services as a function of a cell's centrality. The results are obtained as follows: we divide the cells according to their centrality's density, as we did to build the histograms in fig.3; then, for a given bin of centrality's density, let's say _[c,c+\\(\\Delta c\\)]_, we calculate the average density of commerce-activities, where the average is taken over cells with a centrality's density value in the range _[c,c+\\(\\Delta c\\)]_, and we plot it as a function of \\(c\\). The results shown in figure, clearly indicate that a higher value of a cell's density of centrality usually implies a higher average density of activities. The correlation is extremely neat for global betweenness in panel (a) and global closeness in panel (c). Both measures (especially betweenness) are in an almost linear relationship with the presence of activities. Concerning the global straightness and the local closeness of panels (b) and (d), one observes a steeper increases of the curves. This indicates that, for smaller values of centrality, activities are almost independent from centrality, while the positive influence of centrality over activities is recovered for higher values of centrality. Finally, in the inset of panel (a) we show that correlations are maintained also when betweenness density is evaluated with a bandwidth _h=100m_, although we observe larger fluctuations. This is a further confirmation that the bandwidth value _h=300m_ is to be preferred over _h=100m_ and _h=200m_. \\begin{table} \\begin{tabular}{|c|c|c|c|c|} \\hline Rank & \\multicolumn{2}{c|}{Correlated variables} & \\multicolumn{2}{c|}{KDE} & Linear \\\\ \\# & & \\multicolumn{2}{c|}{Bandwidth} & Correlation \\\\ \\cline{3-5} & Centratities & Activities & Meters & Pearson \\\\ & & & index & \\\\ \\hline \\hline 1 & \\(C^{\\mathcal{B}}\\) (Gob) & _Comm+Serv_ & 300 & 0,727 \\\\ \\hline 2 & \\(C^{\\mathcal{B}}\\) (Gob) & _Comm_ & 300 & 0,704 \\\\ \\hline 3 & \\(C^{\\mathcal{B}}\\) (Gob) & _Comm+Serv_ & 200 & 0,673 \\\\ \\hline 4 & \\(C^{\\mathcal{B}}\\) (Gob) & _Comm_ & 200 & 0,653 \\\\ \\hline 5 & \\(C^{\\mathcal{C}}\\) (Gob) & _Comm_ & 300 & 0,641 \\\\ \\hline 6 & \\(C^{\\mathcal{S}}\\) (Gob) & _Comm+Serv_ & 300 & 0,620 \\\\ \\hline 7 & \\(C^{\\mathcal{S}}\\) (Gob) & _Comm+Serv_ & 300 & 0,615 \\\\ \\hline 8 & \\(C^{\\mathcal{C}}\\) (Gob) & _Comm+Serv_ & 300 & 0,608 \\\\ \\hline 9 & \\(C^{\\mathcal{C}}\\) (Gob) & _Comm+Serv_ & 200 & 0,583 \\\\ \\hline 10 & \\(C^{\\mathcal{B}}\\) (Gob) & _Comm+Serv_ & 100 & 0,567 \\\\ \\hline 11 & \\(C^{\\mathcal{C}}\\) (Gob) & _Comm+Serv_ & 300 & 0,565 \\\\ \\hline 12 & \\(C^{\\mathcal{B}}\\) (Gob) & _Comm_ & 100 & 0,555 \\\\ \\hline 13 & \\(C^{\\mathcal{C}}\\) (Gob) & _Comm_ & 200 & 0,547 \\\\ \\hline 14 & \\(C^{\\mathcal{S}}\\) (Gob) & _Comm+Serv_ & 200 & 0,546 \\\\ \\hline 15 & \\(C^{\\mathcal{C}}\\) (Gob) & _Comm_ & 300 & 0,533 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Linear correlation (Pearson index) between kernel density of street centrality and kernel density of ground floor activities in Bologna: first 15 positions in ranking. 
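The two analyses just described - the cell-by-cell Pearson correlation with double-zero cells excluded, and the average activity density per bin of centrality density plotted in fig. 5 - can be sketched as follows on two flattened raster layers. The arrays below are synthetic placeholders for the actual KDE layers of Bologna.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
# Placeholders for two co-registered KDE rasters (centrality, activities),
# flattened to one value per cell.
centrality = rng.gamma(2.0, 1.0, size=500_000)
activity = np.clip(0.5 * centrality + rng.normal(0, 1.0, size=centrality.size), 0, None)

# Exclude cells that are zero in both layers, as done for the Bologna table.
keep = ~((centrality == 0) & (activity == 0))
r, _ = pearsonr(centrality[keep], activity[keep])
print(f"Pearson r = {r:.3f} over {keep.sum()} cells")

# Average activity density per bin of centrality density (the fig. 5 curves).
bins = np.linspace(centrality.min(), centrality.max(), 40)
idx = np.digitize(centrality[keep], bins)
mean_activity = [activity[keep][idx == k].mean() for k in range(1, len(bins)) if np.any(idx == k)]
print(mean_activity[:5])
```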
## 6 Conclusions

In this paper the correlation between different measures of street centrality and the presence of ground floor commercial and service activities is investigated in the case study of the city of Bologna, northern Italy. Street centrality has been computed by means of an MCA process, then cross-compared with the presence of commerce and service activities by means of a Kernel Density Correlation methodology: in this methodology, centrality and activities have been correlated on the basis of their "inner" density and, in a second step, their mutual proximity in space. A relevant result of this paper is that activities showed a quite significant orientation to aggregate in the proximity of urban areas where central streets also aggregated, with a particularly high correlation to global betweenness and, to a slightly lesser extent, global closeness centralities. This fully confirms the assumption, within the limits of this case study, that street centrality plays a crucial role in shaping the functional asset of Bologna and that MCA, as a tool for mapping street centrality in cities, captures a most fundamental aspect of the urban phenomenon which plays a substantial role in a wide range of urban planning and design issues. Another achievement is that the distributions of both densities of betweenness centrality and activities follow a strong power-law behaviour, which means that such territorial resources are "assigned" to space in a quite heterogeneous way: very few places in the city's structure hold a lot of them, while the vast majority hold almost nothing. Such distributions parallel the distribution of centrality in most self-organized complex systems in nature, technology and society, which furthermore confirms our previous studies on the deep "organic" order that seems to have driven the evolution of our historical cities. To what extent this property is due to the concentric structure of the city, a heritage of its prominently medieval street pattern, is left to further studies.

Figure 5: A graphical plot of the correlation between centrality densities and commerce-service density. The average density of commerce and services is reported as a function of the centrality density: (a) global betweenness, (b) global straightness, (c) global closeness, (d) local closeness are considered. In all the cases the bandwidth for the calculation of the kernel density is set to h=300. Inset in panel (a) shows the correlation between commerce-service density and global betweenness, where now the centrality is evaluated by using a kernel density bandwidth equal to h=100.

_Acknowledgements: Streets and activity datasets were kindly provided, on behalf of the Municipality of Bologna, by dr. Andrea Minghetti, dr. Giovanni Fini and dr. Gabriella Santoro. The authors wish to thank Giuseppe Borruso for his many valuable suggestions during the elaboration of the research._

## References

* Akaike (1954) Akaike H (1954), _An Approximation to the Density Function_, Annals of the Institute of Statistical Mathematics, 6, 127-132. * Anselin et al. (2000) Anselin L, Cohen J, Cook D, Gorr W, Tita G (2000), _Spatial analysis of crime_, in <<Criminal Justice 2000, Measurement and analysis of crime and justice>>, U.S. Department of Justice, Office of Justice Programs, vol. 4, 213-262. * Bailey and Anthony (1995) Bailey, Trevor C. and Anthony C. Gatrell (1995), _Interactive Spatial Data Analysis_, Addison-Wesley Publishers, Edinburgh, UK. * Boccaletti et al.
(2006) Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang D-U (2006), _Complex networks: Structure and Dynamics_, Physics Report, 424, 175-308. * Borruso (2003) Borruso G (2003), _Network Density and the Delimitation of Urban Areas_, Transactions in GIS, 7 2, 177-191. * Calthorpe and Fulton (2001) Calthorpe P, Fulton W (2001), _The regional city: planning for the end of sprawl_, Island Press, Washington DC. * Cervero (2004) Cervero R (2004), _Developing around transit: strategies and solutions that work_, Urban Land Institute, Washington, DC. * Cao et al. (1994) Cao R, Cuevas A, Gonzalez-Manteiga W (1994), _A comparative study of several smoothing methods in density estimation_, Computational Statistics and Data Analysis, 17, 153-176. * Cardillo et al. (2006) Cardillo A, Scellato S, Latora V, Porta S (2006), _Structural properties of planar graphs of urban street patterns_, Physical Review E, Journal of the American Physical Society, 73 6, * Crucitti et al. (2006a) Crucitti P, Latora V, Porta S (2006a), _Centrality measures in spatial networks of urban streets_, Physical Review E, Journal of the American Physical Society, 73 3. * Crucitti et al. (2006b) Crucitti P, Latora V, Porta S. (2006b), _Centrality in networks of urban streets_, Chaos, Quarterly of the American Institute of Physics, 16 1. * Crucitti et al. (2006c)Porta S, Latora V, Strano E, Cardillo A, Scellato S, Iacoviello V, Messora R (January 2007) _Street centrality vs. commerce and service locations in cities: a Kernel Density Correlation case study in Bologna, Italy._ Epanechnikov VA (1969), _Nonparametric estimation of a multivariate probability density_, Theory of probability and its applications, 14, 153-158 Hillier B, Hanson J (1984), _The social logic of space_, Cambridge University Press, Cambridge. Hillier B (1996), _Space is the machine: a configurational theory of architecture_, Cambridge University Press, Cambridge. Jacobs J (1961), _The Death and Life of Great American Cities_, Random House, New York. Newman P, Kenworthy J (1999), _Sustainability and Cities: Overcoming Automobile Dependence_, Island Press, Washington, DC. Parzen E (1962), _On estimation of a probability density function and mode_, Annals of Mathematical Statistics, 33, pp. 1065-1076. Penn A, Turner A (2003), _Space layout affects search efficiency for agents with vision_, in Proceedings, 4th International Space Syntax Symposium London Porta S, Crucitti P, Latora V (2006a), _The network analysis of urban streets: a dual approach_, Physica A, Statistical mechanics and its applications, 369 2. Porta S, Crucitti P, Latora V. (2006b), _The network analysis of urban streets: a primal approach_, Environment and Planning B: planning and design, 33 5. Porta S, Crucitti P, Latora V. (2007), _Multiple Centrality Assessment in Parma: a network analysis of paths and open spaces_, Urban Design International, forthcoming. Rosenblatt F (1956), _Remarks on some nonparametric estimates of a density function_, Annals of Mathematical Statistics, 27, 832-837. Scheurer J, Porta S (2006), _Centrality and Connectivity in Public Transport Networks and their Significance for Transport Sustainability in Cities_, paper presented at the World Planning Schools Congress, Mexico DF, 13-16 July 2006. Scellato S, Cardillo A, Latora V, Porta S (2006), _The Backbone of a city_, The European Physical Journal B, 50 1-2. Silverman BW (1986), _Density estimation for statistics and data analysis_, Chapman and Hall, London, UK. 
Taylor JR (1982), An Introduction to Error Analysis, The Study of Uncertainties in Physical Measurements, University Science Books, Mill Valley, CA. Thurstain-Goodwin M, Unwin DJ (2000), _Defining and delimiting the central areas of towns for statistical modelling using continuous surface representations_, Transactions in GIS, 4, 305-317. Wilson GA (2000), _Complex spatial systems: the modelling foundations of urban and regional analysis_, Prentice Hall, Upper Saddle River, NJ. Williamson D, McLafferty S, Goldsmith V, Mollenkopf J, McGuire P (1998), _Smoothing crime incident data: New methods for determining the bandwidth in Kernel estimation_, Working paper presented at the Environmental Systems Research Institute International User Conference, 27-31 July.
In previous research we defined a methodology for mapping centrality in urban networks. Such methodology, named Multiple Centrality Assessment (MCA), makes it possible to ascertain how each street is structurally central in a city according to several different notions of centrality, as well as different scales of \"being central\". In this study we investigate the case of Bologna, northern Italy, about how much higher street centrality statistically \"determines\" a higher presence of activities (shops and services). Our work develops a methodology, based on a kernel density evaluation, that enhances standard tools available in Geographic Information System (GIS) environment in order to support: 1) the study of how centrality and activities are distributed; 2) linear and non-linear statistical correlation analysis between centrality and activities, hereby named Kernel Density Correlation (KDC). Results offer evidence-based foundations that a strong correlation exists between centrality of streets, especially betweenness centrality, and the location of shops and services at the neighbourhood scale. This issue is at the heart of the current debate in urban planning and design towards the making of more sustainable urban communities for the future. Our results also support the \"predictive\" capability of the MCA model as a tool for sustainable urban design.
Give a concise overview of the text below.
arxiv-format/0701162v1.md
Five year prediction of Sea Surface Temperature in the Tropical Atlantic: a comparison of simple statistical methods Thomas Laepple (AWI) Stephen Jewson (RMS)1 Jonathan Meagher (NOAA) Adam O'Shay (RMS) Jeremy Penzer (LSE) Footnote 1: _Correspondence email_: [email protected] ## 1 Introduction The number of hurricanes occurring in the Atlantic Ocean basin has increased in recent years, and this has led to considerable interest in trying to predict future levels of hurricane activity. One sector of society that is particularly interested in the number of hurricanes that may occur in the future is the insurance industry, which pays out large amounts of money when severe hurricanes make landfall in the US. The timescales over which this industry is most interested in forecasts of hurricane activity are, roughly speaking, a zero-to-two year timescale, for underwriters to set appropriate insurance rates, and a zero-to-five year timescale, to allow financial planners to ensure that their business has sufficient capital to withstand potential losses. Motivated by this, we are in the process of building a set of models for the prediction of future hurricane numbers over these timescales. The models in our set are based on different methodologies and assumptions, in an attempt to understand how different methodologies and assumptions can impact the ultimate predictions. Within the set, one subset of methods is based on the idea of first predicting sea surface temperatures (SSTs), and then predicting hurricane numbers as a function of the predicted SSTs. The rationale for this approach is that there is a clear correlation between SST and hurricane numbers, such that greater numbers of hurricanes occur in years with warmer SSTs. How, then, should we predict SSTs in order to make hurricane number predictions on this basis? Meagher and Jewson (2006) compared three simple statistical methods for the _one-year_ forecasting of tropical Atlantic SST. Their results show that the relative skill levels of the forecasts produced by the different methods they consider is determined by a trade-off between bias and variance. Bias can be reduced by using a two parameter trend prediction model, but a one parameter model that ignores the trend has lower variance and ultimately gives better predictions when skill is measured using mean square error. How are these results likely to change as we move from considering one-year forecasts to considering five-year forecasts? For five year forecasts both bias and variance are likely to increase, but not necessarily in the same way, and as a result which model performs best might be expected to change compared to the results of Meagher and Jewson (2006). We therefore extend their study to investigate which methods and parameter sets perform best for five year predictions. We also consider 2 new statistical models, known as 'local level' and 'local linear' models. These models are examples of so-called _structural time-series models_ and are commonly used in Econometrics. We produce SST forecasts using these 2 additional methods, and compare the forecasts with those from our original set of 3 methods. Data As in Meagher and Jewson (2006) we use the SST dataset HadISST (Rayner et al., 2002), which contains monthly mean SSTs from 1870 to 2005 on a 1\\({}^{\\circ}\\)x1\\({}^{\\circ}\\) grid. 
As in Meagher and Jewson (2006), we define a Main Development Region SST index as the average of the SSTs in the region (10\\({}^{\\circ}\\)-20\\({}^{\\circ}\\)N, 15\\({}^{\\circ}\\)-70\\({}^{\\circ}\\)W), although we differ from Meagher and Jewson (2006) in that we now use a July to September average rather than a June to November average. This is because July to September SSTs show a slightly higher correlation with annual hurricane numbers than the June to November SSTs. The HadISST data is not updated in real-time, and so to update this dataset to the end of 2006 we use the NOAA Optimal Interpolation SST V2 data which is available from 1981 to the present. The July-September MDR index derived from the NOAA dataset is highly correlated with that derived from HADISST (with linear correlation coefficient of 0.98). ## 3 Method Following Meagher and Jewson (2006) we compare three simple methods for predicting SST using backtesting on the MDR SST timeseries. Meagher and Jewson (2006) tested 1 year forecasts while we now test 1-5 year forecasts. The basic 3 methods we use are: 1. Flat-line (FL): a trailing moving average 2. Linear trend (LT): a linear trend fitted to the data and extrapolated to predict the next five years 3. Damped linear trend (DLT): An 'optimal' combination of the flat-line and linear trend (originally from Jewson and Panzer (2004)). We compare predictions from these methods with predictions from two structural time series prediction methods which are common in Econometrics (see for example Harvey and Shephard (1993)). These models are: * a _local level_ model, that assumes that the historic SST time series is a random walk plus noise. * a _local linear trend_ model, that assumes that the historic SST time series is a random walk plus random walk trend plus noise The local level model has two parameters (the amplitude of the random walk and the amplitude of the noise) and captures the idea that the level of SST changes over time, but with some memory. The local linear trend model has three parameters (the amplitude of the basic random walk, the amplitude of the random walk for the trend, and the amplitude of noise) and additionally captures the idea that SST is influenced by a slowly changing trend. We fit the two structural time-series model to the historical data using maximum likelihood. ## 4 Results ### Backtesting skill To compare the three basic prediction methods, 5 year periods from 1911-1915 to 2001-2005 were predicted (or 'hindcasted') using from 5 to 40 years of prior data. Figure 1 shows the RMS error for all three models versus the number of years of prior data used. The upper left panel shows the score for 5-year forecasts, and the other five panels show the scores for separate forecasts for 1 to 5 years ahead. Considering first the RMSE score for the 5-year forecast, we see that the flat-line model with a window length of 8-10 years performs best. Next best is the damped linear trend model for a window length of around 17 years. Worst of the three models is the linear trend model, which has an optimal window length of 24 years. The damped linear trend and linear trend models do very badly for short window lengths, because of the huge uncertainty in the trend parameters when estimated using so little data. Their performance is then very stable for window lengths longer than 13 years. We now consider the forecasts for the individual years. First we note that the RMSE scores of these forecasts are scarcely lower than the RMSE score for the 5 year forecast. 
This is presumably because the ability of our simple methods to predict SST comes from the representation of long time-scale processes. Our methods do not capture any interannual time-scale phenomena. Second, we note that the optimal window length for the flat-line forecast gradually reduces from 11 years to 7 years as the lead time increases. This is the expected behaviour of the flat-line model when used to model data with a weak trend. To better understand the error behaviour of these prediction methods we decompose the RMSE into the bias and the standard deviation of the error. Figure 2 shows the bias for the three models and figure 3 their standard deviations. The flat line model shows a high bias which increases with the averaging period and the lead time. This is because using a flat-line cannot capture the trends in the data. Figure 3 shows that it is the high variance in the predictions from the linear trend and damped linear trend models, presumably due to high parameter uncertainty, which is responsible for their poor performance when using small windows. The standard deviation of the flat line model error is close to independent of the lead time although we can see that the minimum is shifted to smaller window lengths for longer forecasts. ### Sensitivity of the results to the hindcast period One obvious question concerns the stability of our results with respect to the hindcast data we have used. Understanding this should give us some indication of the robustness of future forecasts. To check this stability we apply a bootstrap technique by calculating the window-length dependent RMSE on bootstrap samples of forecast years. Figure 4 shows the results for the five year forecast based on 1000 bootstrap samples. The left panel shows the frequency in which one method outperforms the other two methods, and the other panels show the distribution of optimal window lengths for the three methods. For a five year forecast the flat line method with a window length of 8 years is the best in almost all cases. In contrast, the optimal window length of the linear methods is strongly dependent on the hindcast years used. However we note that this is not necessarily a problem since the minima in the RMSE score for these methods is very shallow and therefore an imperfect window length does not greatly reduce the forecast quality. Figure 5 shows the same experiment as the previous figure, but for a one year ahead forecast. Here the linear trend models outperform the flat line model in 40% of the bootstrap samples and the optimal window length of the flat line method is around 10 years, confirming the results given in Meagher and Jewson (2006). ### Forecast for 2006-2010 and comparison to structural time series model forecasts We now make forecasts for SST for the period 2006-2010 using the methods described above. Based on the backtesting results we use the flat line model with an 8 year window length, the linear trend model with a 24 year window and the damped linear trend model with a 17 year window. In addition we make forecasts with the local level and local linear structural time series models. Point predictions from these models are the same as predictions from ARIMA(0,1,1) and ARIMA(0,2,2) models, although predicted error distributions are different. Figure 6 shows the forecasts from the 3 simple methods, not including the structural models. As expected the linear trend models predict higher SSTs than the flat-line models. 
Curiously, the damped linear trend model actually predicts higher SSTs and a greater trend slope than the linear trend model. This is because it uses a shorter window length than the linear trend model. This unexpected behaviour slightly calls into question the way the damped linear trend model is constructed, and suggests that there may be other ways that one could construct such an optimal combination that might avoid this slightly awkward result. It also highlights the fact that the optimal window length for the linear trend models is not terribly well determined by the backtesting. Figure 7 also shows the predictions from the structural models. We see that these predictions lie between the predictions from the flat-line and linear trend models. Figure 8 shows the predictions from the 3 simple models, but now including (a) predicted RMSE scores for each model based on the backtesting results, and (b) a prediction for 2006 based on data up to the end of 2005. To estimate the 2006 MDR SST data we predict the July-September SST for 2006 using a linear model with the NOAA Optimal Interpolation SST July-August data as predictor (\\(1981-2005:R^{2}=0.913\\)). This point forecast and 90% confidence intervals are plotted in the figure as a grey box. Overall, our backtesting results are consistent with those of Meagher and Jewson (2006), who tested the same prediction methods for year-ahead forecasting. The flat line method, a trailing moving average, performed best using a window length of 8 years, which is slightly lower than the optimal window length for year-ahead forecasts. Next best was the damped linear trend method with window lengths around 17 years. The linear trend method shows no advantage over flat-line and damped linear trend for any forecast periods or window length. By applying the hindcast experiment on subsets of hindcast data we have shown that for the five year forecast the flat line methods nearly always outperform the linear trend methods whereas for a one year ahead forecast the linear methods are sometimes more accurate. It is worth remarking that the five year ahead forecasts we have described have only around 10% higher uncertainty than the one year ahead forecast. It is likely that the one year ahead forecast can be improved significantly by including additional information such as the ENSO state, but for the five year ahead forecast the simple methods we have presented will be more difficult to beat. We have presented 5 year forecasts from both these simple methods and local level and local linear trend structural time series models. The forecasts from these structural time-series methods lie in-between the flat line and linear trend forecasts and this suggests that one might consider the flat line and linear trend forecasts as lower and upper bounds. One final but important point is that our backtesting study has compared the performance of forecast methods on average over the historical data. Are methods that have worked well over the period covered by the historical data likely to work well in the future? Not necessarily, since we seem to be in a period of rapid warming. Although there are similar periods of rapid warming in the historical data, there are also periods of cooling, and our backtesting results reflect some kind of average performance over the two.
If we believe that the current warming will continue, then the methods that incorporate information about the trend may do better than they have done over the historical period, and the methods that ignore the trend may do worse than they have done over the historical period. ## References * Harvey and Shephard (1993) A Harvey and N Shephard. Structural Time Series Models. In _Handbook of Statistics Volume 11_. Elsevier Science, 1993. * Jewson and Penzer (2004) S Jewson and J Penzer. Optimal year ahead forecasting of temperature in the presence of a linear trend, and the pricing of weather derivatives. _[http://ssrn.com/abstract=563943_](http://ssrn.com/abstract=563943_), 2004. * Meagher and Jewson (2006) J Meagher and S Jewson. Year ahead prediction of hurricane season SST in the tropical Atlantic. _arxiv:physics/0606185_, 2006. * Rayner et al. (2002) N Rayner, D Parker, E Horton, C Folland, L Alexander, D Rowell, E Kent, and A Kaplan. Global analyses of SST, sea ice and night marine air temperature since the late nineteenth century. _Journal of Geophysical Research_, 108:4407, 2002. Figure 2: forecast bias for the flat line model (black solid), linear trend model (red dashed) and damped linear trend model (blue dotted) plotted against the window length; the upper left panel shows the mean bias over all forecast periods, the remaining panels show the bias for specific forecast times. Figure 1: forecast RMSE for the flat line model (black solid), linear trend model (red dashed) and damped linear trend model (blue dotted) plotted against the window length; the upper left panel shows the RMSE over all forecast periods, the remaining panels show the RMSE for specific forecast times. Figure 3: standard deviation of the forecast error for the flat line model (black solid), linear trend model (red dashed) and damped linear trend model (blue dotted) plotted against the window length; the upper left panel shows the SD error calculated over all hindcasts and forecast periods, the remaining panels show the SD error for specific forecast times. Figure 4: sensitivity to the hindcast period for 5yr forecasts as determined by bootstrap. From left to right; percentage of hindcast year samples in which a specific method performed the best, distribution of optimal window lengths for the flat line method, linear trend method and damped linear trend method. Figure 5: sensitivity to the hindcast period for 1yr forecasts as determined by bootstrap. From left to right; percentage of hindcast year samples in which a specific method performed the best, distribution of optimal window lengths for the flat line method, linear trend method and damped linear trend method. Figure 6: Comparison of the 3 simple statistical forecasts for 2006-2010 and their predicted RMSE. Flat-line (solid), linear trend (dashed) and damped linear trend (dotted). Figure 7: As in figure 6, but including predictions from the local level (long dashes) and local linear (dot-dashed) models. Figure 8: As in figure 6, but including (a) error bars showing plus/minus 1 standard deviation and (b) a forecast for 2006, with 90% confidence interval (grey box).
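As an illustration of the hindcasting and scoring described in sections 3 and 4 above, here is a minimal sketch, not the authors' code: the annual series `sst` is synthetic and stands in for the July-September MDR index, and the \"damped\" forecast is only a convex weighting of the flat-line and linear-trend forecasts, standing in for the analytic damping of Jewson and Penzer (2004).

```python
# Sketch only: `sst` is a synthetic stand-in for the annual July-September MDR SST
# index; the "damped" forecast is a simple convex combination, not the analytic
# damping of Jewson and Penzer (2004).
import numpy as np

def forecast(past, horizon=5, method="flat", k=0.5):
    years = np.arange(len(past))
    flat = np.full(horizon, past.mean())                   # trailing moving average
    slope, intercept = np.polyfit(years, past, 1)          # fitted linear trend
    trend = intercept + slope * np.arange(len(past), len(past) + horizon)
    if method == "flat":
        return flat
    if method == "trend":
        return trend
    return (1.0 - k) * flat + k * trend                    # schematic damped trend

def hindcast_errors(sst, window, method, horizon=5, k=0.5):
    errs = []
    for t in range(window, len(sst) - horizon + 1):
        fc = forecast(sst[t - window:t], horizon, method, k)
        errs.append(sst[t:t + horizon] - fc)
    return np.asarray(errs)

def scores(errs):
    bias = errs.mean()
    sd = errs.std(ddof=1)
    rmse = np.sqrt(np.mean(errs ** 2))                     # rmse^2 is roughly bias^2 + sd^2
    return bias, sd, rmse

rng = np.random.default_rng(1)
sst = 27.0 + 0.004 * np.arange(136) + 0.3 * rng.standard_normal(136)
for method, window in [("flat", 8), ("damped", 17), ("trend", 24)]:
    print(method, window, scores(hindcast_errors(sst, window, method)))
```

The local level and local linear trend forecasts can be produced with statsmodels' UnobservedComponents interface, assuming that library is available; the specification strings below are its names for these two structural models.

```python
# Sketch only: structural time-series forecasts, assuming statsmodels is available;
# 'local level' and 'local linear trend' are its UnobservedComponents specifications.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
sst = 27.0 + 0.004 * np.arange(136) + 0.3 * rng.standard_normal(136)  # synthetic index
for spec in ("local level", "local linear trend"):
    res = sm.tsa.UnobservedComponents(sst, level=spec).fit(disp=False)
    print(spec, res.forecast(steps=5))
```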
We are developing schemes that predict future hurricane numbers by first predicting future sea surface temperatures (SSTs), and then applying the observed statistical relationship between SST and hurricane numbers. As part of this overall goal, in this study we compare the historical performance of three simple statistical methods for making five-year SST forecasts. We also present SST forecasts for 2006-2010 using these methods and compare them to forecasts made from two structural time series models.
Provide a brief summary of the text.
arxiv-format/0701165v1.md
Five year ahead prediction of Sea Surface Temperature in the Tropical Atlantic: a comparison between IPCC climate models and simple statistical methods Thomas Laepple (AWI) Stephen Jewson (RMS) Correspondence email: [email protected] ## 1 Introduction The insurance and reinsurance industries use predictions of annual numbers of landfalling hurricanes to help them set the rates on the contracts they sell, and to allocate sufficient capital to their businesses. This creates a need to produce better predictions of future hurricane numbers, especially over the 0-5 year time-scales of interest to these industries. We have been looking into a number of ways of making such predictions, based on several different approaches such as time-series analysis historical landfalling hurricane data, time-series analysis of basin hurricane number data, and statistical predictions of sea-surface temperature (SST). In the latter approach, time-series methods are used to predict future SSTs based on historical SSTs, and the predictions of SST are then converted to predictions of landfalling hurricane numbers using statistical relations. One obvious question arises: might it not be better to use numerical models of climate, rather than statistical methods, to predict future SSTs? We feel that the answer to this question is not at all obvious, and the purpose of this article is to perform an initial comparison of numerical model predictions with our simple statistical methods. Our approach is to take simulations of the 20th century climate from a set of integrations of state-of-the-art climate models that were prepared as part of a submission to the Intergovernmental Panel for Climate Change (IPCC: see www.ipcc.ch), and compare the predictions from these models with those from our own simple statistical methods. ## 2 Methods ### Data We are trying to predict SSTs in the Main Development Region (MDR) for hurricanes, defined as 10\\({}^{o}\\)-20\\({}^{o}\\)N, 15\\({}^{o}\\)-70\\({}^{o}\\)W. From a statistical analysis of the correlation of SSTs in different months with observed hurricane numbers, we choose an index based on the average SST in this region for the period July to September. This index has one value per year, and extends from 1860 to 2005, although we only use values from 1900 to 2000. We derive values for this index from the HADISST data set (Rayner et al., 2002). ### Statistical models The statistical model we use for this comparison is taken from a comparison of the ability of simple statistical models to predict this MDR SST index, described in Laepple et al. (2006). The winning model in this comparison was the '8 year flat-line', which predicts future SSTs using a simple average of SSTs from the last 8 years. For a 5-year forecast this model beats models with longer and shorter averaging windows, and beats models that attempt to model any trends in the data. Why does this model do so well? There is considerable interannual, decadal and multidecadal time-scale variability in the SST index. Since we are trying to predict SSTs over the next 5 years, and the interannual variability is only predictable over periods of a few months, making a 5 year prediction is mainly about estimating the current level of the decadal and interdecadal variability. The 8 year window presumably works because it does this well. A shorter window would be too influenced by interannual variability during the windowing period, and a longer window would presumably fail to capture the current level of the long-term variability. 
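To illustrate the window-length trade-off described above, the sketch below uses synthetic data only, standing in for the MDR index: a series built from slow, random-walk-like variability plus interannual noise, scored with the flat-line forecast for a range of trailing windows. Short windows chase the interannual noise, long windows lag the slow variability, and an intermediate window gives the smallest error.

```python
# Illustration of the window-length trade-off with synthetic data: slow (decadal)
# variability plus unpredictable interannual noise standing in for the MDR SST index.
import numpy as np

rng = np.random.default_rng(2)
n = 150
decadal = np.cumsum(0.05 * rng.standard_normal(n))      # slowly wandering "climate" level
sst = 27.0 + decadal + 0.3 * rng.standard_normal(n)     # plus interannual noise

def flat_line_rmse(series, window, horizon=5):
    errs = [series[t:t + horizon] - series[t - window:t].mean()
            for t in range(window, len(series) - horizon + 1)]
    return np.sqrt(np.mean(np.square(errs)))

for window in (3, 5, 8, 12, 20, 30):
    print(window, round(flat_line_rmse(sst, window), 3))
```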
### Numerical models The numerical models we use for this comparison are coupled-ocean atmosphere models, running from effectively random initial conditions in 1900 throughout the 20th century. The models are forced with estimates of the observed climate forcings in place during the 20th century, the most important of which are changing levels of CO\\({}_{2}\\), sulphate aerosols, solar variability and volcanic activity. The methodology we use to make predictions from the climate model output is as follows: * For each of the 22 coupled climate models simulations of the 20th century available from the IPCC, we take the 101 year time-series of simulated MDR SST index values. * We split the simulations into two sets: those that include volcanic forcing, and those that do not * We create an ensemble mean of the climate model simulations in each set * In order to create a prediction of real MDR SSTs from year \\(n+1\\) onwards, we calculate a bias correction to the ensemble mean climate model predictions based on the 8 years \\(n-8,n-7, ,n-1,n\\) * Applying this bias correction, we then predict future MDR SST values from simulated values for the years \\(n+1\\) onwards. We calibrate using 8 years of data to make the comparison with the statistical model as clean as possible. As a result of this calibration, if the climate model predicted constant values it would give exactly the same prediction as the statistical model. It will give better predictions if the fluctuations produced by the model are, on average, realistic. There are two reasons why this methodology might lead to skillful predictions. First, the bias correction sets the level of any internal variability in the predictions. For instance, if there were a multidecadal cycle in observed MDR SSTs, the bias correction will put the predictions at the right level within that cycle. This is essentially the same source of predictive skill as is exploited by the statistical model. Note, however, that the climate model predictions would not be expected to continue the development of such a cycle into the future in a realistic way, since they start from random initial conditions, and know nothing about the phase of any cycles of internal climate variability. Second, to the extent that there is a part of climate variability that is driven by variations in the external forcings, then the models might be able to capture that. Most obviously, if the rise in CO\\({}_{2}\\) during the 20th century drives any changes in MDR SSTs, then the models could in principle reproduce such a change. By using simulations of past climate, driven by after-the-fact estimates of the forcings, we are effectively assuming that the forcings driving these models are perfectly predictable, which gives these models a large unfair and unrealistic advantage over the statistical models. The least predictable of the forcings is the volcanic forcing, which is in reality totally unpredictable. This is why we split the climate models into two sets. Comparing the statistical model with the set that doesn't include volcanoes gives a much fairer comparison, since the remaining forcing parameters are _reasonably_ predictable. It still, however, gives a slightly unfair advantage to the climate models. The climate model simulations of MDR SST from the runs with volcanic forcing are shown in the top panel of figure 1, along with the observed MDR SST (solid black line). The climate model simulations of MDR SST from the runs without volcanic forcing are shown in the lower panel. 
Figure 2 shows the same data, but with the range of results from individual climate models shown as the mean plus and minus one standard deviation. Figure 3 shows the same data again, but smoothed with a 3 year running mean to emphasize longer timescales. Figures 4 and 5 show the forecast errors for the climate models versus the length of the calibration period. The optimal calibration period is longest for the shortest lead times. The 8 year calibration period used for our comparison is apparently optimal for forecasts with lead times of around 5 years, and seems to be a good overall choice. Based on the results in figures 4 and 5 one could consider varying the length of the calibration window for the climate model according to the lead-time being predicted. One would also then have to vary the length of the window in the statistical method in a similar way, to keep the comparison fair. We, however, consider this unnecessarily complex at this point. ## 3 Results The results of our comparison between the predictive skill of our simple statistical method and the climate models are given in figure 6. The solid black line shows the RMSE of the forecasts from our statistical model versus lead time. The error in these forecasts gradually increases with lead time, although the error for 30 year predictions is only around 30% larger than that for 1 year predictions. The extra skill at one year is presumably because the model is capturing some of the decadal and interdecadal time-scale variability in the time-series, including any long-term trends. The error at one year is presumably dominated by the interannual variability in the climate, which is not predicted by this model. The dashed red line then shows the RMSE of the forecasts derived from the climate models that include a perfect prediction of future volcanic activity. These models do very slightly better than the statistical models at all lead times: the errors are around 5% smaller. The dotted blue line shows the RMSE of the forecasts derived from the climate models that do not include volcanic activity, but do include a perfect prediction of the other atmospheric forcings. In practice, it is not possible to predict any of the atmospheric forcings perfectly, and these RMSE values should thus be considered as artificially too low, especially at longer lead-times. We see that the climate models do very slightly better than the statistical model up to around 5 years, and are worse than the statistical model beyond 5 years. Figure 7 shows the same results, but now with error bars. We see that at short lead-times the differences between the different models are not significant. What can we understand from these results, and in particular the result that the climate models without volcanic forcing don't perform materially better than the statistical model? The most obvious interpretation is that the non-volcanic climate model simulations contain no information about the direction that climate is moving. Their skill comes purely from the bias correction, that sets them at the correct current level for climate. The fact they do progressively _worse_ than the statistical model as the lead time increases is presumably because of gradual drift in the models. Why do the climate models do so badly?
The two extreme-case explanations are (a) that the models are perfect representations of the physics of the real climate, but that climate variability is driven by internal climate variability, rather than externally forced variability, and (b) that the models are very poor representations of the physics of the real climate, and even if there is some part of climate variability that is driven by external forcings, the models fail to capture it. Reality is presumably a combination of these two. The volcanically forced models, on the other hand, do have some information about which way the climate moves away from the 8 year baseline. It is clear that these models do have some (albeit retrospective) skill in capturing the climate response to volcanoes. ## 4 Conclusions We are developing methods to predict hurricane numbers over the next 5 years. One class of methods we are developing is based on the idea of predicting main development region SSTs over this time period, and then converting those predictions to predictions of hurricane numbers. Hitherto we have used simple statistical methods to make our SST predictions. In this article, however, we have considered the possibility of using climate models instead. We have compared predictions made from an ensemble of state-of-the-art climate models with our statistical predictions. Climate models that include a perfect prediction of future volcanic activity are the best predictions of those that we consider. Climate models that ignore volcanic activity, but include a perfect prediction of other climate forcings, do very slightly better than our statistical models up to a lead time of 5 years, but do increasing less well thereafter. Given the unfair advantage that using perfect predictions of future atmospheric forcing parameters confers on the climate models, the tiny margin by which they beat the statistical models, and the vast complexity of using these models versus using the statistical models, we conclude that the statistical models are preferable to the climate models over our time period of interest. We have thus answered our initial question. Should we use IPCC climate model output to feed into our predictions of future hurricane numbers, rather than the simple statistical methods we currently use? The answer is no. What can we say about (a) whether the climate models might one day be preferable to the statistical models and (b) what this tells us about predictions of climate in the 21st century? Wrt (a), the one obvious way that the climate model predictions _might_ be improved would be to include realistic, rather than random, initial conditions. If there really is a component of unforced internal climate variability that is predictable on decadal and interdecadal time-scales, and the models can simulate it, then their errors might be reduced. However, whether predictable internal climate variability exists on these long-timescales is currently unknown, and climate scientists have different opinions on this subject. In the absence of such signals, at short lead-times both the statistical models and the climate models are probably hitting the limits of predictability defined by the levels of interannual variability in the climate, that is unpredictable over the time-scales of interest. 
At long-time scales it seems likely that the climate models could be significantly improved, by the gradual eradication of the slow drift that is presumably driving the more rapid increase in the climate model errors versus the errors in the statistical model. Wrt (b), should we extrapolate these results to tell us about the likely skill of future climate predictions? This is rather hard to answer, since it depends, in part, on what the climate does in the future. But, _prima facie_, the suggestion from these results is that statistical models will continue to do as well or better than climate models in the future. However, what if future climate variability is dominated by single externally-forced upward trend, as many climate models suggest, rather than by interdecadal variability, as it has been in the past century? Will the climate models then do any better? Perhaps. Although at that point statistical models that capture trends would improve as well, and the comparison between statistical models and climate models would have to be repeated. ## References * Laepple et al. (2006) T Laepple, S Jewson, J Meagher, A O'Shyay, and J Penzer. Five-year ahead prediction of Sea Surface Temperature in the Tropical Atlantic: a comparison of simple statistical methods. _arXiv:physics/0701162_, 2006. * Rayner et al. (2002) N Rayner, D Parker, E Horton, C Folland, L Alexander, D Rowell, E Kent, and A Kaplan. Global analyses of SST, sea ice and night marine air temperature since the late nineteenth century. _Journal of Geophysical Research_, 108:4407, 2002. Figure 1: Upper panel: MDR SST simulations from the ‘with volcanic forcing’ IPCC climate model runs we use in our comparison, along with observed MDR SSTs (solid line). Lower panel: the same, but for ‘without volcanic forcing’ runs. Figure 2: As figure 1, but with the climate model simulations summarised as a mean and plus and minus one standard deviation. Figure 3: As figure 2, but smoothed with a 3-year running mean. Figure 4: Predictive performance of the ensemble mean of the ‘without volcanic forcing’ climate model runs, versus length of the calibration window. Figure 5: As figure 4, but for the ‘with volcanic forcing’ climate model runs. Figure 6: Predictive performance of the statistical model (solid black line), climate model ensemble mean with volcanic forcing (dotted blue line) and climate model ensemble mean without volcanic forcing (dashed red line). The horizontal lines show the performance of the climate model predictions when calibrated using the entire data-set (included for reference only since this is not a proper out-of-sample calibration method). Figure 7: As figure 6, but with error bars on each prediction.
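As an illustration of the bias-correction methodology of section 2, here is a minimal sketch, not the authors' code: `obs` and `model_mean` are synthetic stand-ins for the observed MDR SST index and the ensemble-mean simulation, and the 8-year calibration window mirrors the one used in the comparison.

```python
# Sketch only: `obs` and `model_mean` are synthetic stand-ins for the observed MDR SST
# index and the ensemble-mean simulation of it (same years, annual values).
import numpy as np

def bias_corrected_forecast(obs, model_mean, origin, horizon=5, calib=8):
    """Forecast years origin+1..origin+horizon (0-based indices into the series)."""
    window = slice(origin - calib + 1, origin + 1)
    bias = obs[window].mean() - model_mean[window].mean()
    return model_mean[origin + 1:origin + 1 + horizon] + bias

rng = np.random.default_rng(3)
years = 101
truth = 27.0 + np.cumsum(0.03 * rng.standard_normal(years))   # "real" slow variability
obs = truth + 0.2 * rng.standard_normal(years)
model_mean = 26.5 + 0.6 * (truth - truth.mean()) + 0.1 * rng.standard_normal(years)

errs = []
for origin in range(20, years - 5):
    fc = bias_corrected_forecast(obs, model_mean, origin)
    errs.append(obs[origin + 1:origin + 6] - fc)
print("RMSE:", np.sqrt(np.mean(np.square(errs))))
```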
There is a clear positive correlation between boreal summer tropical Atlantic sea-surface temperature and annual hurricane numbers. This motivates the idea of trying to predict the sea-surface temperature in order to be able to predict future hurricane activity. In previous work we have used simple statistical methods to make 5 year predictions of tropical Atlantic sea surface temperatures for this purpose. We now compare these statistical SST predictions with SST predictions made by an ensemble mean of IPCC climate models.
Provide a brief summary of the text.
arxiv-format/0701170v2.md
# Predicting basin and landfalling hurricane numbers from sea surface temperature Stephen Jewson (RMS)1 Roman Binter (LSE) Shree Khare (RMS) Kechi Nzerem (RMS) Adam O'Shay (RMS) Footnote 1: _Correspondence email:_ [email protected] ## 1 Introduction We are interested in developing practical methods for the prediction of the distribution of the number of hurricanes that might make landfall in the US over the next 5 years. We are investigating a number of ways that one might make such a prediction, such as methods based on a change point analysis of the historical hurricane record (see Binter et al. (2006)), and methods based on predictions of sea surface temperatures (SST). This article contributes to the development of one particular 3 step SST method that works as follows: 1. We predict the distribution of possible SSTs in the main development region (MDR) over the next 5 years 2. We predict the distribution of the number of hurricanes in the Atlantic, given the SST forecast from step 1, using a model for the relationship between MDR SST and Atlantic hurricane numbers 3. We predict the distribution of the number of hurricanes making US landfall, given the prediction for the number in the Atlantic from step 2, and a model for the relationship between Atlantic hurricane numbers and numbers of hurricanes making landfall in the US The first step, how to predict MDR SST, has been considered in Meagher and Jewson (2006) and Laepple et al. (2007). We now consider steps 2 and 3, and make some predictions. That there is a non-trivial relationship between SST and the number of Atlantic basin hurricanes is both physically intuitive and is supported by a number of studies in the meteorological literature. For instance, Peixoto and Oort (1992) discuss how long-term variability in hurricane activity is ultimately driven by the ocean, with its large thermal and mechanical inertia. Also, ocean SSTs play a direct role in providing energy to developing tropical cyclones (Landsea et al., 1999a; Saunders and Harris, 1997), and higher SSTs decrease the stability of the atmosphere, making tropical cyclones more resistant to windshear (Demaria, 1996). Much of this has been known for a long time: Gray (1968) discusses how warm SSTs promote tropical cyclone development and enhancement. The statistical relationship between Atlantic hurricane activity and SSTs has been studied by a number of authors (e.g. Shapiro (1982); Shapiro and Goldenberg (1989); Saunders and Harris (1997); Goldenberg et al. (2001); Landsea et al. (1999b)) and is a key aspect in a number of recent studies concerning the impacts of long-term climate trends on hurricane frequency and intensity (e.g. Emanuel et al. (2005); Kerr (2005); Trenberth (2005); Webster et al. (2005); Klotzbach (2006); Sriver and Huber (2006); Elsner (2006)). As this is our first attempt to relate SST to hurricane numbers, and our first attempt to build a forecasting system on that basis, we are keen to keep our models as simple as possible. For this reason, we restrict ourselves to representing SST variability using just a single index, and we choose what we think is the simplest possible reasonable index: the average of summer SST in the MDR region of the tropical Atlantic. Based on an analysis of correlation between hurricane numbers and MDR SSTs in different months we define summer to be July to September.
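To make the three step scheme above concrete, here is a minimal simulation sketch. It is not the paper's fitted model: the SST forecast distribution, the log-linear Poisson rate linking SST to basin numbers, and the binomial thinning used for the landfall step are all placeholder assumptions with invented parameter values, included only to show how the three steps compose into a distribution of landfalling hurricane numbers.

```python
# Placeholder pipeline: every parameter value below is invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
n_sim = 100_000

# Step 1: a forecast distribution for the 2006-2010 mean MDR SST anomaly (hypothetical).
sst = rng.normal(loc=0.4, scale=0.2, size=n_sim)

# Step 2: basin hurricane numbers given SST, via a log-linear Poisson rate (hypothetical
# coefficients; the paper fits such relations to HURDAT counts and the HadISST index).
basin_rate = np.exp(1.8 + 0.5 * sst)
basin = rng.poisson(basin_rate)

# Step 3: US landfalls given basin numbers, here simple binomial thinning with an
# assumed per-storm landfall probability.
landfall = rng.binomial(basin, 0.25)

print("mean basin count:", basin.mean())
print("mean landfall count:", landfall.mean())
print("P(3 or more landfalls):", (landfall >= 3).mean())
```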
In the future it may be appropriate to extend our prediction system to consider more than one SST index, and to derive our index or indices on the basis of some kind of regression or eigenvector analysis. Given our SST index, our first goal is to test a number of statistical models that relate this index to the observed number of hurricanes. We consider both the total number of hurricanes, and the number of intense hurricanes, where we define intense as falling in categories 3-5. We fit our statistical models to data from both 1900-2005 and 1950-2005, for reasons discussed below. Having settled on a couple of models for the SST to hurricane number relationship, we then convert SST predictions derived from Laepple et al. (2007) into hurricane number predictions for 2006-2010. The rest of this article proceeds as follows. In section 2 we briefly discuss the data we use for this study. In section 3 we present the statistical models for the SST-hurricane number relationship that we will test. In section 4 we show the results from these tests, and in section 5 we summarise and discuss those results. Then in section 6 we describe how we can use these models to make hurricane number predictions, and in sections 7 and 8 we make some predictions, for all hurricanes and intense hurricanes, respectively. Finally in section 9 we conclude. ## 2 Data We use two data sets for this study: one for SST, and the other for hurricane numbers. The SST data we use is HadISST, available from [http://www.hadobs.org](http://www.hadobs.org), and described in Rayner et al. (2002). From this data we create an annual index consisting of the average SST in the region (10\\({}^{o}\\)-20\\({}^{o}\\)N, 15\\({}^{o}\\)-70\\({}^{o}\\)W) for the months July to September. This is exactly the same index as was used in our SST prediction study described in Laepple et al. (2007). The hurricane number data is derived from the standard HURDAT data base, available from [http://www.aoml.noaa.gov/hrd/hurdat](http://www.aoml.noaa.gov/hrd/hurdat), and described in Jarvinen et al. (1984). In both cases we consider the data to be exact, i.e. without observational errors. This is pragmatic, since neither data-set is delivered with any estimates of the likely error. However, given the (widely discussed) possibility that both data sets may be less accurate earlier in the time period we repeat our analyses for both the periods 1900-2005 and 1950-2005. ## 3 Statistical models for the SST to hurricane number relationship The purpose of sections 3, 4 and 5 of this article is to compare the performance of a number of simple statistical models for modelling the relationship between our MDR SST index and observed hurricane numbers. The models we consider are as follows: ### Models #### 3.1.1 Poisson distribution independent of SST The simplest model we consider models the number of hurricanes as a poisson distribution independent of SST. We include this model as a baseline that the subsequent models should be able to beat. We write this model as: \\[n\\sim\\text{Po}(\\text{rate}=\\alpha) \\tag{1}\\] where \\(n\\) is the number of hurricanes. We estimate the parameter \\(\\alpha\\) as the mean of the observed hurricane numbers. #### 3.1.2 Linear normal regression Our next model is plain-vanilla linear regression, with both the residuals and the response modelled as normal distributions. We write this model as: \\[n\\sim\\mathrm{N}(\\alpha+\\beta s,\\sigma^{2}) \\tag{2}\\] where \\(s\\) is the SST index. 
We fit the parameters using maximum likelihood, which is equivalent to least squares in this case. From the point of view of making point predictions of future hurricane numbers this is a perfectly reasonable first model, and the prediction is \\(\\alpha+\\beta s\\). From the point of view of making a probabilistic prediction of future hurricane numbers this model is slightly odd, since it models a non-negative integer (the number of hurricanes) using a real number that can be both positive and negative. Nevertheless we include this model for two reasons: (a) as another baseline for comparison, since it is simple and well understood, and there is a closed-form solution for the parameters, and (b) since this model can easily be optimised for predictive purposes (the next model). #### 3.1.3 Damped linear normal Our third model is an adaption of linear regression. Standard linear regression, when fitted using in-sample fitting techniques such as maximum likelihood or least squares, is an inherently overfitted model. In other words, these fitting techniques optimise the model's ability to predict the training data, but not to extrapolate. For large samples and strong signals this doesn't matter, but for smaller samples and weak signals this means that linear regression does not give good predictions, even when the process being predicted really does consist of a linear trend plus gaussian noise. Adapting linear regression so that it is optimised to make good predictions is, however, difficult, and is not guaranteed to be successful. Our third model is one such simple adaption, due to Jewson and Penzer (2004), in which the slope of the trend is reduced in order to reduce the overfitting. This model may or may not make better predictions than the unadapted linear regression model because of the difficulty of estimating exactly what this slope reduction should be. We write this model as: \\[n\\sim\\mathrm{N}(\\alpha+k\\beta s,\\sigma^{2}) \\tag{3}\\] We fit this model by first fitting the underlying linear regression model, and then calculating the adjustment \\(k\\) using the expressions in Jewson and Penzer (2004). As with the linear normal model this model is a reasonable model if used to make point predictions, but is a slightly odd model to use for probabilistic predictions, for the same reasons as given above. We include it in order to see the extent to which overfitting may be an issue. #### 3.1.4 Linear Poisson The two normally distributed models given above can be criticized, as probabilistic forecast models, for using a distribution that is clearly not close to being correct. To overcome this criticism we now change the distribution from normal to poisson. We write this model as: \\[n\\sim\\mathrm{Po}(\\mathrm{rate}=\\alpha+\\beta s) \\tag{4}\\] We fit this model using iteratively reweighted least squares. At a mathematical level this model might be criticized because for certain values of the parameters the poisson rate can be negative. However we have found that this is not a problem for the data and parameter ranges that are of interest to us. The standard fitting procedure we use for this model is an in-sample fitting procedure, and thus leads to inherently overfitted parameter estimates, and sub-optimal predictive properties. It would be possible in principle to fit a 'damped' version of this model, in the same way that we have fitted a damped version of linear regression, in order to attempt to overcome this problem.
However there appears to be no simple analytic way to do this, and one would have to use more complex methods such as cross-validation. That is beyond the scope of this study, but might be an interesting avenue for future work. #### 3.1.5 Exponential poisson The next model that we use is the same as the previous model, but exchanges a linear relationship for the rate of the poisson distribution for an exponential one. This model is common in the statistical literature, where it is known as either 'poisson regression' or 'a generalised linear model for the poisson distribution with a log-linear link function'. We write this model as: \\[n\\sim\\text{Po}(\\text{rate}=\\exp(\\alpha+\\beta s)) \\tag{5}\\] At a mathematical level this model avoids the potential problem with negative rates described above, since the rates are positive for all parameter values and data sets. This model has previously been used to model the relationship between SST and hurricane numbers by Elsner and Schmertmann (1993) and Solow and Moore (2000). Comparing the linear poisson and exponential poisson models, one obvious question is, is there any statistical evidence for the non-linearity included in the exponential poisson model? We will discuss this question when we present our results below. #### 3.1.6 Exponential negative binomial Our final model is an adaption of the previous model that allows the distribution around the mean to be negative binomial rather than exponential, and thus has one extra parameter that allows for the variance to be different from the mean. We write this model as: \\[n\\sim\\text{NB}(\\text{mean}=\\alpha+\\beta s,\\text{variance}=\\gamma) \\tag{6}\\] ### Model comparison How are we going to compare the results from these different models? We consider two scores, one of which assesses the ability of the models to make point predictions, and the other of which assesses the ability of the models to make probabilistic predictions. For each of these scores we consider in-sample scores, which are not really what we care about, but are included for interest, and out-of-sample scores, which are the real test. The out-of-sample scores are calculated using leave-one-out cross-validation, a.k.a. the Quenouille-Tukey jack-knife (Quenouille, 1949; Tukey, 1958). We score the point predictions using the most obvious score available: the root mean square error. We score the probabilistic predictions using (what we consider to be) the most obvious probabilistic score, which is the mean out-of-sample log-likelihood. ## 4 Comparing the performance of the statistical models We now present results from our comparisons of the various models. First we consider models for the total number of hurricanes, for the periods 1900-2005 and 1950-2005, and then we consider models for intense hurricane numbers for the same two periods. Correlations between SST and hurricane numbers for these 4 cases are shown in table 1. In each of the 4 cases we produce a standard set of diagnostics, consisting of four tables and three graphs. 
The four tables are: * the scores achieved by the models * the fitted parameter values, including standard error estimates based on the Fisher information * the percentage of times each model wins in pairwise comparisons, and corresponding p-values for point predictions * pairwise comparisons and corresponding p-values for probabilistic predictions The graphs are: * a scatter plot showing the data on which the model is based * the same scatter plot, showing the decade in which each data point occurred * the same scatter plot showing the fitted curves from the six models ### All hurricanes, 1900-2005 The first results we present are based on all hurricanes, and data from 1900 to 2005. The scatter plot shown in figure 1 shows a clear relationship between the SSTs and the hurricane numbers during this period, with warmer SSTs coinciding with more hurricanes. By eye, the relationship looks more or less linear. The linear correlation was found to be 0.56, while the rank correlation was found to be 0.51, as shown in table 1. The decade scatter plot is shown in figure 2. Note that the most recent period labelled by 'A' gives relatively high SST and basin number count. Table 2 shows the score comparisons for the six models for this data set. Considering the RMSE scores we see that all the non-trivial models comfortably beat the trivial flat-line model, as we'd guess would be the case from the scatter plot. Table 4, which shows results for pairwise comparisons in RMSE, shows that the differences between the trivial model and the non-trivial models are statistically significant (at the 5 percent level), with the exception of the exponential binomial model. The differences between the performance of the 5 non-trivial models are small. The exponential models yield the best out-of-sample RMSEs, but none of the RMSE differences between the five models are statistically significant. Given that the exponential poisson model gives the second lowest out-of-sample RMSE and beats the flat poisson model in a statistically significant way, one might choose the exponential poisson model as the best one. However, since the non-trivial models are not different in a statistically significant way, one shouldn't be surprised if that result were overturned given more data. These models all explain around 30% of the variance in the hurricane number time series. Considering the out of sample log-likelihood scores in table 2, we find that the linear and exponential poisson are better than the flat poisson model in a statistically significant way. The linear and damped linear normal are worse than the flat poisson model in a statistically significant way. The exponential negative binomial does yield the best out-of-sample log-likelihood score, but the result is not statistically significant. Based on the log-likelihood scores, one might choose the exponential poisson model once again. The parameters of all the models are reasonably well estimated: all the parameters given in table 3 are significantly different from zero (judging by the standard error estimates, and assuming normality for the sampling distributions). For instance, the slopes in the linear models are between 4.3 and 4.6, with standard error of roughly 0.7. Each extra degree of SST is therefore related to just over 4 more hurricanes, plus or minus 1.5 hurricanes. The in-sample fits to the data for the various models are displayed in figure 3. The damping parameter in the damped linear trend model is very close to one. 
This suggests that the models are not significantly overfitted, and there is no real need to use such damped models in this case. This is because the signal is strong enough that we can estimate it reasonably well.

### All hurricanes, 1950-2005

Given the doubts that one might have about the quality of both the SST and the hurricane number data prior to 1950, it makes sense to repeat the analysis given in the previous section for just the more recent data from 1950 to 2005. The corresponding results and data plots are provided in tables 6, 7, 8 and 9 and figures 4, 5 and 6. For this data set, a linear correlation of 0.62 and a rank correlation of 0.56 were found.

There are only small differences in the results relative to the analysis based on 1900-2005 data. Once again, the non-trivial models all beat the trivial constant level model. These results are statistically significant for the linear normal, damped linear normal and linear poisson models. The differences between the non-trivial models are not, once again, statistically significant. Based on these results, one may be inclined to choose the linear poisson model, as it yields the lowest RMSE of the models that beat the flat poisson model in a statistically significant way. These models all explain around 40% of the variance in the hurricane number time series (a little higher than before).

With regards to the log-likelihood scores, the linear and exponential poisson models and the exponential negative binomial model beat the flat poisson model, whereas the linear normal and damped linear normal do not. The result for the linear poisson model is statistically significant. The linear and damped linear normal are defeated by the flat poisson model in a statistically significant way. Given these results, one might choose the linear poisson model once again. However, it is possible that this conclusion would be overturned by using more data, as the differences between the linear poisson, exponential poisson and exponential negative binomial are not statistically significant.

The slope parameters are again significantly different from zero, but the slope of the linear relations is now a bit higher, giving between 5.0 and 5.4 hurricanes per degree, with a slightly larger uncertainty of around 1.0 hurricanes per degree. Given the uncertainties, the slope estimates from the two data sets are entirely consistent.

### Intense hurricanes, 1900-2005

We now consider the relationship between MDR SST and the number of intense hurricanes. Considering the scatter plot in figure 7, and comparing with the scatter plot in figure 1, we see immediately that the relationship is less clear than before. The linear correlation for this data set is 0.52 and the rank correlation is 0.54. This may not be because the underlying relationship is any less strong: simple statistical arguments suggest that the relationship will appear less strongly in the data just because there are fewer events.

We now consider the results in tables 10, 11, 12 and 13. The non-trivial models all beat the trivial constant level model statistically significantly, but are not statistically significantly different from each other in terms of the RMSE scores. The parameters of all models are significantly different from zero, and the linear models give roughly 2.9 extra hurricanes per extra degree of SST, with a standard error of around 0.4. The models explain around 25% of the variability in the number of intense hurricanes.
As far as the log-likelihood scores are concerned, the linear normal and damped linear normal are defeated by the flat poisson model in a statistically significant way. The linear and exponential poisson models and the negative binomial model defeat the flat poisson model in a statistically significant way. The differences between the linear poisson, exponential poisson and exponential negative binomial are not statistically significant.

### Intense hurricanes, 1950-2005

Finally we consider intense hurricane numbers for the more recent data, for which results are shown in tables 14, 15, 16 and 17 and figures 10 through 12. Once again, the non-trivial models defeat the flat poisson model in a statistically significant way. The differences among the non-trivial models in this case are not statistically significant. Once again, the parameter estimates for the models appear to be significantly different from zero. The linear models give a slightly higher number of hurricanes per degree, roughly around 3.4 extra hurricanes per extra degree, with standard error of around 0.7.

With regards to the log-likelihood scores, the linear normal and damped linear normal are defeated by the flat poisson model in a statistically significant way. The linear poisson and exponential negative binomial defeat the flat poisson model in a statistically significant way, whereas the exponential poisson does not. The differences between the linear poisson, exponential poisson and exponential negative binomial are not statistically significant.

## 5 Summary of statistical model results

In sections 3 and 4 we have considered how to model the relationship between MDR SST and the number of hurricanes in the Atlantic basin. We considered both the total number of hurricanes and the number of intense hurricanes. We now summarise the results of this investigation.

W.r.t. the total number of hurricanes our findings are:
* that there is a clear and statistically significant relationship, such that higher SSTs correspond to higher numbers of hurricanes, with one degree of SST relating to between 4.0 and 5.5 extra hurricanes
* that using only more recent data the relationship is slightly stronger, but is less accurately estimated
* that statistical models of this relationship give better point predictions of hurricane numbers than a simple model that ignores this relationship
* that, w.r.t. point predictions, all the non-trivial models defeat the flat poisson model in a statistically significant way, with the exception of the exponential poisson and exponential negative binomial models. The differences between the non-trivial models are, however, not statistically significant. If one _had_ to choose among the models based on the RMSE results, one might choose the linear poisson model, as it yields the lowest RMSE of the models that beat the trivial model in a statistically significant way.
* that the non-trivial models explain between 29% and 44% of the variability in the number of hurricanes
* that, w.r.t. probabilistic predictions, the linear and damped linear normal models are defeated by the flat poisson model in a statistically significant way, while the linear poisson model defeats the flat-line model in a statistically significant way.

That there is a non-trivial relationship between Atlantic MDR SSTs and basin-wide hurricane activity is in general agreement with statistical analyses performed in both Klotzbach (2006) and Landsea et al. (1999b).
W.r.t. the number of intense hurricanes our findings are the same, with the exception of:
* each extra degree of SST gives between 3.0 and 3.5 extra intense hurricanes
* the non-trivial models explain slightly less of the variability in the numbers of hurricanes: only 25%
* w.r.t. the point predictions, all the non-trivial models defeat the flat-line model in a statistically significant way
* w.r.t. the probabilistic predictions, the linear poisson and negative binomial models defeat the flat poisson model in a statistically significant way for both data sets.

Our results for intense hurricanes are broadly consistent with an analysis done by Hoyos et al. (2006), which suggests that the increasing number of category 4-5 hurricanes is directly linked to the trend in tropical SSTs.

Given all of this, which models would we recommend for modelling the SST to hurricane relationship? Firstly, it probably makes sense to use only the recent data, since it is possible to estimate the regression parameters reasonably well using this data, and it avoids doubts about data quality. Assuming we need probabilistic forecasts of hurricane activity, we have seen that the normal distribution models don't work well. This leaves 3 reasonable models: linear poisson, exponential poisson, and negative binomial. We could eliminate the negative binomial model on the basis that it is more complex than the other two, but performs no better. This then leaves us with the linear poisson and exponential poisson models. On the basis of our results it is not possible to distinguish between these models, and this is perhaps the most important result of this paper: even though previous authors have, by default, used an exponential poisson model for the SST to hurricane number relationship, the linear poisson model works just as well, and in some forecast applications (especially those that involve applying the modelled SST-hurricane relationships for extreme values of the SST) is likely to give quite different results.

We end this section by mentioning that a number of studies suggest that the strength of the relationship between SST and hurricane frequency is dependent on the region of the north Atlantic being considered (Shapiro and Goldenberg, 1989; Raper, 1993; Goldenberg et al., 2001). Understanding the regional dependence of the statistical relationship between SST and hurricane numbers is therefore an interesting avenue for future work.

## 6 Making SST based predictions of hurricane numbers

We now have all the pieces we need to make SST-based predictions of hurricane numbers. Firstly, in Laepple et al. (2007), we have derived simple statistical methods for predicting SST. We will take three SST predictions from that article: the flat-line model based on 8 years of data (FL), the linear trend model based on 24 years of data, and a damped linear trend model that is the mean of these two. Secondly, in sections 3 and 4 above, we have analysed the relationship between SST and the number of hurricanes in the Atlantic basin. We were able to find two relatively simple models that were significantly better than the trivial model of no relationship in out-of-sample tests. The first of these models represents the mean number of basin hurricanes as a linear function of SST, and the distribution as a poisson distribution. The second model represents the mean number of basin hurricanes as an exponential function of SST, and the distribution once again as poisson.
A more complex model with more parameters (that represents the distribution as negative binomial) was not significantly better, so we ignore it. Thirdly, using historical hurricane data for the period 1950 to 2005, we can estimate the probability that individual hurricanes make landfall in the US. For cat 1-5 hurricanes the estimate of this probability is 0.254, while for cat 3-5 hurricanes the estimate is 0.240. Finally, in Jewson (2007), we have derived simple analytic relationships that allow us to put this all together and predict the mean, variance, and standard error of the number of landfalling hurricanes as a function of the mean, variance and standard error of an SST forecast. In particular, we use the equations given in section 9 of that paper.

Combining our three SST models with two ways of converting SST to basin hurricane numbers gives a total of six different forecast methods. We present results for all six methods, since the SST forecasts capture a range of points of view about the possible future behaviour of SST, and the two SST-to-basin relationships are both sensible models that can't be distinguished given the observational data, but give clearly different final answers.

## 7 Predictions for category 1-5 landfalling hurricanes

We now present our various predictions for SST and the number of category 1-5 hurricanes at US landfall. First, in figure 13, we show the three SST predictions. Second, in tables 18 to 22, we show details of the six predictions of _basin_ hurricane numbers that we derive from these SST predictions. Since these are just an intermediate step in the process of predicting landfalling hurricane numbers, we don't discuss these in detail. We note briefly that the predictions for individual years range from 8.0 to 10.5 hurricanes per year, and the 5 year averages range from 8.9 to 9.9 hurricanes per year. Third, in tables 23 to 27, we show details of the six predictions of _landfalling_ hurricane numbers that we derive from these predictions of basin hurricane numbers by multiplying by the estimated probabilities that a hurricane will make landfall. These predictions are also illustrated in figures 14 to 19.

Model 1 gives a flat prediction of future hurricane numbers by year, since it is based on a flat prediction of SST and a linear conversion model. Model 4 gives a very gradual increase in the prediction of the mean number of hurricanes with lead time because of the increasing uncertainty of the SST prediction with lead time, in combination with the non-linearity of the conversion model. Models 3 and 6 give rapidly increasing predictions of hurricane numbers since they are based on rapidly increasing SST predictions. Models 2 and 5 lie somewhere in between these two extremes. Models 4, 5 and 6, based on the exponential poisson model for the relation between SST and hurricane numbers, all give higher predictions of future hurricane numbers than models 1, 2 and 3, which are based on the linear poisson relation.
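To make the chain from SST forecast to landfall forecast concrete, the sketch below propagates an SST prediction through the linear poisson SST-to-basin model and the constant landfall probability by simple Monte Carlo. It is only an illustrative cross-check of the closed-form expressions of Jewson (2007), not a reproduction of them: the coefficient values, the SST forecast mean and standard error, and the function name are placeholder choices, and only the landfall probability of 0.254 comes directly from the text above.

```python
import numpy as np

rng = np.random.default_rng(2006)

def landfall_prediction(sst_mean, sst_se, alpha, beta, p_land, nsim=500_000):
    """Monte Carlo forecast of landfalling hurricane numbers.

    SST forecast uncertainty is treated as normal, basin numbers as poisson
    with a linear mean in SST, and landfall as binomial thinning of the
    basin hurricanes with probability p_land.
    """
    sst = rng.normal(sst_mean, sst_se, size=nsim)
    lam = np.clip(alpha + beta * sst, 0.0, None)   # guard against negative rates
    basin = rng.poisson(lam)
    landfall = rng.binomial(basin, p_land)
    return {"mean": landfall.mean(),
            "variance": landfall.var(),
            "var/mean": landfall.var() / landfall.mean()}

# placeholder inputs: an SST *anomaly* forecast and illustrative coefficients
print(landfall_prediction(sst_mean=0.2, sst_se=0.15,
                          alpha=6.2, beta=5.0, p_land=0.254))
```

Because the SST forecast uncertainty adds variance on top of the poisson variance, the variance-to-mean ratio of the simulated landfall numbers comes out above one, which is the qualitative behaviour reported in the tables above.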
## References

* Binter et al. (2006) R Binter, S Jewson, S Khare, A O'Shay, and J Penzer. Year ahead prediction of US landfalling hurricane numbers: the optimal combination of multiple levels of activity since 1900. _arXiv:physics/0611070_, 2006. RMS Internal Report E01.
* Demaria (1996) M Demaria. The effect of vertical shear on tropical cyclone intensity change. _Journal of Atmospheric Sciences_, 53:2076-2088, 1996.
* Elsner (2006) J B Elsner. Evidence in support of the climate change - Atlantic hurricane hypothesis. _Geophysical Research Letters_, 33, 2006.
* Elsner and Schmertmann (1993) J Elsner and C Schmertmann. Improving extended-range seasonal predictions of intense Atlantic hurricane activity. _Weather and Forecasting_, 3:345-351, 1993.
* Emanuel et al. (2005) K Emanuel, S Ravela, E Vivant, and C Risi. A combined statistical-deterministic approach of hurricane risk assessment. Unpublished manuscript, 2005.
* Goldenberg et al. (2001) S Goldenberg, C Landsea, A Mestas-Nunez, and W Gray. The recent increase in Atlantic hurricane activity: causes and implications. _Science_, 293:474-479, 2001.
* Gray (1968) W M Gray. Global view of the origin of tropical disturbances and storms. _Monthly Weather Review_, 95:55-73, 1968.
* Hoyos et al. (2006) C D Hoyos, P A Agudelo, P J Webster, and J A Curry. Deconvolution of the factors contributing to the increase in global hurricane intensity. _Science Express_, 2006.
* Jarvinen et al. (1984) B Jarvinen, C Neumann, and M Davis. A tropical cyclone data tape for the North Atlantic Basin, 1886-1983: Contents, limitations, and uses. Technical report, NOAA Technical Memorandum NWS NHC 22, 1984.
* Jewson (2007) S Jewson. Predicting Hurricane Numbers from Sea Surface Temperature: closed form expressions for the mean, variance and standard error of the number of hurricanes. _arXiv:physics/0701167_, 2007. RMS Internal Report E06.
* Jewson and Penzer (2004) S Jewson and J Penzer. Optimal year ahead forecasting of temperature in the presence of a linear trend, and the pricing of weather derivatives. _[http://ssrn.com/abstract=563943_](http://ssrn.com/abstract=563943_), 2004.
* Kerr (2005) R A Kerr. Atlantic climate pacemaker for millenia past, decades hence? _Science_, 309:41-43, 2005.
* Klotzbach (2006) P J Klotzbach. Trends in global tropical cyclone activity over the past twenty years. _Geophysical Research Letters_, 33:1-4, 2006.
* Laepple et al. (2007) T Laepple, S Jewson, J Meagher, A O'Shay, and J Penzer. Five-year ahead prediction of Sea Surface Temperature in the Tropical Atlantic: a comparison of simple statistical methods. _arXiv:physics/0701162_, 2007.
* Landsea et al. (1999a) C W Landsea, G D Bell, and W M Gray. The extremely active 1995 Atlantic hurricane season: Environmental conditions and verification of seasonal forecasts. _Monthly Weather Review_, 126:1174-1193, 1999a.
* Landsea et al. (1999b) C W Landsea, R A Pielke Jr, A M Mestas-Nunez, and J A Knaff. Atlantic basin hurricanes: Indices of climate changes. _Climatic Change_, 42:89-129, 1999b.
* Meagher and Jewson (2006) J Meagher and S Jewson. Year ahead prediction of hurricane season SST in the tropical Atlantic. _arXiv:physics/0606185_, 2006.
* Peixoto and Oort (1992) J P Peixoto and A H Oort. _Physics of Climate_. American Institute of Physics, New York, 1992.
* Quenouille (1949) M Quenouille. Approximate tests of correlation in time series. _Journal of the Royal Statistical Society, Series B_, 11:18-84, 1949.
* Raper (1993) S Raper. _Observational data on the relationships between climate change and the frequency and magnitude of severe tropical storms: Climate and sea level change: Observations, projections and implications_. Cambridge University Press, editors: R A Warrick, E M Barrow and T M L Wigley, 1993.
* Rayner et al. (2002) N Rayner, D Parker, E Horton, C Folland, L Alexander, D Rowell, E Kent, and A Kaplan. Global analyses of SST, sea ice and night marine air temperature since the late nineteenth century. _Journal of Geophysical Research_, 108:4407, 2002.
* Saunders and Harris (1997) M A Saunders and A R Harris.
Statistical evidence links exceptional 1995 Atlantic hurricane season to record sea warming. _Geophysical Research Letters_, 24:1255-1258, 1997.
* Shapiro (1982) L J Shapiro. Hurricane climatic fluctuations. Part II: Relation to large-scale circulation. _Monthly Weather Review_, 110:1007-1013, 1982.
* Shapiro and Goldenberg (1989) L J Shapiro and S B Goldenberg. Atlantic sea surface temperatures and tropical cyclone formation. _Journal of Climate_, 11:2598-2614, 1989.
* Solow and Moore (2000) A R Solow and L Moore. Testing for a trend in a partially incomplete hurricane record. _Journal of Climate_, 13:3696-3699, 2000.
* Sriver and Huber (2006) R Sriver and M Huber. Low frequency variability in globally integrated tropical cyclone power dissipation. _Geophysical Research Letters_, 33, 2006.
* Trenberth (2005) K Trenberth. Uncertainty in hurricanes and global warming. _Science_, 308:1753-1754, 2005.
* Tukey (1958) J W Tukey. Bias and confidence in not quite large samples. _Annals of Mathematical Statistics_, 29:614, 1958.
* Webster et al. (2005) P J Webster, G J Holland, and H R Chang. Changes in tropical cyclone number, duration, and intensity in a warming environment. _Science_, 309:1844-1846, 2005.

Table 1: Linear and Rank Correlations

| | Linear Correlation | Rank Correlation |
| --- | --- | --- |
| 1900 - 2005 Basin vs SST | 0.56 | 0.51 |
| 1950 - 2005 Basin vs SST | 0.62 | 0.56 |
| 1900 - 2005 Intense Basin vs SST | 0.52 | 0.54 |
| 1950 - 2005 Intense Basin vs SST | 0.53 | 0.56 |

Table 2: RMSE comparison 1900 - 2005 Basin vs SST

| | model name | RMSE (in) | RMSE (out) | 100-100*RMSE/RMSEconst | LL (in) | LL (out) |
| --- | --- | --- | --- | --- | --- | --- |
| model 1 | Flat Poisson | 2.648 | 2.674 | 0 | -2.372 | -2.385 |
| model 2 | Linear Normal | 2.185 | 2.238 | 29.94 | -2.61 | -2.612 |
| model 3 | Damped Linear Normal | 2.185 | 2.239 | 29.867 | -2.61 | -2.612 |
| model 4 | Linear Poisson | 2.187 | 2.233 | 30.24 | -2.169 | -2.189 |
| model 5 | Exponential Poisson | 2.153 | 2.204 | 32.019 | -2.163 | -2.182 |
| model 6 | Exponential Neg. Bin. | 2.153 | 2.168 | 34.249 | -2.163 | -2.171 |

Table 6: RMSE comparison 1950 - 2005 Basin vs SST

| | model name | RMSE (in) | RMSE (out) | 100-100*RMSE/RMSEconst | LL (in) | LL (out) |
| --- | --- | --- | --- | --- | --- | --- |
| model 1 | Flat Poisson | 2.607 | 2.654 | 0 | -2.332 | -2.352 |
| model 2 | Linear Normal | 2.04 | 2.13 | 35.609 | -2.507 | -2.516 |
| model 3 | Damped Linear Normal | 2.04 | 2.133 | 35.391 | -2.508 | -2.516 |
| model 4 | Linear Poisson | 2.042 | 2.122 | 36.063 | -2.137 | -2.161 |
| model 5 | Exponential Poisson | 1.981 | 2.067 | 39.335 | -2.126 | -2.149 |
| model 6 | Exponential Neg. Bin. | 1.981 | 2.002 | 43.117 | -2.126 | -2.136 |

Table 4: Winning count for particular model, 1900 - 2005 Basin vs SST.
Table 5: Winning count (LL) for particular model, 1900 - 2005 Basin vs SST.
Table 14: RMSE comparison, 1950 - 2005 Intense Basin vs SST.
Table 16: Winning count for particular model, 1950 - 2005 Intense Basin vs SST.
Table 17: Winning count (LL) for particular model, 1950 - 2005 Intense Basin vs SST.
Table 18: Predictions of mean basin hurricane numbers.
Table 20: Breakdown of the variance in table 19 into variance driven by SST prediction uncertainty and uncertainty in the SST-to-hurricane numbers regression model.
Table 21: Ratio of the variance to the mean.
Table 22: Standard errors on the predicted means.
Table 23: Predictions of mean landfalling hurricane numbers.
Table 25: Breakdown of the variance in table 24 into variance driven by SST prediction uncertainty and uncertainty in the SST-to-hurricane numbers regression model.
Table 26: Ratio of the variance to the mean.
Table 27: Standard errors on the predicted means.
Table 28: Predictions of mean basin intense hurricane numbers.
Table 29: Prediction of the variance of basin intense hurricane numbers.
Table 30: Breakdown of the variance in table 29 into variance driven by SST prediction uncertainty and uncertainty in the SST-to-hurricane numbers regression model.
Table 31: Ratio of the variance to the mean.
Table 34: Prediction of the variance of landfalling intense hurricane numbers.
Table 35: Breakdown of the variance in table 34 into variance driven by SST prediction uncertainty and uncertainty in the SST-to-hurricane numbers regression model.
Table 36: Ratio of the variance to the mean.
Table 37: Standard errors on the predicted means.

Figure 1: 1900 - 2005 Basin vs SST.
Figure 2: 1900 - 2005 Basin vs SST.
Figure 3: Fitted lines for all models, 1900 - 2005 Basin vs SST.
Figure 4: 1950 - 2005 Basin vs SST.
Figure 5: 1950 - 2005 Basin vs SST.
Figure 6: Fitted lines for all models, 1950 - 2005 Basin vs SST.
Figure 7: 1900 - 2005 Intense Basin vs SST.
Figure 8: 1900 - 2005 Intense Basin vs SST.
Figure 9: Fitted lines for all models, 1900 - 2005 Intense Basin vs SST.
Figure 10: 1950 - 2005 Intense Basin vs SST.
Figure 11: 1950 - 2005 Intense Basin vs SST.
Figure 12: Fitted lines for all models, 1950 - 2005 Intense Basin vs SST.
Figure 13: The three SST predictions we use as input to our hurricane prediction method, along with observed SSTs for the period 1976 to 2005.
Figure 14: Forecasts for the number of basin hurricanes for the years 2006 to 2011 for the 6 models described in the text.
Figure 15: As in figure 14, but also showing observed basin hurricane numbers for the period 1950 to 2005.
Figure 16: Forecasts for the number of _landfalling_ hurricanes for the years 2006 to 2011 for the 6 models described in the text.
Figure 17: As figure 16, but also showing observed landfalling hurricane numbers for the period 1950 to 2005.
Figure 18: As figure 17, but only showing observed data since 1980, and the forecasts based on a linear conversion from SST to basin hurricane numbers.
Figure 19: As figure 17, but only showing observed data since 1980, and the forecasts based on an _exponential_ conversion from SST to basin hurricane numbers.
We are building a hurricane number prediction scheme based on first predicting main development region sea surface temperature (SST), then predicting the number of hurricanes in the Atlantic basin given the SST prediction, and finally predicting the number of US landfalling hurricanes based on the prediction of the number of basin hurricanes. We have described a number of SST prediction methods in previous work. We now investigate the empirical relationship between SST and basin hurricane numbers, and put this together with the SST predictions to make predictions of both basin and landfalling hurricane numbers.
Predicting landfalling hurricane numbers from sea surface temperature: theoretical comparisons of direct and indirect approaches

Stephen Jewson (RMS), Thomas Laepple (AWI), Kechi Nzerem (RMS), Jeremy Penzer (LSE)

_Correspondence email_: [email protected]

## 1 Introduction

There is a great need to predict the distribution of the number of hurricanes that might make landfall in the US in the next few years. Such predictions are of use to all the entities that are affected by hurricanes, ranging from local and national governments to insurance and reinsurance companies. How, then, should we make such predictions? There is no obvious best method. For instance, one might make a prediction based on time-series analysis of the historical series of landfalling hurricane numbers; one might predict basin hurricane numbers using time-series analysis and convert that prediction to a prediction of landfalling hurricane numbers; one might try to predict SSTs first and convert that prediction to a prediction of landfalling numbers; or one might use output from a numerical model of the climate system. All of these are valid approaches, and each has its own pros and cons.

In this article, we consider the idea of first predicting SST and then predicting hurricane numbers given a prediction of SST. There are two obvious flavours of this. The first is what we will call the 'direct' (or 'one-step') method, in which one regresses historical numbers of landfalling hurricanes directly onto historical SSTs, and uses the fitted regression relation to convert a prediction of future SSTs into a prediction of future hurricane numbers. The second is what we will call the 'indirect' (or 'two-step') method, in which one regresses _basin_ hurricane numbers onto historical SSTs, predicts basin numbers, and then predicts landfalling numbers from basin numbers. In the simplest version of the indirect method one might predict landfalling numbers as a constant proportion of the number of basin hurricanes, where this proportion is estimated using historical data.

Consideration of the direct and indirect SST-based methods motivates the question: at a theoretical level, which of these two methods is likely to work best? This is a statistical question about the properties of regression and proportion models. We consider this abstract question in the context of two simple models. The first model is the more realistic of the two. It uses observed SSTs, models the mean number of hurricanes in the basin as a linear function of SST, and models each basin hurricane as having a constant probability of making landfall. We run simulations that allow us to directly compare the performance of the direct and indirect methods in the context of this model. The second model is less realistic, but allows us to derive a general analytical result for the relative performance of the direct and indirect methods. In this model we represent SST, basin and landfalling hurricane numbers as being normally distributed and linearly related.

We don't think the answer as to which of the direct or indirect methods is better is _a priori_ obvious. On the one hand, the direct method has fewer parameters to estimate, which might work in its favour. On the other hand, the indirect method allows us to use more data by incorporating the basin hurricane numbers into the analysis.

Section 2 describes the methods used in the simulation study, and section 3 describes the results from that study.
In section 4 we derive general analytic results for the linear-normal model. Finally, in section 5, we discuss our results.

## 2 Simulation-based analysis: methods

For our simulation study, we compare the direct and indirect methods described above as follows.

### Generating artificial basin hurricane numbers

First, we simulate 10,000 sets of artificial basin hurricane numbers for the period 1950-2005, giving a total of 10,000 x 56 = 560,000 years of simulated hurricane numbers. These numbers are created by sampling from poisson distributions with mean given by:

\\[\\lambda=\\alpha+\\beta S \\tag{1}\\]

where \\(S\\) is the observed MDR SST for each year in the period 1950-2005. The values of \\(\\alpha\\) and \\(\\beta\\) are derived from model 4 in table 7 in Binter et al. (2007), in which observed basin hurricane numbers were regressed onto observed SSTs using data for 1950-2005. They have values of 6.25 and 5, respectively.

The basin hurricane numbers we create by this method should contain roughly the same long-term SST driven variability as the observed basin hurricane numbers, but different numbers of hurricanes in the individual years. We say 'roughly' the same, because (a) the linear model we are using to relate SST to hurricane numbers is undoubtedly not exactly correct, although given the analysis in Binter et al. (2007) it certainly seems to be reasonable, and (b) the parameters of the linear model are only estimated.

### Generating artificial landfalling hurricane numbers

Given the 10,000 sets of simulated basin hurricane numbers described above, we then create 10,000 sets of simulated _landfalling_ hurricane numbers by applying the rule that each basin hurricane has a probability of 0.254 of making landfall (this value is taken from observed data for 1950-2005). The landfalling hurricane numbers we create by this method should contain roughly the same long-term SST driven variability as the observed landfalling series, but different numbers of hurricanes in the individual years. They should also contain roughly the right dependency structure between the number of hurricanes in the basin and the number at landfall (e.g. that years with more hurricanes in the basin will tend to have more hurricanes at landfall).
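A minimal sketch of this data-generation step is given below. The coefficients 6.25 and 5, the landfall probability of 0.254 and the 10,000 x 56 dimensions come from the text above; the observed MDR SST series is not reproduced here, so a placeholder anomaly series stands in for it, and the variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(42)

ALPHA, BETA = 6.25, 5.0      # linear poisson coefficients quoted in the text
P_LANDFALL = 0.254           # landfall probability estimated from 1950-2005 data
N_SETS, N_YEARS = 10_000, 56

# placeholder standing in for the observed MDR SST series (treated here as
# anomalies, so that the rate stays near the observed mean of ~6 per year)
sst = rng.normal(0.0, 0.25, size=N_YEARS)

lam = np.clip(ALPHA + BETA * sst, 0.0, None)       # poisson rate for each year
basin = rng.poisson(lam, size=(N_SETS, N_YEARS))   # 10,000 x 56 basin counts
landfall = rng.binomial(basin, P_LANDFALL)         # thin each hurricane to landfall

print(basin.mean(), landfall.mean())               # roughly 6.25 and 6.25 * 0.254
```

Because each landfalling count is obtained by thinning the corresponding basin count, the simulated data inherit the required dependency between basin and landfall numbers.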
### Making predictions

We now have 10,000 sets of 56 years of artificial data for basin and landfalling hurricanes. This data contains a realistic representation of the SST-driven variability of hurricane numbers, and of the dependency structure between the numbers of hurricanes in the basin and at landfall, but different actual numbers of hurricanes from the observations. We can consider this data as 10,000 realisations of what might have occurred over the last 56 years, had the SSTs been the same, but the evolution of the atmosphere different. This data is a test-bed that can help us understand aspects of the predictability of landfalling hurricanes given SST.

The observed and simulated data are illustrated in figures 1 to 5. Figure 1 shows the observed basin data (solid black line) and the observed landfall data (solid grey line). The dashed black line shows the variability in the observed basin data that is explained using SSTs. The two broken grey lines show the variability in the observed landfall data that is explained using SSTs, using the direct and the indirect method respectively. Figures 2 to 5 show 4 realisations of the simulated data. In each figure the dotted and dashed lines are the same as in figure 1, and show the SST driven signal. The solid black line then shows the simulated basin hurricane numbers and the solid grey line shows the simulated landfalling hurricane numbers.

We test predictions of landfalling hurricane numbers using the direct method as follows:
* we loop through the 10,000 sets of simulated landfalling hurricanes
* for each set, we miss out one of the 56 years
* using the other 55 years in that set, we build a linear regression model between SST and landfalling hurricane numbers
* we then use that fitted model to predict the number of landfalling hurricanes in the missed year, given the SST for that year
* we calculate the error for that prediction
* we then repeat for all 10,000 sets (missing out a different year each time)
* this gives us 10,000 prediction errors, from which we calculate the RMSE

We test the indirect method in almost exactly the same way, except that this time we also fit a model for predicting landfalling numbers from basin numbers.

### Comparing the predictions

We compare the direct and indirect predictions in two ways:
* first, we compare the two RMSE values
* second, we count what proportion of the time the errors from the direct method are smaller than the errors from the indirect method

We also repeat the entire calculation a number of times as a rough way to evaluate the convergence of our results.

## 3 Simulation-based analysis: results

We now present the results from our simulation study. The RMSE for the direct method is 1.61 hurricanes, while the RMSE for the indirect method is 1.58 hurricanes. This difference is small, but the sign of it does appear to be real: when we repeat the whole experiment a number of times, we always find that the indirect method beats the direct method. The indirect method beats the direct method 51.8% of the time.

Given the design of the experiment, these results tell us how the two methods perform, on average over the whole range of SST values. Next year's SST, however, is likely to be warm relative to historical SSTs. We therefore also consider the more specific question of how the methods are likely to perform for given warm SSTs. Based on Laepple et al. (2007), we fit a linear trend to the historical SSTs, and extrapolate this trend out to 2011. This then gives SST values that are warmer than anything experienced in history (27.987°C, to be precise). We then repeat the whole analysis for predictions for this warm SST only. The results are more or less as before: the indirect method still wins, only this time by a slightly larger margin. The ratio of RMSE scores (direct divided by indirect) increases from 1.02 to 1.04.

## 4 The Linear normal case

We now study a slightly less realistic model, in which we take SSTs and hurricane numbers in the basin and at landfall to be normally distributed. These changes allow us to derive a very general result for the relative performance of the direct and indirect methods.

### The setup

Here's how we set the problem up in this case. Consider two simple regression models for centred random variables \\(Y\\) and \\(Z\\),

\\[Y = X\\beta+\\varepsilon,\\quad\\varepsilon\\sim(0,\\sigma_{\\varepsilon}^{2}I_{n}),\\]
\\[Z = Y\\gamma+\\eta,\\quad\\eta\\sim(0,\\sigma_{\\eta}^{2}I_{n}),\\]

where \\(\\varepsilon\\) and \\(\\eta\\) are independent.
Here \\(X\\), \\(Y\\), \\(Z\\), \\(\\varepsilon\\) and \\(\\eta\\) are \\(n\\times 1\\) column vectors, \\(\\beta\\) and \\(\\gamma\\) are scalars, and \\(I_{n}\\) is the \\(n\\times n\\) identity matrix. We will assume \\(X\\) is fixed. In relation to the hurricane problem, \\(X\\) is the time-series of \\(n\\) years of SST values, \\(Y\\) is the time-series of \\(n\\) years of basin hurricane numbers and \\(Z\\) is the time-series of \\(n\\) years of landfalling hurricane numbers. Note that in our notation \\(X\\) is the _whole time-series_ of SST, written as a vector, and similarly for \\(Y\\) and \\(Z\\). Using vector notation avoids the messy use of subscripts. Two immediate comments about this setup: (a) we are assuming that basin and landfalling hurricane numbers are normally distributed. This doesn't really make sense, since they are counts that can only take integer values: using a poisson distribution would make more sense. We are starting off by addressing this question for normally distributed data because it's more tractable that way; (b) we are assuming a linear relationship (with offset and slope) between basin hurricanes and landfalling hurricanes. This is also a little odd, since there is no reason to have an offset in this relationship: if there aren't any basin hurricanes, there can't be any landfalling hurricanes. The most obvious model would be that each hurricane has a constant proportion of making landfall. Again, we are starting off by addressing this question in a linear context because it's more tractable that way. We want to know about the accuracy of forecasts that we might make with the direct and indirect methods. This translates mathematically into saying that we want to estimate \\[E(z_{n+1}) = E(y_{n+1})\\gamma \\tag{2}\\] \\[= x_{n+1}\\beta\\gamma\\] (3) \\[= x_{n+1}\\delta \\tag{4}\\] where \\(\\delta=\\beta\\gamma\\). The problem then boils down to measuring the quality of the estimator of \\(\\delta\\) since, if \\(\\hat{z}_{n+1}=x_{n+1}\\hat{\\delta}\\) is an estimator of \\(E(z_{n+1})\\) then \\[\\mbox{MSE}(\\hat{z}_{n+1}) = \\mbox{MSE}(x_{n+1}\\hat{\\delta}) \\tag{5}\\] \\[= E[(x_{n+1}\\hat{\\delta}-x_{n+1}\\delta)(x_{n+1}\\hat{\\delta}-x_{n+1 }\\delta)^{\\prime}]\\] (6) \\[= x_{n+1}\\mbox{MSE}(\\hat{\\delta})x_{n+1}^{\\prime}. \\tag{7}\\] So we now consider the direct and indirect methods for estimating \\(\\delta\\). ### Direct estimator of \\(\\delta\\) We start by considering the direct, or one-step, method. This means we consider the relationship between \\(X\\) and \\(Z\\), ignoring \\(Y\\). The usual OLS estimator for \\(\\delta\\) is \\[\\delta^{\\dagger} = (X^{\\prime}X)^{-1}X^{\\prime}Z \\tag{8}\\] \\[= (X^{\\prime}X)^{-1}X^{\\prime}(X\\beta\\gamma+\\varepsilon\\gamma+\\eta)\\] (9) \\[= \\delta+(X^{\\prime}X)^{-1}X^{\\prime}(\\varepsilon\\gamma+\\eta). \\tag{10}\\] What are the statistical properties of this estimator? In terms of mean: \\[E(\\delta^{\\dagger})=\\delta \\tag{11}\\] i.e. the estimator is unbiased. In terms of variance \\[\\mbox{Var}(\\delta^{\\dagger}) = (X^{\\prime}X)^{-1}X^{\\prime}\\mbox{Var}(\\varepsilon\\gamma+\\eta)X( X^{\\prime}X)^{-1}. \\tag{12}\\] We know that \\(\\mbox{Var}(\\varepsilon\\gamma+\\eta)=\\sigma_{\\varepsilon}^{2}I_{n}\\gamma^{2}+ \\sigma_{\\eta}^{2}I_{n}\\), so \\[\\mbox{Var}(\\delta^{\\dagger})=(X^{\\prime}X)^{-1}(\\sigma_{\\varepsilon}^{2}\\gamma ^{2}+\\sigma_{\\eta}^{2}). \\tag{13}\\] By equation 7 this then gives us an expression for the performance of the direct method. 
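As a quick numerical sanity check of this result (not part of the original derivation), the sketch below simulates the linear-normal model repeatedly, computes the one-step OLS estimate of \\(\\delta\\) on each replication, and compares its empirical variance with \\((X^{\\prime}X)^{-1}(\\sigma_{\\varepsilon}^{2}\\gamma^{2}+\\sigma_{\\eta}^{2})\\) from equation 13. All parameter values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n, beta, gamma = 56, 5.0, 0.25
sigma_eps, sigma_eta = 2.0, 1.0

x = rng.normal(0.0, 0.3, size=n)   # fixed "SST" covariate, held constant below
x -= x.mean()                      # centred, matching the no-intercept setup
xtx_inv = 1.0 / np.dot(x, x)       # (X'X)^{-1} for a single regressor

def direct_estimate():
    """One replication of the one-step OLS estimator of delta = beta * gamma."""
    y = beta * x + rng.normal(0.0, sigma_eps, size=n)    # basin numbers
    z = gamma * y + rng.normal(0.0, sigma_eta, size=n)   # landfalling numbers
    return xtx_inv * np.dot(x, z)                        # OLS slope of z on x

draws = np.array([direct_estimate() for _ in range(50_000)])
theory = xtx_inv * (sigma_eps**2 * gamma**2 + sigma_eta**2)   # equation 13
print(f"empirical variance: {draws.var():.4f}, theoretical variance: {theory:.4f}")
```

With 50,000 replications the two numbers typically agree to within a few percent, which is consistent with equation 13.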
### Indirect estimator of \\(\\delta\\) We now consider the indirect, or two-step, method. This means considering the relationships between \\(X\\) and \\(Y\\), and \\(Y\\) and \\(Z\\). First, we consider estimating each regression separately. The OLS estimators for the slopes in each case are: \\[\\hat{\\beta} = (X^{\\prime}X)^{-1}X^{\\prime}Y \\tag{14}\\] \\[= \\beta+(X^{\\prime}X)^{-1}X^{\\prime}\\varepsilon\\] (15) \\[\\hat{\\gamma} = (Y^{\\prime}Y)^{-1}Y^{\\prime}Z\\] (16) \\[= \\gamma+(Y^{\\prime}Y)^{-1}Y^{\\prime}\\eta \\tag{17}\\]We now put the two models together, to create a single regression model based on the separate estimates for the two steps. We call the estimate of the slope of this combined model \\(\\hat{\\delta}\\). Combining the expressions above, we have that: \\[\\hat{\\delta} = \\hat{\\beta}\\hat{\\gamma} \\tag{18}\\] \\[= \\beta\\gamma+(X^{\\prime}X)^{-1}X^{\\prime}\\varepsilon\\gamma+\\beta(Y ^{\\prime}Y)^{-1}Y^{\\prime}\\eta+(X^{\\prime}X)^{-1}X^{\\prime}\\varepsilon(Y^{ \\prime}Y)^{-1}Y^{\\prime}\\eta \\tag{19}\\] What are the statistical properties of this estimator \\(\\hat{\\delta}\\)? It is clear (by independence of \\(\\varepsilon\\) and \\(\\eta\\)) that \\(\\hat{\\delta}\\) is unbiased; \\[E(\\hat{\\delta}) = \\beta\\gamma \\tag{20}\\] \\[= \\delta \\tag{21}\\] The variance is more awkward. Note that if \\(\\varepsilon\\) were known then \\(\\hat{\\beta}\\) and \\(Y\\) would be fixed constants. Thus, \\[E(\\hat{\\delta}|\\varepsilon) = E(\\hat{\\beta}\\hat{\\gamma}|\\varepsilon) \\tag{22}\\] \\[= \\hat{\\beta}E(\\hat{\\gamma}|\\varepsilon)\\] (23) \\[= \\hat{\\beta}\\gamma,\\] (24) \\[\\mbox{Var}(\\hat{\\delta}|\\varepsilon) = \\mbox{Var}(\\hat{\\beta}\\hat{\\gamma}|\\varepsilon)\\] (25) \\[= \\hat{\\beta}\\mbox{Var}(\\hat{\\gamma}|\\varepsilon)\\hat{\\beta}^{\\prime}\\] (26) \\[= \\hat{\\beta}(Y^{\\prime}Y)^{-1}\\hat{\\beta}^{\\prime}\\sigma_{\\eta}^{ 2}. \\tag{27}\\] and so \\[\\mbox{Var}(\\hat{\\delta}) = \\mbox{Var}(\\hat{\\beta}\\hat{\\gamma}) \\tag{28}\\] \\[= E[\\mbox{Var}(\\hat{\\beta}\\hat{\\gamma}|\\varepsilon)]+\\mbox{Var}[E( \\hat{\\beta}\\hat{\\gamma}|\\varepsilon)]\\] (29) \\[= E[\\hat{\\beta}(Y^{\\prime}Y)^{-1}\\hat{\\beta}^{\\prime}]\\sigma_{\\eta }^{2}+\\gamma\\mbox{Var}(\\hat{\\beta})\\gamma^{\\prime}. \\tag{30}\\] where we have used a standard relation for disaggregating the variance: \\[\\mbox{var}(a)=E[\\mbox{var}(a|b)]+\\mbox{var}[E(a|b)] \\tag{31}\\] Using the facts that \\[E(Y^{\\prime}Y) = \\beta^{\\prime}X^{\\prime}X\\beta+n\\sigma_{\\varepsilon}^{2} \\tag{32}\\] \\[E(\\hat{\\beta}\\hat{\\beta}^{\\prime}) = \\beta\\beta^{\\prime}+(X^{\\prime}X)^{-1}\\sigma_{\\varepsilon}^{2} \\tag{33}\\] and approximating to second order: \\[\\mbox{Var}(\\hat{\\delta})=\\left[\\frac{\\beta^{2}+q^{2}}{\\beta^{2}+nq^{2}}\\right] (X^{\\prime}X)^{-1}\\sigma_{\\eta}^{2}+q^{2}\\gamma^{2}. \\tag{34}\\] where \\(q^{2}=(X^{\\prime}X)^{-1}\\sigma_{\\varepsilon}^{2}\\). ### Comparing the two estimators We are now in a position to compare the estimators for the direct and indirect methods. 
Subtracting equation 34 from equation 13 gives: \\[\\mathrm{Var}(\\delta^{\\dagger})-\\mathrm{Var}(\\hat{\\delta}) = (X^{\\prime}X)^{-1}(\\sigma_{\\varepsilon}^{2}\\gamma^{2}+\\sigma_{\\eta}^{2})-\\left[\\frac{\\beta^{2}+q^{2}}{\\beta^{2}+nq^{2}}\\right](X^{\\prime}X)^{-1}\\sigma_{\\eta}^{2}-(X^{\\prime}X)^{-1}\\sigma_{\\varepsilon}^{2}\\gamma^{2} \\tag{35}\\] \\[= (X^{\\prime}X)^{-1}\\sigma_{\\eta}^{2}-\\left[\\frac{\\beta^{2}+q^{2}}{\\beta^{2}+nq^{2}}\\right](X^{\\prime}X)^{-1}\\sigma_{\\eta}^{2} \\tag{36}\\] \\[= \\left(1-\\left[\\frac{\\beta^{2}+q^{2}}{\\beta^{2}+nq^{2}}\\right]\\right)(X^{\\prime}X)^{-1}\\sigma_{\\eta}^{2} \\tag{37}\\] \\[= \\left[\\frac{(n-1)q^{2}}{\\beta^{2}+nq^{2}}\\right](X^{\\prime}X)^{-1}\\sigma_{\\eta}^{2} \\tag{38}\\] The right hand side of this equation is clearly positive for \\(n>1\\). This indicates:

* that using the indirect method is an improvement on the direct method, at least up to our second-order approximations
* that if \\(\\frac{\\beta^{2}}{q^{2}}\\) is small or \\(\\sigma_{\\eta}^{2}\\) is large then using the indirect method provides a marked improvement over the direct approach

## 5 Conclusions

We have compared the likely performance of direct and indirect methods for predicting landfalling hurricane numbers from SST. The direct method is based on building a linear regression model directly from SST to landfalling hurricane numbers. The indirect method is based on building a regression model from SST to basin numbers, and then predicting landfalling numbers from basin numbers using a constant proportion.

First, we compare these two methods in the context of a reasonably realistic model, using simulations. We find that the indirect method is better than the direct method, but that the difference is small. Secondly, we compare the two methods in the context of a less realistic model in which all variables are normally distributed. For this model we are able to derive the interesting general result that the indirect method should _always_ be better.

Which method should we then use in practice? If we had to choose one method, our results seem to imply that we should choose the indirect method, since it is more accurate. The simulation results suggest, however, that the performance of the two methods is likely to be very close for the values of the parameters appropriate for hurricanes in the real world. Given the possibility of using two methods, we would use both, as alternative points of view. Ideally we would also be able to solve the more realistic model analytically, as we have done for the linear-normal case. We are working on that.

## References

* Binter et al. (2007) R Binter, S Jewson, and S Khare. Statistical modelling of the relationship between Main Development Region Sea Surface Temperature and Atlantic Basin hurricane numbers. _arXiv:physics/0701170_, 2007. RMS Internal Report E04a.
* Laepple et al. (2007) T Laepple, S Jewson, J Meagher, A O'Shay, and J Penzer. Five-year ahead prediction of Sea Surface Temperature in the Tropical Atlantic: a comparison of simple statistical methods. _arXiv:physics/0701162_, 2007.

Figure 1: Atlantic basin and landfalling hurricane numbers for the period 1950 to 2005 (solid lines), with the component of the variability that can be explained by SSTs (broken lines).

Figure 2: One realisation of simulated basin and landfalling hurricane numbers (solid lines), with the SST driven components (broken lines).

Figure 3: As in figure 2, but for a different realisation.
Figure 4: As in figure 2, but for a different realisation. Figure 5: As in figure 2, but for a different realisation.
We consider two ways that one might convert a prediction of sea surface temperature (SST) into a prediction of landfalling hurricane numbers. First, one might regress historical numbers of landfalling hurricanes onto historical SSTs, and use the fitted regression relation to predict future landfalling hurricane numbers given predicted SSTs. We call this the direct approach. Second, one might regress _basin_ hurricane numbers onto historical SSTs, estimate the proportion of basin hurricanes that make landfall, and use the fitted regression relation and estimated proportion to predict future landfalling hurricane numbers. We call this the _indirect_ approach. Which of these two methods is likely to work better? We answer this question for two simple models. The first model is reasonably realistic, but we have to resort to using simulations to answer the question in the context of this model. The second model is less realistic, but allows us to derive a general analytical result.
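To make the comparison described above concrete, here is a minimal, self-contained sketch of the kind of leave-one-out experiment used in the simulation-based analysis. It is an illustration only: the SST series, the SST-to-basin relationship and the landfall proportion below are invented placeholder values, not the fitted models of Binter et al. (2007) used in the actual study.

```python
import numpy as np

# Toy leave-one-out comparison of the direct and indirect methods.
# All model parameters are placeholder assumptions for illustration.
rng = np.random.default_rng(1)
years = 56
sst = rng.normal(27.0, 0.4, size=years)                  # synthetic SST series
basin_rate = np.maximum(0.1, -120.0 + 4.8 * sst)         # assumed SST-driven basin rate
p_land = 0.25                                            # assumed landfall proportion

def rmse_direct_indirect(n_sets=2000):
    err_d, err_i = [], []
    for s in range(n_sets):
        basin = rng.poisson(basin_rate)
        land = rng.binomial(basin, p_land)
        k = s % years                                    # year left out of the fit
        keep = np.arange(years) != k
        # direct: regress landfalls on SST
        a_d, b_d = np.polyfit(sst[keep], land[keep], 1)
        err_d.append(a_d * sst[k] + b_d - land[k])
        # indirect: regress basin numbers on SST, then apply the landfall proportion
        a_i, b_i = np.polyfit(sst[keep], basin[keep], 1)
        prop = land[keep].sum() / max(basin[keep].sum(), 1)
        err_i.append((a_i * sst[k] + b_i) * prop - land[k])
    return np.sqrt(np.mean(np.square(err_d))), np.sqrt(np.mean(np.square(err_i)))

print("RMSE (direct, indirect):", rmse_direct_indirect())
```

With placeholder numbers the output only shows the mechanics of the comparison; the RMSE values themselves have no scientific meaning.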
# Scattering of internal waves from small sea bottom inhomogeneities

A. D. Zakharenko

Il'ichev Pacific oceanological institute, Baltiyskay St. 43, Vladivostok, 41, 690041, Russia

## 1 Introduction

The concept of mode-to-mode scattering was considered in the context of acoustic scattering from small compact irregularities of the ocean floor by Wetton and Fawcett [1]. In their work some simple formulas for modal conversion coefficients, quantifying the amount of energy that is scattered from one normal mode of the sound field to another, were derived. Recently new formulas for these coefficients were obtained by Zakharenko [2] and applied to the inverse scattering problem in the subsequent work [3]. This paper contains the detailed derivation of such formulas in the case of scattering of linear internal waves from small compact sea bottom inhomogeneities. Some numerical examples are presented.

## 2 Formulation and derivation of the main result

We shall use the linearized equations for an inviscid, incompressible, stably stratified fluid, written for harmonic time dependence with the factor \\(e^{-i\\omega t}\\) in the form \\[\\begin{aligned} -i\\omega\\rho_{0}u+\\beta P_{x} &=0,\\\\ -i\\omega\\rho_{0}v+\\beta P_{y} &=0,\\\\ -i\\omega\\rho_{0}w+\\beta P_{z}+\\beta\\rho_{1} &=0,\\\\ -i\\omega\\rho_{1}+w\\rho_{0z} &=0,\\\\ u_{x}+v_{y}+w_{z} &=0,\\end{aligned} \\tag{1}\\] where \\(x\\), \\(y\\), and \\(z\\) are the Cartesian co-ordinates with the z-axis directed upward, \\(\\rho_{0}=\\rho_{0}(z)\\) is the undisturbed density, \\(\\rho_{1}=\\rho_{1}(x,y,z)\\) is the perturbation of density due to motion, \\(P\\) is the pressure, and \\(u\\), \\(v\\), \\(w\\) are the \\(x\\), \\(y\\) and \\(z\\) components of velocity, respectively. The variables are nondimensional, based on a length scale \\(\\bar{h}\\) (a typical vertical dimension), a time scale \\(\\bar{N}^{-1}\\) (where \\(\\bar{N}\\) is a typical value of the Brunt-Vaisala frequency), and a density scale \\(\\bar{\\rho}\\) (a typical value of the density). The parameter \\(\\beta\\) is \\(g/(\\bar{h}\\bar{N}^{2})\\), where \\(g\\) is the gravity acceleration. The boundary conditions for these equations are \\[\\begin{array}{ccl}w=0&\\mbox{at}&z=0,\\\\ w=-uH_{x}-vH_{y}&\\mbox{at}&z=-H,\\end{array} \\tag{2}\\] where \\(H=H(x,y)\\) is the bottom topography.

We introduce a small parameter \\(\\epsilon\\), and postulate that the components of velocity and the pressure are represented in the form \\[\\begin{array}{ccl}u=u_{0}+\\epsilon u_{1}+\\ldots,&&v=v_{0}+\\epsilon v_{1}+\\ldots,\\\\ w=w_{0}+\\epsilon w_{1}+\\ldots,&&P=P_{0}+\\epsilon P_{1}+\\ldots\\end{array}\\] We suppose also that the bottom topography is represented in the form \\(H=h_{0}+\\epsilon h_{1}\\), where \\(h_{0}\\) is constant and \\(h_{1}=h_{1}(x,y)\\) is a function of \\(x\\), \\(y\\) vanishing outside the bounded domain \\(\\Omega\\), which in the sequel is called the domain of inhomogeneity.
Excluding from the system (1) \\(\\rho_{1}\\) and substituting the introduced expansions, we obtain \\[-i\\omega\\rho_{0}(u_{0}+\\epsilon u_{1}+\\ldots)+\\beta(P_{0x}+\\epsilon P_{1x}+\\ldots)=0,\\] \\[-i\\omega\\rho_{0}(v_{0}+\\epsilon v_{1}+\\ldots)+\\beta(P_{0y}+ \\epsilon P_{1y}+\\ldots)=0,\\] \\[(\\omega^{2}\\rho_{0}+\\beta\\rho_{0z})(w_{0}+\\epsilon w_{1}+\\ldots)+ i\\omega\\beta(P_{0z}+\\epsilon P_{z1}+\\ldots)=0,\\] \\[(u_{0x}+\\epsilon u_{1x}+\\ldots)+(v_{0y}+\\epsilon v_{1y}+\\ldots)+ w_{0z}+\\epsilon w_{1z}+\\ldots=0,\\] with the boundary conditions \\[w_{0}+\\epsilon w_{1}+\\ldots=0\\quad\\mbox{at}\\quad z=0,\\] \\[w_{0}+\\epsilon w_{1}+\\ldots=-(u_{0}+\\epsilon u_{1}+\\ldots)(h_{x 0}+\\epsilon h_{1x}) \\tag{4}\\] \\[-\\epsilon(v_{0}+\\epsilon v_{1}+\\ldots)(h_{1y}+\\ldots)\\quad\\mbox{ at}\\quad z=-H.\\] Separating terms in various orders of \\(\\epsilon\\), we obtain a sequence of boundary problems. At order \\(O(\\epsilon^{0})\\) we have \\[-i\\omega\\rho_{0}u_{0}+\\beta P_{0x} =0,\\] \\[-i\\omega\\rho_{0}v_{0}+\\beta P_{0y} =0,\\] \\[(\\omega^{2}\\rho_{0}+\\beta\\rho_{0z})w_{0}+i\\omega\\beta P_{0z} =0,\\] \\[u_{0x}+v_{0y}+w_{0z} =0,\\] with the boundary conditions \\[w_{0}=0\\quad\\mbox{at $z=0$}\\] \\[w_{0}=0\\quad\\mbox{at $z=-h_{0}$}.\\] Differentiating the third equation in (5) twice with respect to \\(x\\) and twice with respect to \\(y\\), summing obtained equations and replacing \\(\\beta(P_{0zxx}+P_{0zyy})\\) by \\(-i\\omega(\\rho_{0}w_{0z})_{z}\\), we obtain \\[(\\omega^{2}\\rho_{0}+\\beta\\rho_{0z})(w_{0xx}+w_{0yy})+\\omega^{2}(\\rho_{0}w_{0z} )_{z}=0. \\tag{6}\\] We seek a solution to this equation in the form of the sum of normal modes \\(w_{0}=e^{i(kx+ly)}\\phi(z)\\), where \\(\\phi\\) is the eigenfunction of the spectral boundary problem \\[-(\\omega^{2}\\rho_{0}+\\beta\\rho_{0z})(k^{2}+l^{2})\\phi+\\omega^{2} (\\rho_{0}\\phi_{z})_{z} =0,\\] \\[\\phi(0)=\\phi(-h_{0}) =0, \\tag{7}\\] with the eigenvalue \\(\\lambda=k^{2}+l^{2}\\). It is well known that the problem (7) has countably many eigenvalues \\(\\lambda_{n}\\), which are all positive. The corresponding real eigenfunctions \\(\\phi_{n}\\) we normalize by the condition \\[-\\int_{-h_{0}}^{0}\\left(\\omega^{2}\\rho_{0}+\\beta\\rho_{0z}\\right)\\phi^{2}\\,dz= \\frac{\\omega^{2}}{k^{2}+l^{2}}\\int_{-h_{0}}^{0}\\rho_{0}(\\phi_{z})^{2}\\,dz=1. \\tag{8}\\]The eigenfunctions \\(\\phi_{n}\\) and \\(\\phi_{m}\\) with \\(n\ eq m\\) are also orthogonal \\[(\\phi_{n},\\phi_{m})=0 \\tag{9}\\] with respect to the inner product \\[(\\phi,\\psi)=-\\int_{-h_{0}}^{0}\\left(\\omega^{2}\\rho_{0}+\\beta\\rho_{0z}\\right)\\phi \\psi\\,dz \\tag{10}\\] In our scattering problem \\(w_{0}\\) is the incident field, and we shall calculate the main term of scattering field \\(w_{1}\\), so we act in the framework of the Born approximation. 
At the first order of \\(\\epsilon\\) we obtain the following system of equations: \\[\\begin{split}-i\\omega\\rho_{0}u_{1}+\\beta P_{1x}&=0,\\\\ -i\\omega\\rho_{0}v_{1}+\\beta P_{1y}&=0,\\\\ (\\omega^{2}\\rho_{0}+\\beta\\rho_{0z})w_{1}+i\\omega\\beta P_{1z}&=0,\\\\ u_{1x}+v_{1y}+w_{1z}&=0,\\end{split} \\tag{11}\\] with the boundary conditions \\[\\begin{split} w_{1}=0\\quad\\text{at}\\quad z=0,\\\\ w_{1}=-u_{0}h_{1x}-v_{0}h_{1y}\\quad\\text{at}\\quad z=-h_{0}-\\epsilon h_{1}.\\end{split} \\tag{12}\\] Since we are interested in the connection between the modal contents of the incident and scattered fields, we suppose that the incident field consists of a single mode \\(w_{0}=e^{i(k_{n}x+l_{n}y)}\\phi_{n}(z)\\). Reducing the second boundary condition (12) to the boundary \\(z=-h_{0}\\) and taking into account the explicit form of \\(w_{0}\\), we obtain the new boundary condition for \\(w_{1}\\) at the boundary \\(z=-h_{0}\\): \\[w_{1}=\\left(h_{1}-\\frac{ik_{n}}{k_{n}^{2}+l_{n}^{2}}h_{1x}-\\frac{il_{n}}{k_{n}^{2}+l_{n}^{2}}h_{1y}\\right)e^{i(k_{n}x+l_{n}y)}\\phi_{nz}. \\tag{13}\\] Reducing the system (11) in the same manner as was done for the system (5), we obtain the equation for \\(w_{1}\\): \\[(\\omega^{2}\\rho_{0}+\\beta\\rho_{0z})(w_{1xx}+w_{1yy})+\\omega^{2}(\\rho_{0}w_{1z})_{z}=0. \\tag{14}\\] We seek the scattered field in the form \\(w_{1}=\\sum_{m=1}^{N}C_{nm}(x,y)\\phi_{m}\\); the functions \\(C_{nm}(x,y)\\) are called the modal conversion coefficients. To obtain the equation for \\(C_{nm}\\) we substitute the postulated form of \\(w_{1}\\) into (14), multiply it by the function \\(\\phi_{m}\\) and integrate from \\(-h_{0}\\) to \\(0\\). Using the conditions of orthogonality and normalization (8), (9) and the boundary condition (13), we finally obtain \\[\\frac{\\partial^{2}}{\\partial x^{2}}C_{nm}+\\frac{\\partial^{2}}{\\partial y^{2}}C_{nm}+(k_{m}^{2}+l_{m}^{2})C_{nm}=F, \\tag{15}\\] where \\[F=\\omega^{2}\\rho_{0}\\left(h_{1}-\\frac{ik_{n}}{k_{n}^{2}+l_{n}^{2}}h_{1x}-\\frac{il_{n}}{k_{n}^{2}+l_{n}^{2}}h_{1y}\\right)e^{i(k_{n}x+l_{n}y)}\\phi_{nz}(-h_{0})\\phi_{mz}(-h_{0}).\\] Writing the solution to the equation (15) as the convolution of the fundamental solution (Green function) of the Helmholtz operator \\(G=(-i/4)H_{0}^{(1)}(\\sqrt{k_{m}^{2}+l_{m}^{2}}R)\\) with the right-hand side \\(F\\), we have \\[C_{nm}(x_{r},y_{r})=-\\frac{i}{4}\\int\\limits_{x}\\int\\limits_{y}FH_{0}^{(1)}(\\sqrt{k_{m}^{2}+l_{m}^{2}}R)\\,dy\\,dx, \\tag{16}\\] where \\(R=\\sqrt{(x-x_{r})^{2}+(y-y_{r})^{2}}\\) and by the index \\(r\\) we designate the point of registration of the field. Integrating by parts the terms containing \\(h_{1x},h_{1y}\\) and passing to the cylindrical coordinate system with the origin in our domain of inhomogeneity and such that \\(k_{n}=\\kappa_{n}\\), \\(l_{n}=0\\), \\(x=r\\cos\\alpha\\), \\(y=r\\sin\\alpha\\), we obtain \\[C_{nm}=-\\frac{1}{4}\\frac{\\kappa_{m}}{\\kappa_{n}}G\\int\\limits_{0}^{\\infty}\\int\\limits_{0}^{2\\pi}h_{1}e^{i\\kappa_{n}r\\cos\\alpha}\\cos(\\psi-\\alpha_{r})H_{1}^{(1)}(\\kappa_{m}R)r\\,d\\alpha dr, \\tag{17}\\] where \\(G=\\omega^{2}\\rho_{0}\\phi_{nz}(-h_{0})\\phi_{mz}(-h_{0})\\), \\(R=\\sqrt{r^{2}+r_{r}^{2}-2rr_{r}\\cos(\\alpha-\\alpha_{r})}\\), \\((r_{r},\\alpha_{r})\\) are the polar coordinates of the registration point, and \\(\\tan(\\psi)=r\\sin(\\alpha-\\alpha_{r})/(r_{r}-r\\cos(\\alpha-\\alpha_{r}))\\).
Using the addition theorem for the Bessel functions we express contained in (17) \\(\\cos\\psi H_{1}^{(1)}(\\kappa_{m}R)\\) and \\(\\sin\\psi H_{1}^{(1)}(\\kappa_{m}R)\\) in the form: \\[\\left\\{\\begin{matrix}\\cos(\\psi)\\\\ \\sin(\\psi)\\end{matrix}\\right\\}H_{1}^{(1)}(\\kappa_{m}R)=\\sum\\limits_{k=-\\infty} ^{\\infty}H_{k+1}^{(1)}(\\kappa_{m}r_{r})J_{k}(\\kappa_{m}r)\\left\\{\\begin{matrix} \\cos k(\\alpha-\\alpha_{r})\\\\ \\sin k(\\alpha-\\alpha_{r})\\end{matrix}\\right\\}\\,.\\] From now on we shall assume that the distance \\(r_{r}\\) to the registration point is big enough to replace the functions \\(H_{k+1}^{(1)}(\\kappa_{m}r_{r})\\) by their asymptotics \\[H_{k+1}^{(1)}(\\kappa_{m}r_{r})\\approx\\sqrt{2/(\\pi\\kappa_{m}r_{r})}\\exp\\left[i (\\kappa_{m}r_{r}-(\\pi/2)(k+1)-\\pi/4)\\right].\\] Then, expanding \\(h_{1}(r,\\alpha)\\) as function of \\(\\alpha\\) in Fourier series with the coefficients \\(\\tilde{h}_{1\ u}(r)\\), after integration with respect to \\(\\alpha\\), we obtain \\[C_{nm}= \\frac{i\\sqrt{2\\pi}}{2}\\frac{\\sqrt{\\kappa_{m}}\\exp(i\\kappa_{m}r_{r}-i \\pi/4)}{\\kappa_{n}\\sqrt{r_{r}}}G\\cos\\alpha_{r}\\sum_{\ u=-\\infty}^{\\infty}(i)^{ \ u}e^{-i\ u\\alpha_{0}} \\tag{18}\\] \\[\\times\\sum_{k=-\\infty}^{\\infty}e^{-ik\\alpha_{r}}\\int\\limits_{0}^ {\\infty}\\tilde{h}_{1\ u}(r)J_{k}(\\kappa_{m}r)J_{\ u+k}(\\kappa_{n}r)r\\,dr\\] Changing the order of integration and summation we can achieve further simplification by using the formula \\[\\sum_{k=-\\infty}^{\\infty}J_{k}(\\kappa_{m}r)J_{\ u+k}(\\kappa_{n}r)e^{-ik\\alpha _{r}}=J_{\ u}(\\xi r)e^{-i\ u\\theta},\\] where \\(\\xi=\\sqrt{\\kappa_{m}^{2}+\\kappa_{n}^{2}-2\\varkappa_{m}\\varkappa_{n}\\cos \\alpha_{r}}\\), \\(\\theta=\\arctan\\frac{\\kappa_{m}\\sin\\alpha_{r}}{\\kappa_{n}-\\kappa_{m}\\cos \\alpha_{r}}\\). We expand now the radial coefficients \\(\\tilde{h}_{1\ u}(r)\\) on the segment \\([0,L]\\), where they do not vanish, in the Fourier-Bessel series \\[\\tilde{h}_{1\ u}(r)=\\sum_{p=1}^{\\infty}f_{p}^{\ u}J_{\ u}\\left(\\frac{\\gamma_{ p}^{\ u}}{L}r\\right)\\,,\\] where \\(\\gamma_{p}^{\ u}\\) are the positive roots of the function \\(J_{\ u}\\), \\(J_{\ u}(\\gamma_{p}^{\ u})=0\\). Substituting this expansion in (18) and taking into account that \\[\\int\\limits_{0}^{L}J_{\ u}\\left(\\frac{\\gamma_{p}^{\ u}}{L}r\\right)J_{\ u}(\\xi r )r\\,dr=\\frac{-L^{2}\\gamma_{p}^{\ u}J_{\ u}(\\xi L)J_{\ u}^{\\prime}(\\gamma_{p}^{ \ u})}{\\gamma_{p}^{\ u\\,2}-\\xi^{2}L^{2}},\\] we obtain the final expression for modal conversion coefficients \\[C_{nm}= -\\frac{iL^{2}\\sqrt{2\\pi}}{2}\\frac{\\sqrt{\\kappa_{m}}\\exp(i\\kappa_{m}r_{ r}-i\\pi/4)}{\\kappa_{n}\\sqrt{r_{r}}}G\\cos\\alpha_{r} \\tag{19}\\] \\[\\times\\sum_{\ u=-\\infty}^{\\infty}(i)^{\ u}J_{\ u}(\\xi L)e^{-i\ u (\\alpha_{0}+\\theta)}\\sum_{p=1}^{\\infty}f_{p}^{\ u}\\frac{\\gamma_{p}^{\ u}J_{ \ u}^{\\prime}(\\gamma_{p}^{\ u})}{\\gamma_{p}^{\ u\\,2}-\\xi^{2}L^{2}}\\,.\\] ## 3 Numerical examples For a model example we choose \\(\\rho=e^{-\\lambda z}\\),\\(\\beta=\\lambda^{-1}\\) and \\(H=1\\). Then the spectral boundary problem is written in the form \\[\\omega^{2}\\phi_{zz}-\\omega^{2}\\lambda\\phi_{z}-\\kappa^{2}(\\omega^{2 }-1)\\phi =0,\\] \\[\\phi(0)=0\\,,\\quad\\phi(-1) =0.\\] The eigenfunctions of such a problem are \\(\\phi=Ae^{\\lambda z/2}\\sin((l+1)\\pi z)\\) with the eigenvalues \\[\\kappa=\\frac{\\omega\\sqrt{(l+1)^{2}\\pi^{2}+\\lambda^{2}/4}}{\\sqrt{1-\\omega^{2}} }\\,.\\] Here \\(A=\\sqrt{2}/(\\sqrt{1-\\omega^{2}})\\) by the condition (9). 
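As a quick sanity check of these expressions, the spectral problem (7) can also be solved numerically and compared with the analytical eigenvalues above. The following sketch is only an illustration: the finite-difference discretization, the grid size and the parameter values are assumptions made for the example (they anticipate the values used in the calculations below).

```python
import numpy as np
from scipy.linalg import eigh

# Finite-difference check of the spectral problem (7) for the exponential profile
# rho_0(z) = exp(-lam*z), beta = 1/lam, for which beta*rho_0z = -rho_0.
# omega, lam, the grid size N and the number of modes are assumed illustrative values.
def modal_wavenumbers(omega=0.5, lam=0.003, h0=1.0, nmodes=3, N=800):
    dz = h0 / (N + 1)
    zi = -h0 + dz * np.arange(1, N + 1)          # interior grid points; phi(-h0) = phi(0) = 0
    rho = np.exp(-lam * zi)
    rho_up = np.exp(-lam * (zi + 0.5 * dz))      # rho_0 at half grid points
    rho_dn = np.exp(-lam * (zi - 0.5 * dz))
    # A phi = omega^2 (rho_0 phi_z)_z   and   B phi = (omega^2 rho_0 + beta rho_0z) phi
    A = (np.diag(rho_up[:-1], 1) + np.diag(rho_dn[1:], -1)
         - np.diag(rho_up + rho_dn)) * omega**2 / dz**2
    B = np.diag(rho * (omega**2 - 1.0))
    eigs = eigh(-A, -B, eigvals_only=True)       # generalized eigenproblem; -B is positive definite
    return np.sqrt(eigs[:nmodes])                # kappa = sqrt(k^2 + l^2) for the gravest modes

omega, lam = 0.5, 0.003
print("numerical kappa :", modal_wavenumbers(omega, lam))
print("analytical kappa:", [omega * np.sqrt((l + 1)**2 * np.pi**2 + lam**2 / 4) / np.sqrt(1 - omega**2)
                            for l in range(3)])
```

For the parameters used below (\\(\\omega=0.5\\), \\(\\lambda=0.003\\)) the two sets of values agree to the accuracy of the discretization.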
For the calculations, the value of the parameter \\(\\lambda\\) was taken equal to \\(0.003\\), which corresponds to a typical stratification in ocean shelf zones. The domain of inhomogeneity has the form of an ellipse whose semi-axes \\(a\\) and \\(b\\) were taken in the proportion \\(a:b=2:1\\), and in this region \\[h_{1}(x,y)=0.05\\sqrt{1-\\frac{x^{2}}{a^{2}}-\\frac{y^{2}}{b^{2}}}\\,.\\] The figure presents the results of calculations with \\(\\omega=0.5\\) and the angle of the incident field \\(\\alpha_{0}=0\\), conducted for various wave sizes \\(\\kappa a\\) of the scatterer. We note that, consistently with the meaning of the small parameter \\(\\epsilon\\), in these calculations \\(\\epsilon=0.05\\). For the presentation of the results we use the scattering amplitude \\[F_{nm}(\\alpha_{r})=\\left(\\frac{e^{i\\kappa_{m}r_{r}}}{\\sqrt{r_{r}}}\\right)^{-1}C_{nm}(\\alpha_{r})\\,.\\]

## References

* [1]_Wetton, B. T. R., Fawcett, J. A._ Scattering from small three-dimensional irregularities in the ocean floor. J. Acoust. Soc. Am., vol. 85 (1989), No 4, pp. 1482-1488.
* [2]_Zakharenko, A. D._ Sound scattering by small compact inhomogeneities in a sea waveguide. Acoustical Physics, vol. 46 (2000), pp. 160-163.
* [3]_Zakharenko, A. D._ Sound scattering by small compact inhomogeneities in a sea waveguide. Acoustical Physics, vol. 46 (2000), pp. 160-163.

Figure 1: Absolute value of scattering amplitude: \\(\\kappa a=1\\) (a,b), \\(\\kappa a=2\\) (c,d), \\(\\kappa a=8\\) (f,g)
The problem of scattering of linear internal waves from small compact sea bottom inhomogeneities is considered from the point of view of mode-to-mode scattering. A simple formula for the modal conversion coefficients \\(C_{nm}\\), quantifying the amount of energy that is scattered into the \\(m\\)-th mode from an incident field in the \\(n\\)-th mode, is derived. In this formula the inhomogeneity is represented by its expansions into Fourier and Fourier-Bessel series with respect to the angular and radial coordinates, respectively. Results of calculations, performed in a simple model case, are presented. The obtained formula can be used for a formulation of the inverse problem, as was done in the acoustic case [2, 3]. Keywords: internal wave, scattering
# Large time existence for \\(3d\\) water-waves and asymptotics

Borys Alvarez-Samaniego, David Lannes

Universite Bordeaux I; IMB and CNRS UMR 5251, 351 Cours de la Liberation, 33405 Talence Cedex, France

[email protected]; [email protected]

## 1 Introduction

### General setting

The motion of a perfect, incompressible and irrotational fluid under the influence of gravity is described by the free surface Euler (or water-waves) equations. Their complexity led physicists and mathematicians to derive simpler sets of equations likely to describe the dynamics of the water-waves equations in some specific physical regimes. In fact, many of the most famous equations of mathematical physics were historically obtained as formal asymptotic limits of the water-waves equations: the shallow-water equations, the Korteweg-de Vries (KdV) and Kadomtsev-Petviashvili (KP) equations, the Boussinesq systems, etc. Each of these asymptotic limits corresponds to a very specific physical regime whose range of validity is determined in terms of the characteristics of the flow (amplitude, wavelength, anisotropy, bottom topography, depth, etc.). The derivation of these models goes back to the XIXth century, but the rigorous analysis of their relevance as approximate models for the water-waves equations only began three decades ago with the works of Ovsjannikov [40; 41], Craig [13], and Kano and Nishida [27; 28; 26], who first addressed the problem of justifying the formal asymptotics. For all the different asymptotic models, the problem can be formulated as follows: 1) do the water-waves equations have a solution on the time scale relevant for the asymptotic model, and 2) does this model furnish a good approximation of the solution? Answering the first question requires a large-time existence theorem for the water-waves equations, while the second one requires a rigorous derivation of the asymptotic models and a precise control of the approximation error.

Following the pioneering works for one-dimensional surfaces (\\(1DH\\)) of Ovsjannikov [41] and Nalimov [38] (see also Yosihara [50; 51]), Craig [13], and Kano and Nishida [27] provided the first justification of the KdV and \\(1DH\\) Boussinesq and shallow water approximations. However, the incomplete understanding of the well-posedness theory for the water-waves equations hindered the prospect of justifying the other asymptotic regimes until the breakthroughs of S. Wu ([48] and [49] respectively for the \\(1DH\\) and \\(2DH\\) case, in infinite depth, and without restrictive assumptions). Since then, the literature on free surface Euler equations has been very active: the case of finite depth was proved in [29]; for the related problem of the free surface of a liquid in vacuum with zero gravity, see Lindblad [34; 35]. More recently, Coutand and Shkoller [12] and Shatah and Zeng [45] managed to remove the irrotationality condition and/or to take into account surface tension effects (see also [3] for \\(1DH\\) water-waves with surface tension).

In order to review the existing results on the rigorous justification of asymptotic models for water-waves, it is convenient to classify the different physical regimes using two dimensionless numbers: the amplitude parameter \\(\\varepsilon\\) and the shallowness parameter \\(\\mu\\) (defined below in (1.2)):

* Shallow-water, large amplitude (\\(\\mu\\ll 1\\), \\(\\varepsilon\\sim 1\\)).
Formally, this regime leads at first order to the well-known "shallow-water equations" (or Saint-Venant) and at second order to the so-called "Green-Naghdi" model, often used in coastal oceanography because it takes into account the dispersive effects neglected by the shallow-water equations. The first rigorous justification of the shallow-water model goes back to Ovsjannikov [40; 41] and Kano and Nishida [27], who proved the convergence of the solutions of the shallow-water equations to solutions of the water-waves equations as \\(\\mu\\to 0\\) in \\(1DH\\), and under some restrictive assumptions (small and analytic data). More recently, Y. A. Li [33] removed these assumptions and rigorously justified the shallow-water and Green-Naghdi equations, in \\(1DH\\) and for flat bottoms. Finally, the first and so far only rigorous result on a \\(2DH\\) asymptotic model is due to a very recent work by T. Iguchi [24], in which he justified the \\(2DH\\) shallow-water equations, also allowing non-flat bottoms, but under a restrictive zero-mass assumption on the velocity.
* Shallow water, medium amplitude (\\(\\mu\\ll 1\\), \\(\\varepsilon\\sim\\sqrt{\\mu}\\)). This regime leads to the so-called Serre equations, which are quite similar to the aforementioned Green-Naghdi equations and are also often used in coastal oceanography. To our knowledge, no rigorous result exists on that model.
* Shallow water, small amplitude (\\(\\mu\\ll 1\\), \\(\\varepsilon\\sim\\mu\\)). This regime (also called the long-waves regime) leads to many mathematically interesting models due to the balance of nonlinear and dispersive effects:
* Boussinesq systems: since the first derivation by Boussinesq, many formally equivalent models (also named after Boussinesq) have been derived. W. Craig [13] and Kano and Nishida [28] were the first to give a full justification of these models, in \\(1DH\\) (and for flat bottoms and small data). Note, however, that the convergence result of [28] is given on a time scale too short to capture the nonlinear and dispersive effects specific to the Boussinesq systems; in [13], the correct _large time_ existence (and convergence) results for the water-waves equations are given. The proof of such a large-time well-posedness result for the water-waves equations is the most delicate point in the justification process. Furthermore, it is the last step needed to fully justify the Boussinesq systems in \\(2DH\\), owing to [5] (flat bottoms) and [9] (general bottom topography), where the convergence property is proved _assuming_ that the large-time well-posedness theorem holds.
* Uncoupled models: at first order, the Boussinesq systems reduce to a simple wave equation and, in \\(1DH\\), the motion of the free surface can be described as the sum of two uncoupled counter-propagating waves, slightly modulated by a Korteweg-de Vries (KdV) equation. In \\(2DH\\) and for weakly transverse waves, a similar phenomenon occurs, but with the Kadomtsev-Petviashvili (KP) equation replacing the KdV equation. Many papers addressed the problem of validating the KdV model (e.g. [13; 28; 43; 5; 47; 23]) and its justification is now complete. For the KP model, a first attempt was done in [26], under restrictive assumptions (small and analytic data), but as in [27], the time scale considered is unfortunately too small for the relevant dynamics.
A series of works then proved the KP limit for simplified systems and toy models [20; 4; 42], while a different approach was used in [32], where the KP limit is proved for the full water-waves equations, _assuming_ a large-time well-posedness theorem and a specific control of the solutions.
* Deep-water, small steepness (\\(\\mu\\geq 1\\), \\(\\varepsilon\\sqrt{\\mu}\\ll 1\\)). This regime leads to the full-dispersion (or Matsuno) equations; to our knowledge, no rigorous result exists on this point.

Instead of developing an existence/convergence theory for each physical scaling, we hereby propose a global method which allows one to justify all the asymptotics mentioned above at once. In order to do that, we nondimensionalize the water-waves equations, and keep track of the five physical quantities which characterize the dynamics of the water-waves: amplitude, depth, wavelength in the longitudinal direction, wavelength in the transverse direction and amplitude of the bottom variations. Our main theorem gives an estimate of the existence time of the solution of the water-waves equations which is _uniform_ with respect to _all_ these parameters. In order to prove this theorem, we introduce an energy which involves the aforementioned parameters and use it to construct our solution by an iterative scheme. Moreover, this energy provides some bounds on the solutions which appear to be exactly those needed in the justification of the asymptotic regimes mentioned above.

### Presentation of the results

Parameterizing the free surface by \\(z=\\zeta(t,X)\\) (with \\(X=(x,y)\\in\\mathbb{R}^{2}\\)) and the bottom by \\(z=-d+b(X)\\) (with \\(d>0\\) constant), one can use the incompressibility and irrotationality conditions to write the water-waves equations under Bernoulli's formulation, in terms of a velocity potential \\(\\phi\\) (i.e., the velocity field is given by \\(\\mathbf{v}=\\nabla_{X,z}\\phi\\)): \\[\\left\\{\\begin{array}{ll}\\partial_{x}^{2}\\phi+\\partial_{y}^{2}\\phi+\\partial_{z}^{2}\\phi=0,&-d+b\\leq z\\leq\\zeta,\\\\ \\partial_{n}\\phi=0,&z=-d+b,\\\\ \\partial_{t}\\zeta+\\nabla\\zeta\\cdot\\nabla\\phi=\\partial_{z}\\phi,&z=\\zeta,\\\\ \\partial_{t}\\phi+\\frac{1}{2}\\big{(}|\\nabla\\phi|^{2}+(\\partial_{z}\\phi)^{2}\\big{)}+\\zeta=0,&z=\\zeta,\\end{array}\\right. \\tag{1.1}\\] where \\(\\nabla=(\\partial_{x},\\partial_{y})^{T}\\) and \\(\\partial_{n}\\phi\\) is the outward normal derivative at the boundary of the fluid domain. The qualitative study of the water-waves equations is made easier by the introduction of dimensionless variables and unknowns. This requires the introduction of various orders of magnitude linked to the physical regime under consideration. More precisely, let us introduce the following quantities: \\(a\\) is the order of amplitude of the waves; \\(\\lambda\\) is the wavelength of the waves in the \\(x\\) direction; \\(\\lambda/\\gamma\\) is the wavelength of the waves in the \\(y\\) direction; \\(B\\) is the order of amplitude of the variations of the bottom topography. We also introduce the following dimensionless parameters \\[\\frac{a}{d}=\\varepsilon,\\quad\\frac{d^{2}}{\\lambda^{2}}=\\mu,\\quad\\frac{B}{d}=\\beta; \\tag{1.2}\\] the parameter \\(\\varepsilon\\) is often called the _nonlinearity_ parameter, while \\(\\mu\\) is the _shallowness_ parameter.
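For orientation, the following tiny computation shows how (1.2) sorts a concrete sea state into one of the regimes listed above; the dimensional values are assumptions chosen only as an example and are not taken from the paper.

```python
# Illustrative evaluation of the dimensionless parameters (1.2); the dimensional
# values (in metres) are assumed placeholder values.
a, d, lam, B = 0.1, 10.0, 100.0, 0.1   # wave amplitude, depth, wavelength, bottom variation
eps = a / d                             # nonlinearity parameter
mu = d**2 / lam**2                      # shallowness parameter
beta = B / d                            # bottom parameter
print(f"epsilon = {eps:.3f}, mu = {mu:.3f}, beta = {beta:.3f}")
# Here epsilon ~ mu ~ beta << 1: the shallow-water, small-amplitude (long-waves) regime.
```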
In total generality, one has \\[(\\varepsilon,\\mu,\\gamma,\\beta)\\in(0,1]\\times(0,\\infty)\\times(0,1]\\times[0,1] \\tag{1.3}\\] (the conditions \\(\\varepsilon\\in(0,1]\\) and \\(\\beta\\in[0,1]\\) mean that the the surface and bottom variations are at most of the order of depth --\\(\\beta=0\\) corresponding to flat bottoms-- and the condition \\(\\gamma\\in(0,1]\\) says that the \\(x\\) axis is chosen to be the longitudinal direction for weakly transverse waves). Zakharov [52] remarked that the system (1.1) could be written in Hamiltonian form in terms of the free surface elevation \\(\\zeta\\) and of the trace of the velocity potential at the surface \\(\\psi=\\phi_{|_{z=\\zeta}}\\) and Craig, Sulem and Sulem [17] and Craig, Schanz and Sulem [16] used the fact that (1.1) could be reduced to a system of two evolution equations on \\(\\zeta\\) and \\(\\psi\\); this formulation has commonly been used since then. The dimensionless form of this formulation involves the parameters introduced in (1.2), the transversity \\(\\gamma\\), and a parameter \\(\ u=(1+\\sqrt{\\mu})^{-1}\\) whose presence is due to the fact that the nondimensionalization is not the same in deep and shallow water. It is derived in Appendix A: \\[\\left\\{\\begin{aligned} &\\partial_{t}\\zeta-\\frac{1}{\\mu\ u} \\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\psi=0,\\\\ &\\partial_{t}\\psi+\\zeta+\\frac{\\varepsilon}{2\ u}|\ abla^{\\gamma} \\psi|^{2}-\\frac{\\varepsilon\\mu}{\ u}\\frac{(\\frac{1}{\\mu}\\mathcal{G}_{\\mu, \\gamma}[\\varepsilon\\zeta,\\beta b]\\psi+\\varepsilon\ abla^{\\gamma}\\zeta\\cdot \ abla^{\\gamma}\\psi)^{2}}{2(1+\\varepsilon^{2}\\mu|\ abla^{\\gamma}\\zeta|^{2})}= 0,\\end{aligned}\\right. \\tag{1.4}\\] where \\(\ abla^{\\gamma}=(\\partial_{x},\\gamma\\partial_{y})^{T}\\) and \\(\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\) is the Dirichlet-Neumann operator defined by \\(\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\psi=(1+\\varepsilon^{2}| \ abla\\zeta|^{2})^{1/2}\\partial_{n}\\Phi_{|_{z=\\varepsilon\\zeta}}\\), with \\(\\Phi\\) solving \\[\\left\\{\\begin{aligned} &\\partial_{z}^{2}\\Phi+\\mu\\partial_{x}^{2} \\Phi+\\gamma^{2}\\mu\\partial_{y}^{2}\\Phi=0,\\qquad-1+\\beta b<z<\\varepsilon\\zeta, \\\\ &\\Phi_{|_{z=\\varepsilon\\zeta}}=\\psi,\\qquad\\partial_{n}\\Phi_{|_{z=- 1+\\beta b}}=0.\\end{aligned}\\right. \\tag{1.5}\\] In Section 2, we give some preliminary results which will be used throughout the paper: a few technical results (such as commutator estimates) are given in SS2.1 and elliptic boundary value problems directly linked to (1.5) are studied in SS2.2. Section 3 is devoted to the study of various aspects of the Dirichlet-Neumann operator \\(\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\). It is well-known that the Dirichlet-Neumann operator is a pseudo-differential operator of order one; in particular, it acts continuously on Sobolev spaces and its operator norm, commutators with derivatives, etc., have been extensively studied. The task here is more delicate because of the presence of four parameters \\((\\varepsilon,\\mu,\\gamma,\\beta)\\) in the operator \\(\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\). Indeed, some of the classical estimates on the Dirichlet-Neumann are not uniform with respect to the parameters and must be modified. 
But the main difficulty is due to the fact that the energy introduced in this paper is not of Sobolev type; namely, it is given by \\[\\forall s\\geq 0,\\quad\\forall U=(\\zeta,\\psi),\\qquad|U|_{\\widetilde{X}^{s}}= \\big{|}\\zeta|_{H^{s}}+|\\frac{\ u^{-1/2}|D^{\\gamma}|}{(1+\\sqrt{\\mu}|D^{\\gamma} |)^{1/2}}\\psi\\big{|}_{H^{s}}, \\tag{1.6}\\] where \\(|D^{\\gamma}|:=(-\\partial_{x}^{2}-\\gamma^{2}\\partial_{y}^{2})^{1/2}\\). For high frequencies, this energy is equivalent to the \\(H^{s}\\times H^{s+1/2}\\)-norm specific to the non-strictly hyperbolic nature of the water-waves equations (see [14] for a detailed comment on this point), but the equivalence is not uniform with respect to the parameters, and the \\(H^{s}\\times H^{s+1/2}\\) estimates of [48; 49; 29; 3] for instance, are useless for our purposes here. We thus have to work with estimates in \\(|\\cdot|_{\\widetilde{X}^{s}}\\)-type norms and the classical results on Sobolev estimates of pseudodifferential operators cannot be used. Consequently, we must rely on the structural properties of the water-waves equations much more heavily than in the previous works quoted above. Fundamental properties of the DN operators are given in SS3.1, while commutator estimates and further properties are investigated in SS3.2 and SS3.3. We then give asymptotic expansions of \\(\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\psi\\) in terms of the parameters in SS3.4. Using the results of the previous sections, we study the Cauchy problem associated to the linearization of (1.4) in Section 4; the main energy estimate is given in Proposition 4.1. The full nonlinear equations are addressed in Section 5 and our main result is stated in Theorem 5.1; it gives a \"large-time\" (of order \\(O(\\varepsilon/\ u)\\)) existence result for the water-waves equations (1.4) and a bound on its energy (defined in (1.6)). The most important point is that this result is uniform with respect to _all_ the parameters \\((\\varepsilon,\\mu,\\gamma,\\beta)\\) satisfying (1.3) and such that the steepness \\(\\varepsilon\\sqrt{\\mu}\\) and the ratio \\(\\beta/\\varepsilon\\) remain bounded. The theorem also requires a classical _Taylor sign condition_ on the initial data; we give in Proposition 5.1 very simple sufficient conditions (involving in particular the \"anisotropic Hessian\" of the bottom parameterization \\(b\\)), which imply that the Taylor sign condition is satisfied. Both Theorem 5.1 and Proposition 5.1 can be used for all the physical regimes given in the previous section, and the solution they provide exists over a time scale relevant with respect to the dynamics of the asymptotic models. We can therefore study the asymptotic limits, which is done in Section 6. It is convenient to use the classification introduced previously to present our results (we also refer to [31] for an overview of the methods developed here): * Shallow-water, large amplitude (\\(\\mu\\ll 1\\), \\(\\varepsilon\\sim 1\\)). We justify in SS6.1.1 the shallow-water equations without the restrictive assumptions of [24] and previous works. For the Green-Naghdi model, we extend in SS6.1.2 the result of [33] to non-flat bottoms, and to two dimensional surfaces. * Shallow water, medium amplitude (\\(\\mu\\ll 1\\), \\(\\varepsilon\\sim\\sqrt{\\mu}\\)). We rigorously justify the Serre approximation over the relevant \\(O(1/\\sqrt{\\mu})\\) time scale in SS6.1.2. * Shallow water, small amplitude (\\(\\mu\\ll 1\\), \\(\\varepsilon\\sim\\mu\\)). 
* Boussinesq systems: In SS6.2, we fully justify all the Boussinesq systems in the open case of two-dimensional surfaces (flat or non-flat bottoms). * Uncoupled models: We complete the full justification of the KP approximation in SS6.3. * Deep-water, small steepness (\\(\\mu\\geq 1\\), \\(\\varepsilon\\sqrt{\\mu}\\ll 1\\)). We show in SS6.4.1 that the solutions of the full-dispersion model converge to exact solutions of the water-waves equations as the steepness goes to zero and give accurate error estimates. We also give in SS6.4.2 an estimate on the precision of a model used for the numerical computation of the water-waves equations (see [15] for instance). ### Notations - We use the generic notation \\(C(\\lambda_{1},\\lambda_{2},\\dots)\\) to denote a nondecreasing function of the parameters \\(\\lambda_{1},\\lambda_{2},\\dots\\) - The notation \\(a\\lesssim b\\) means that \\(a\\leq Cb\\), for some nonnegative constant \\(C\\) whose exact expression is of no importance (_in particular, it is independent of the small parameters involved_). - For all tempered distribution \\(u\\in\\mathfrak{S}^{\\prime}(\\mathbb{R}^{2})\\), we denote by \\(\\widehat{u}\\) its Fourier transform. - Fourier multipliers: For all rapidly decaying \\(u\\in\\mathfrak{S}(\\mathbb{R}^{2})\\) and all \\(f\\in C(\\mathbb{R}^{2})\\) with tempered growth, \\(f(D)\\) is the distribution defined by \\[\\forall\\xi\\in\\mathbb{R}^{2},\\qquad\\widehat{f(D)u}(\\xi)=f(\\xi)\\widehat{u}(\\xi); \\tag{1.7}\\] (this definition can be extended to wider spaces of functions). - We write \\(\\langle\\xi\\rangle=(1+|\\xi|^{2})^{1/2}\\) and \\(\\Lambda=\\langle D\\rangle\\). - For all \\(1\\leq p\\leq\\infty\\), \\(|\\cdot|_{p}\\) denotes the classical norm of \\(L^{p}(\\mathbb{R}^{2})\\) while \\(\\|\\cdot\\|_{p}\\) stands for the canonical norm of \\(L^{p}(\\mathcal{S})\\), with \\(\\mathcal{S}=\\mathbb{R}^{2}\\times(-1,0)\\). - For all \\(s\\in\\mathbb{R}\\), \\(H^{s}(\\mathbb{R}^{2})\\) is the classical Sobolev space defined as \\[H^{s}(\\mathbb{R}^{2})=\\{u\\in\\mathcal{S}^{\\prime}(\\mathbb{R}^{2}),|u|_{H_{s}}:= |A^{s}u|_{2}<\\infty\\}.\\] - For all \\(s\\in\\mathbb{R}\\), \\(\\|\\cdot\\|_{L^{\\infty}H^{s}}\\) denotes the canonical norm of \\(L^{\\infty}([-1,0];H^{s}(\\mathbb{R}^{2}))\\). - If \\(B\\) is a Banach space, then \\(|\\cdot|_{B,T}\\) stands for the canonical norm of \\(L^{\\infty}([0,T];B)\\). - For all \\(\\gamma>0\\), we write \\(\ abla^{\\gamma}=(\\partial_{x},\\gamma\\partial_{y})^{T}\\), so that \\(\ abla^{\\gamma}\\) coincides with the usual gradient when \\(\\gamma=1.\\) We also use the Fourier multiplier \\(|D^{\\gamma}|\\) defined as \\[|D^{\\gamma}|=\\sqrt{D_{x}^{2}+\\gamma^{2}D_{y}^{2}},\\] as well as the anisotropic divergence operator \\[\\mathrm{div}_{\\gamma}=(\ abla^{\\gamma})^{T}.\\] - We denote by \\(\\mathfrak{P}_{\\mu,\\gamma}\\) (or simply \\(\\mathfrak{P}\\) when no confusion is possible) the Fourier multiplier of order \\(1/2\\) \\[\\mathfrak{P}_{\\mu,\\gamma}(=\\mathfrak{P}):=\\frac{\ u^{-1/2}|D^{\\gamma}|}{(1+ \\sqrt{\\mu}|D^{\\gamma}|)^{1/2}}. 
\\tag{1.8}\\] - We write \\(X=(x,y)\\) and \\(\ abla_{X,z}=(\\partial_{x},\\partial_{y},\\partial_{z})^{T}\\); we also write \\[\ abla^{\\mu,\\gamma}=(\\sqrt{\\mu}\\partial_{x},\\gamma\\sqrt{\\mu}\\partial_{y}, \\partial_{z})^{T}.\\] - We use the condensed notation \\[A_{s}=B_{s}+\\left\\langle C_{s}\\right\\rangle_{s>\\underline{s}} \\tag{1.9}\\] to say that \\(A_{s}=B_{s}\\) if \\(s\\leq\\underline{s}\\) and \\(A_{s}=B_{s}+C_{s}\\) if \\(s>\\underline{s}\\). - By convention, we take \\[\\prod_{k=1}^{0}p_{k}=1\\quad\\text{ and }\\quad\\sum_{k=1}^{0}p_{k}=0. \\tag{1.10}\\] - When the notation \\(\\partial_{n}u_{|_{\\partial\\Omega}}\\) is used for boundary conditions of an elliptic equation of the form \\(\ abla_{X,z}\\cdot P\ abla_{X,z}u=h\\) in some open set \\(\\Omega\\), it stands for the _outward conormal derivative_ associated to this operator, namely, \\[\\partial_{n}u_{|_{\\partial\\Omega}}=\\mathbf{n}\\cdot P\ abla_{X,z}u_{|_{\\partial \\Omega}}, \\tag{1.11}\\] \\(\\mathbf{n}\\) standing for the _outward_ unit normal vector to \\(\\partial\\Omega\\). ## 2 Preliminary results ### Commutator estimates and anisotropic Poisson regularization We recall first the tame product and Moser estimates in Sobolev spaces: if \\(t_{0}>1\\) and \\(s\\geq 0\\), then \\(\\forall f\\in H^{s}\\cap H^{t_{0}}(\\mathbb{R}^{2}),\\forall g\\in H^{s}(\\mathbb{R}^{ 2})\\), \\[|fg|_{H^{s}}\\lesssim|f|_{H^{t_{0}}}|g|_{H^{s}}+\\langle|f|_{H^{s}}|g|_{H^{t_{0}} }\\rangle_{s>t_{0}} \\tag{2.1}\\] and, for all \\(F\\in C^{\\infty}(\\mathbb{R}^{n};\\mathbb{R}^{m})\\) such that \\(F(0)=0\\), \\[\\forall u\\in H^{s}(\\mathbb{R}^{2})^{n},\\ \\ F(u)\\in H^{s}(\\mathbb{R}^{2})^{m}\\ \\ \\text{ and }\\ \\ |F(u)|_{H^{s}}\\leq C(|u|_{\\infty})|u|_{H^{s}}. \\tag{2.2}\\] In the next proposition, we give tame commutator estimates. Proposition 2.1 (Ths. 3 and 6 of [30]): Let \\(t_{0}>1\\) and \\(-t_{0}<r\\leq t_{0}+1\\). Then, for all \\(s\\geq 0\\), \\(f\\in H^{t_{0}+1}\\cap H^{s+r}(\\mathbb{R}^{2})\\) and \\(u\\in H^{s+r-1}(\\mathbb{R}^{2})\\), \\[\\big{|}[\\Lambda^{s},f]u\\big{|}_{H^{r}}\\lesssim|\ abla f|_{H^{t_{0}}}|u|_{H^{s+ r-1}}+\\langle|\ abla f|_{H^{s+r-1}}|u|_{H^{t_{0}}}\\rangle_{s>t_{0}+1-r}\\,,\\] where we used the notation (1.9). One can deduce from the above proposition some commutator estimates useful in the present study. Corollary 2.1: Let \\(t_{0}>1\\), \\(s\\geq 0\\) and \\(\\gamma\\in(0,1]\\). Then: **i.** For all \\(0\\leq r\\leq t_{0}+1\\), \\(f\\in L^{\\infty}((-1,0);H^{s+r}\\cap H^{t_{0}+1}(\\mathbb{R}^{2}))\\) and \\(u\\in L^{2}((-1,0);H^{s+r-1}(\\mathbb{R}^{2}))\\), \\[\\|\\Lambda^{r}[\\Lambda^{s},f]u\\big{\\|}_{2} \\lesssim\\|f\\|_{L^{\\infty}H^{t_{0}+1}}\\|\\Lambda^{s+r-1}u\\|_{2}\\] \\[\\quad+\\big{\\langle}\\|f\\|_{L^{\\infty}H^{s+r}}\\|\\Lambda^{t_{0}}u\\| _{2}\\big{\\rangle}_{s>t_{0}+1-r}\\,.\\] Proof: For the first point, just remark that \\[\\big{[}\\Lambda^{s},\\text{div}_{\\gamma}(\\mathbf{v}\\cdot)\\big{]}u=\\big{[} \\Lambda^{s},\\text{div}_{\\gamma}(\\mathbf{v})\\big{]}u+\\big{[}\\Lambda^{s}, \\mathbf{v}\\big{]}\\cdot\ abla^{\\gamma}u,\\] and use Proposition 2.1 to obtain the result (recall that \\(\\gamma\\leq 1\\)). For the second point of the corollary, remark that for all \\(z\\in[-1,0]\\), \\[|\\Lambda[\\Lambda^{s},f]u(z)|_{2}\\lesssim|f(z)|_{H^{t_{0}+1}}|u(z)|_{H^{s}}+ \\langle|f(z)|_{H^{s+1}}|u(z)|_{H^{t_{0}}}\\rangle_{s>t_{0}}\\,,\\] as a consequence of Proposition 2.1 (with \\(r=1\\)). The corollary then follows easily. 
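The Fourier multiplier notation of (1.7), and in particular the operator \\(\\mathfrak{P}\\) of (1.8), can be made concrete with a small numerical experiment. The sketch below is only an illustration on a periodic grid (a discrete stand-in for \\(\\mathbb{R}^{2}\\)); the grid size, the values of \\(\\mu\\) and \\(\\gamma\\) and the test function are assumptions.

```python
import numpy as np

# A discrete illustration of the Fourier-multiplier notation (1.7)-(1.8): f(D) acts by
# multiplying Fourier coefficients by f(xi).  Grid, mu, gamma and test mode are assumptions.
N, L = 256, 2 * np.pi
mu, gamma = 0.1, 0.5
nu = 1.0 / (1.0 + np.sqrt(mu))
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
Dg = np.sqrt(KX**2 + gamma**2 * KY**2)                              # symbol of |D^gamma|
P_symbol = Dg / (np.sqrt(nu) * np.sqrt(1.0 + np.sqrt(mu) * Dg))     # symbol of P in (1.8)

def apply_multiplier(symbol, u):
    """Apply the Fourier multiplier with the given symbol, as in (1.7)."""
    return np.real(np.fft.ifft2(symbol * np.fft.fft2(u)))

# On the single Fourier mode cos(3x + 2y) the operator acts as multiplication by its
# symbol evaluated at the wavenumber (3, 2).
x = np.arange(N) * L / N
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(3 * X + 2 * Y)
Pu = apply_multiplier(P_symbol, u)
xi_g = np.sqrt(9 + gamma**2 * 4)
expected = xi_g / (np.sqrt(nu) * np.sqrt(1 + np.sqrt(mu) * xi_g))
print("max |P u - expected * u| =", np.max(np.abs(Pu - expected * u)))   # ~ machine precision
```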
Let us end this section with a result on anisotropic Poisson regularization (when \\(\\gamma=\\mu=1\\), the result below is just the standard gain of half a derivative of the Poisson regularization). **Proposition 2.2**.: Let \\(\\gamma\\in(0,1],\\)\\(\\mu>0\\) and \\(\\chi\\) be a smooth, compactly supported function and \\(u\\in\\mathfrak{S}^{\\prime}(\\mathbb{R}^{2}).\\) Define also \\(u^{\\dagger}:=\\chi(\\sqrt{\\mu}z|D^{\\gamma}|)u.\\) For all \\(s\\in\\mathbb{R},\\) if \\(u\\in H^{s-1/2}(\\mathbb{R}^{2}),\\) one has \\(\\Lambda^{s}u^{\\dagger}\\in L^{2}(\\mathcal{S})\\) and \\[c_{1}\\big{|}\\frac{1}{(1+\\sqrt{\\mu}|D^{\\gamma}|)^{1/2}}u\\big{|}_{H^{s}}\\leq\\| \\Lambda^{s}u^{\\dagger}\\|_{2}\\leq c_{2}\\big{|}\\frac{1}{(1+\\sqrt{\\mu}|D^{\\gamma }|)^{1/2}}u\\big{|}_{H^{s}}.\\] Moreover, for all \\(s\\in\\mathbb{R},\\) if \\(u\\in H^{s+1/2}(\\mathbb{R}^{2}),\\) one has \\(\\Lambda^{s}\ abla^{\\mu,\\gamma}u^{\\dagger}\\in L^{2}(\\mathcal{S})^{3}\\) and \\[c_{1}^{\\prime}\\big{|}\\frac{\\sqrt{\\mu}|D^{\\gamma}|}{(1+\\sqrt{\\mu}|D^{\\gamma}|)^ {1/2}}u\\big{|}_{H^{s}}\\leq\\|\\Lambda^{s}\ abla^{\\mu,\\gamma}u^{\\dagger}\\|_{2} \\leq c_{2}^{\\prime}\\big{|}\\frac{\\sqrt{\\mu}|D^{\\gamma}|}{(1+\\sqrt{\\mu}|D^{ \\gamma}|)^{1/2}}u\\big{|}_{H^{s}}.\\] In the above estimates, \\(c_{1},\\)\\(c_{2},\\)\\(c_{1}^{\\prime}\\) and \\(c_{2}^{\\prime}\\) are nonnegative constants which depend only on \\(\\chi.\\) Proof.: Write classically (with \\(|\\xi^{\\gamma}|=\\sqrt{\\xi_{1}^{2}+\\gamma^{2}\\xi_{2}^{2}}\\)), \\[\\|\\chi(\\sqrt{\\mu}z|D^{\\gamma}|)u\\|_{s,0}^{2} =\\int_{\\mathbb{R}^{2}}\\int_{-1}^{0}\\langle\\xi\\rangle^{2s}\\chi( \\sqrt{\\mu}z|\\xi^{\\gamma}|)^{ Under this assumption, one can define a diffeomorphism \\(S\\) mapping \\(\\mathcal{S}\\) onto the fluid domain \\(\\Omega\\): \\[S:\\begin{array}{ccc}\\mathcal{S}&\\rightarrow&\\Omega\\\\ (X,z)&\\mapsto S(X,z):=\\big{(}X,z+\\sigma(X,z)\\big{)},\\end{array}\\] with \\[\\sigma(X,z)=-\\beta zb(X)+\\varepsilon(z+1)\\zeta(X). \\tag{2.5}\\] **Remark 2.1**.: The mapping \\(\\sigma\\) used in (2.5) to define the diffeomorphism \\(S\\) is the most simple one can think of. If one wanted to have optimal estimates with respect to the fluid or bottom parameterization (but unfortunately not uniform with respect to the parameters), one should use instead _regularizing diffeormorphisms_ as in Prop. 2.13 of [29]. From Proposition 2.7 of [29], we know that the BVP (2.3) is equivalent to the BVP (recall that we use the convention (1.11) for normal derivatives), \\[\\left\\{\\begin{aligned} &\ abla_{X,z}\\cdot P[\\sigma]\ abla_{X,z} \\phi=0,\\qquad\\text{ in }\\mathcal{S},\\\\ &\\phi_{|_{z=0}}=\\psi,\\qquad\\partial_{n}\\phi_{|_{z=-1}}=0,\\end{aligned}\\right. 
\\tag{2.6}\\] with \\(\\phi=\\Phi\\circ S\\) and with the \\((2+1)\\times(2+1)\\) matrix \\(P[\\sigma]\\) given by \\[P[\\sigma]:=P_{\\mu,\\gamma}[\\sigma]=\\left(\\begin{array}{ccc}\\mu(1+\\partial_{z }\\sigma)&0&-\\mu\\partial_{x}\\sigma\\\\ 0&\\gamma^{2}\\mu(1+\\partial_{z}\\sigma)&-\\gamma^{2}\\mu\\partial_{y}\\sigma\\\\ -\\mu\\partial_{x}\\sigma&-\\gamma^{2}\\mu\\partial_{y}\\sigma&\\frac{1+\\mu( \\partial_{x}\\sigma)^{2}+\\gamma^{2}\\mu(\\partial_{y}\\sigma)^{2}}{1+\\partial_{z} \\sigma}\\end{array}\\right).\\] Remark also that it follows from the expression of \\(P[\\sigma]\\) that \\[\ abla_{X,z}\\cdot P[\\sigma]\ abla_{X,z}=\ abla^{\\mu,\\gamma}\\cdot(1+Q[\\sigma]) \ abla^{\\mu,\\gamma},\\] where \\[Q[\\sigma]:=Q_{\\mu,\\gamma}[\\sigma]=\\left(\\begin{array}{ccc}\\partial_{z}\\sigma &0&-\\sqrt{\\mu}\\partial_{x}\\sigma\\\\ 0&\\partial_{z}\\sigma&-\\gamma\\sqrt{\\mu}\\partial_{y}\\sigma\\\\ -\\sqrt{\\mu}\\partial_{x}\\sigma&-\\gamma\\sqrt{\\mu}\\partial_{y}\\sigma&\\frac{- \\partial_{z}\\sigma+\\mu(\\partial_{x}\\sigma)^{2}+\\gamma^{2}\\mu(\\partial_{y} \\sigma)^{2}}{1+\\partial_{z}\\sigma}\\end{array}\\right). \\tag{2.7}\\] Below we provide two important properties satisfied by \\(Q[\\sigma]\\). **Proposition 2.3**.: Let \\(t_{0}>1\\), \\(s\\geq 0\\), and \\(\\zeta,b\\in H^{t_{0}+1}\\cap H^{s+1}(\\mathbb{R}^{2})\\) be such that (2.4) is satisfied. Assume also that \\(\\sigma\\) is as defined in (2.5). Then: **i.** One has \\[\\|Q[\\sigma]\\|_{L^{\\infty}H^{s}}\\leq C\\big{(}\\frac{1}{h_{0}},\\|\ abla^{\\mu, \\gamma}\\sigma\\|_{L^{\\infty}H^{t_{0}}}\\big{)}\\|\ abla^{\\mu,\\gamma}\\sigma\\|_{L^{ \\infty}H^{s}}\\]and, when \\(\\sigma\\) is also time dependent, \\[\\|\\partial_{t}Q[\\sigma]\\|_{\\infty,T}\\leq C\\big{(}\\frac{1}{h_{0}},\\|\ abla^{\\mu, \\gamma}\\sigma\\|_{\\infty,T}\\big{)}\\|\ abla^{\\mu,\\gamma}\\partial_{t}\\sigma\\|_{ \\infty,T}.\\] **ii.** For all \\(j\\geq 1\\) and \\({\\bf h}\\in H^{t_{0}+1}\\cap H^{s+1}(\\mathbb{R}^{2})^{j}\\), and denoting by \\(Q^{(j)}[\\sigma]\\cdot{\\bf h}\\) the \\(j\\)-th derivative of \\(\\zeta\\mapsto Q[\\sigma]\\) in the direction \\({\\bf h}\\), one has \\[\\|Q^{(j)}[\\sigma]\\cdot{\\bf h}\\|_{L^{\\infty}H^{s}}\\leq\\big{(} \\frac{\\varepsilon}{\ u}\\big{)}^{j}C\\big{(}\\frac{1}{h_{0}},\\varepsilon\\sqrt{ \\mu},\\|\ abla^{\\mu,\\gamma}\\sigma\\|_{L^{\\infty}H^{t_{0}}}\\big{)}\\] \\[\\times\\Big{(}\\sum_{k=1}^{j}|h_{k}|_{H^{s+1}}\\prod_{l\ eq k}|h_{l} |_{H^{t_{0}+1}}+\\big{\\langle}(1+\\|\ abla^{\\mu,\\gamma}\\sigma\\|_{L^{\\infty}H^{s }})\\prod_{k=1}^{j}|h_{k}|_{H^{t_{0}+1}}\\big{\\rangle}_{s>t_{0}}\\Big{)}.\\] **iii.** The matrix \\(1+Q[\\sigma]\\) is coercive in the sense that \\[\\forall\\Theta\\in\\mathbb{R}^{2+1},\\qquad|\\Theta|^{2}\\lesssim k[\\sigma](1+Q[ \\sigma])\\Theta\\cdot\\Theta,\\] with \\[k[\\sigma]:=k_{\\mu,\\gamma}[\\sigma]=1+\\|\\partial_{z}\\sigma\\|_{\\infty}+\\frac{1}{h _{0}}\\Big{(}1+\\sqrt{\\mu}\\|\ abla^{\\gamma}\\sigma\\|_{\\infty}\\Big{)}^{2}.\\] Proof: The first two points follow directly from the tame product and Moser's estimate (2.1) and (2.2), and the explicit expression of \\(Q[\\sigma]\\). 
It is not difficult to see that \\((1+Q[\\sigma])\\Theta\\cdot\\Theta=\\frac{1}{1+\\partial_{z}\\sigma}|B\\Theta|^{2}\\), where \\[B=\\left(\\begin{array}{ccc}1+\\partial_{z}\\sigma&0&-\\sqrt{\\mu}\\partial_{x} \\sigma\\\\ 0&1+\\partial_{z}\\sigma&-\\gamma\\sqrt{\\mu}\\partial_{y}\\sigma\\\\ 0&0&1\\end{array}\\right).\\] The matrix \\(B\\) is invertible and its inverse is given by \\[B^{-1}=\\frac{1}{1+\\partial_{z}\\sigma}\\left(\\begin{array}{ccc}1&0&\\sqrt{\\mu} \\partial_{x}\\sigma\\\\ 0&1&\\gamma\\sqrt{\\mu}\\partial_{y}\\sigma\\\\ 0&0&1+\\partial_{z}\\sigma\\end{array}\\right).\\] Remark now that owing to (2.4), the mapping \\(\\sigma\\), as given by (2.5), satisfies \\((1+\\partial_{z}\\sigma)^{-1}\\leq h_{0}^{-1}\\), so that \\[\\sqrt{1+\\partial_{z}\\sigma}|B^{-1}|_{\\mathbb{R}^{3}\\mapsto\\mathbb{R}^{3}} \\lesssim\\sqrt{1+\\|\\partial_{z}\\sigma\\|_{\\infty}}+\\frac{1}{\\sqrt{h_{0}}}\\Big{(} 1+\\sqrt{\\mu}\\|\ abla^{\\gamma}\\sigma\\|_{\\infty}\\Big{)}.\\] Since \\(|B\\Theta||B^{-1}|_{\\mathbb{R}^{3}\\mapsto\\mathbb{R}^{3}}\\geq|\\Theta|\\), the third claim of the proposition follows. Since the Dirichlet condition in (2.6) can be 'lifted' in order to take homogeneous Dirichlet boundary condition, we are led to study the following class of elliptic BVPs: \\[\\left\\{\\begin{aligned} &\ abla^{\\mu,\\gamma}\\cdot(1+Q[\\sigma]) \ abla^{\\mu,\\gamma}u=\ abla^{\\mu,\\gamma}\\cdot\\mathbf{g},\\qquad\\text{ in }\\mathcal{S},\\\\ & u_{|_{z=0}}=0,\\qquad\\partial_{n}u_{|_{z=-1}}=-\\mathbf{e_{z}} \\cdot\\mathbf{g}_{|_{z=-1}},\\end{aligned}\\right. \\tag{2.8}\\] where, according to the notation (1.11), \\(\\partial_{n}u_{|_{z=-1}}\\) stands for \\[\\partial_{n}u_{|_{z=-1}}=-\\mathbf{e_{z}}\\cdot(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u _{|_{z=-1}}.\\] Before stating the main result of this section let us introduce a notation: **Notation 2.1**.: We generically write \\[M[\\sigma]:=C\\big{(}\\varepsilon\\sqrt{\\mu},\\frac{1}{h_{0}},\\|\ abla^{\\mu,\\gamma }\\sigma\\|_{L^{\\infty}H^{t_{0}+1}}\\big{)}, \\tag{2.9}\\] where, as usual, \\(C(\\cdot)\\) is a nondecreasing function of its arguments. **Proposition 2.4**.: Let \\(t_{0}>1\\), \\(s\\geq 0\\) and \\(\\zeta,b\\in H^{t_{0}+2}\\cap H^{s+1}(\\mathbb{R}^{2})\\) be such that (2.4) is satisfied, and let \\(\\sigma\\) be given by (2.5). Then for all \\(\\mathbf{g}\\in C([-1,0];H^{s}(\\mathbb{R}^{2})^{3})\\), there exists a unique variational solution \\(u\\in H^{1}(\\mathcal{S})\\) to the BVP (2.8) and \\[\\|\\Lambda^{s}\ abla^{\\mu,\\gamma}u\\|_{2}\\leq M[\\sigma]\\big{(}\\|\\Lambda^{s} \\mathbf{g}\\|_{2}+\\big{\\langle}\\|\ abla^{\\mu,\\gamma}\\sigma\\|_{L^{\\infty}H^{s}} \\big{\\|}\\Lambda^{t_{0}}\\mathbf{g}\\|_{2}\\big{\\rangle}_{s>t_{0}+1}\\big{)},\\] where \\(M[\\sigma]\\) is defined in (2.9). Proof.: The existence of the solution can be obtained with very classical tools and we therefore omit it. We thus focus our attention on the proof of the estimate. Let \\(\\chi(\\cdot)\\) be a smooth, compactly supported function such that \\(\\chi(\\xi)=1\\) in a neighborhood of \\(\\xi=0\\), and define \\(\\Lambda_{h}:=\\Lambda*\\chi(hD)\\). 
Using \\(\\Lambda_{h}^{2s}u\\) as test function in the variational formulation of (2.8), one gets \\[\\int_{\\mathcal{S}}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u\\cdot\ abla^{\\mu,\\gamma} \\Lambda_{h}^{2s}u=\\int_{\\mathcal{S}}\\mathbf{g}\\cdot\ abla^{\\mu,\\gamma}\\Lambda _{h}^{2s}u,\\] so that using the fact that \\(\\Lambda_{h}^{s}\\) is \\(L^{2}\\)-self-adjoint, one gets, with \\(v_{h}=\\Lambda_{h}^{s}u\\), \\[\\int_{\\mathcal{S}}\\Lambda_{h}^{s}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u\\cdot\ abla^ {\\mu,\\gamma}v_{h}=\\int_{\\mathcal{S}}\\Lambda_{h}^{s}\\mathbf{g}\\cdot\ abla^{\\mu,\\gamma}v_{h},\\] and thus \\[\\int_{\\mathcal{S}}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}v_{h}\\cdot\ abla^{\\mu,\\gamma }v_{h}=\\int_{\\mathcal{S}}\\bigl{(}\\Lambda_{h}^{s}\\mathbf{g}\\cdot\ abla^{\\mu, \\gamma}v_{h}-\\bigl{[}\\Lambda_{h}^{s},Q[\\sigma]\\bigr{]}\ abla^{\\mu,\\gamma}u \\cdot\ abla^{\\mu,\\gamma}v_{h}\\bigr{)}.\\]Thanks to the coercitivity property of Proposition 2.3, one gets \\[k[\\sigma]^{-1}\\|\\Lambda^{s}_{h}\ abla^{\\mu,\\gamma}u\\|_{2}\\lesssim\\|\\bigl{[}\\Lambda ^{s}_{h},Q[\\sigma]\\bigr{]}\ abla^{\\mu,\\gamma}u\\|_{2}+\\|\\Lambda^{s}_{h}{\\bf g}\\| _{2}; \\tag{2.10}\\] since \\([\\Lambda^{s},Q[\\sigma]]\\) is of order \\(s-1\\), the above estimates allows one to conclude, after letting \\(h\\) go to zero, that \\(\\Lambda^{s}\ abla^{\\mu,\\gamma}u\\in L^{2}({\\mathcal{S}})\\); more precisely, thanks to Corollary 2.1, one deduces \\[k[\\sigma]^{-1}\\|\\Lambda^{s}\ abla^{\\mu,\\gamma}u\\|_{2} \\lesssim\\|\\Lambda^{s}{\\bf g}\\|_{2}+\\bigl{\\|}Q[\\sigma]\\bigr{\\|}_{L^ {\\infty}H^{t_{0}+1}}\\bigl{\\|}\\Lambda^{s-1}\ abla^{\\mu,\\gamma}u\\|_{2}\\] \\[\\quad+\\bigl{\\langle}\\|Q[\\sigma]\\|_{L^{\\infty}H^{s}}\\bigl{\\|} \\Lambda^{t_{0}}\ abla^{\\mu,\\gamma}u\\|_{2}\\bigr{\\rangle}_{s>t_{0}+1},\\] and thus \\[\\|\\Lambda^{s}\ abla^{\\mu,\\gamma}u\\|_{2}\\leq C\\bigl{(}k[\\sigma], \\bigl{\\|}Q[\\sigma]\\bigr{\\|}_{L^{\\infty}H^{t_{0}+1}}\\bigr{)} \\tag{2.11}\\] \\[\\quad\\times\\bigl{(}\\|\\Lambda^{s}{\\bf g}\\|_{2}+\\|\ abla^{\\mu,\\gamma }u\\|_{2}+\\bigl{\\langle}\\|Q[\\sigma]\\|_{L^{\\infty}H^{s}}\\bigl{\\|}\\Lambda^{t_{0}} \ abla^{\\mu,\\gamma}u\\|_{2}\\bigr{\\rangle}_{s>t_{0}+1}\\bigr{)}.\\] One also gets \\(\\|\ abla^{\\mu,\\gamma}u\\|_{2}\\leq k[\\sigma]\\|{\\bf g}\\|_{2}\\) from (2.10) after remarking that the commutator in the r.h.s. vanishes when \\(s=h=0\\); taking \\(s=t_{0}\\) in (2.11) then gives \\(\\|\\Lambda^{t_{0}}\ abla^{\\mu,\\gamma}u\\|_{2}\\leq C\\bigl{(}k[\\sigma],\\bigl{\\|}Q[ \\sigma]\\bigr{\\|}_{L^{\\infty}H^{t_{0}+1}}\\bigr{)}\\|\\Lambda^{t_{0}}{\\bf g}\\|_{2}\\), so that the r.h.s. of (2.11) is bounded from above by \\[C\\bigl{(}k[\\sigma],\\|Q[\\sigma]\\|_{L^{\\infty}H^{t_{0}+1}}\\bigr{)}\\bigl{(}\\| \\Lambda^{s}{\\bf g}\\|_{2}+\\bigl{\\langle}\\|Q[\\sigma]\\|_{L^{\\infty}H^{s}}\\bigl{\\|} \\Lambda^{t_{0}}{\\bf g}\\|_{2}\\bigr{\\rangle}_{s>t_{0}+1}\\bigr{)}.\\] The proposition follows therefore from Proposition 2.3. Before stating a corollary to Proposition 2.4, let us introduce a few notations: **Notation 2.2. 
i.** For all \\(u\\in H^{3/2}(\\mathbb{R}^{2})\\), we define \\(u^{\\flat}\\) as the solution to the BVP \\[\\begin{cases}\\nabla^{\\mu,\\gamma}\\cdot(1+Q[\\sigma])\\nabla^{\\mu,\\gamma}u^{\\flat}=0\\\\ u^{\\flat}_{|_{z=0}}=u,\\qquad\\partial_{n}u^{\\flat}_{|_{z=-1}}=0.\\end{cases} \\tag{2.12}\\] **ii.** For all \\(u\\in\\mathfrak{S}^{\\prime}(\\mathbb{R}^{2})\\), one defines \\(u^{\\dagger}\\) as \\[\\forall z\\in[-1,0],\\quad u^{\\dagger}(\\cdot,z)=\\chi(\\sqrt{\\mu}z|D^{\\gamma}|)u,\\] where \\(\\chi\\) is a smooth, compactly supported function such that \\(\\chi(0)=1\\). The following corollary gives some control on the extension mapping \\(u\\mapsto u^{\\flat}\\). **Corollary 2.2.** Let \\(t_{0}>1\\) and \\(s\\geq 0\\). Let also \\(\\zeta,b\\in H^{t_{0}+2}\\cap H^{s+1}(\\mathbb{R}^{2})\\) be such that (2.4) is satisfied, and \\(\\sigma\\) be given by (2.5). Then for all \\(u\\in H^{s+1/2}(\\mathbb{R}^{2})\\), there exists a unique solution \\(u^{\\flat}\\in H^{1}(\\mathcal{S})\\) to (2.12), and \\[\\|\\Lambda^{s}\\nabla^{\\mu,\\gamma}u^{\\flat}\\|_{2}\\leq\\sqrt{\\mu\\nu}M[\\sigma]\\bigl{(}|\\mathfrak{P}u|_{H^{s}}+\\bigl{\\langle}\\|\\nabla^{\\mu,\\gamma}\\sigma\\|_{L^{\\infty}H^{s}}|\\mathfrak{P}u|_{H^{t_{0}}}\\bigr{\\rangle}_{s>t_{0}+1}\\bigr{)},\\] with \\(\\mathfrak{P}\\) as defined in (1.8). Proof.: Looking for \\(u^{\\flat}\\) under the form \\(u^{\\flat}=v+u^{\\dagger}\\), with \\(u^{\\dagger}\\) given by Notation 2.2, one must solve \\[\\left\\{\\begin{aligned} &\\nabla^{\\mu,\\gamma}\\cdot(1+Q[\\sigma])\\nabla^{\\mu,\\gamma}v=-\\nabla^{\\mu,\\gamma}\\cdot(1+Q[\\sigma])\\nabla^{\\mu,\\gamma}u^{\\dagger},\\\\ & v|_{z=0}=0,\\qquad\\partial_{n}v|_{z=-1}=\\mathbf{e_{z}}\\cdot(1+Q[\\sigma])\\nabla^{\\mu,\\gamma}u^{\\dagger}|_{z=-1}.\\end{aligned}\\right. \\tag{2.13}\\] Applying Proposition 2.4 (with \\(\\mathbf{g}=-(1+Q[\\sigma])\\nabla^{\\mu,\\gamma}u^{\\dagger}\\)), one gets \\[\\|\\Lambda^{s}\\nabla^{\\mu,\\gamma}v\\|_{2}\\leq M[\\sigma]\\big{(}\\|\\Lambda^{s}\\mathbf{g}\\|_{2}+\\big{\\langle}\\|\\nabla^{\\mu,\\gamma}\\sigma\\|_{L^{\\infty}H^{s}}\\big{\\|}\\Lambda^{t_{0}}\\mathbf{g}\\|_{2}\\big{\\rangle}_{s>t_{0}+1}\\big{)},\\] and since \\(u^{\\flat}=u^{\\dagger}+v\\), the corollary follows from Proposition 2.2. **Remark 2.2**.: From the variational formulation of (2.13), one gets easily \\[\\|(1+Q[\\sigma])^{1/2}\\nabla^{\\mu,\\gamma}v\\|_{2}\\leq\\|(1+Q[\\sigma])^{1/2}\\nabla^{\\mu,\\gamma}u^{\\dagger}\\|_{2}.\\]
## 3 The Dirichlet-Neumann operator
As seen in the introduction, we define the Dirichlet-Neumann operator \\(\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\cdot\\) as \\[\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\psi=\\sqrt{1+|\\varepsilon\\nabla\\zeta|^{2}}\\partial_{n}\\Phi_{|_{z=\\varepsilon\\zeta}},\\] where \\(\\Phi\\) solves (2.3). Using Notation 2.2, one can give an alternate definition of \\(\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\cdot\\) (see Proposition 3.4 of [29]), namely, \\[\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\psi=\\partial_{n}\\psi^{\\flat}|_{z=0}\\quad\\big{(}=\\mathbf{e_{z}}\\cdot P[\\sigma]\\nabla_{X,z}\\psi^{\\flat}|_{z=0}\\big{)}.\\] More precisely, one has: **Proposition 3.1**.: Let \\(t_{0}>1\\), \\(s\\geq 0\\) and \\(\\zeta,b\\in H^{t_{0}+2}\\cap H^{s+1}(\\mathbb{R}^{2})\\) be such that (2.4) is satisfied, and let \\(\\sigma\\) be given by (2.5).
Then one can define the mapping \\(\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\cdot\\) (or simply \\(\\mathcal{G}[\\varepsilon\\zeta]\\cdot\\) when no confusion is possible) as \\[\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\,(=\\mathcal{G}[\\varepsilon\\zeta])\\,:\\,H^{s+1/2}(\\mathbb{R}^{2})\\begin{array}{rcl}\\to&H^{s-1/2}(\\mathbb{R}^{2})\\\\ u&\\mapsto&\\partial_{n}u^{\\flat}|_{z=0}\\end{array}.\\] Proof.: The extension \\(u^{\\flat}\\) is well-defined owing to Corollary 2.2. Moreover, we can use the definition of \\(P[\\sigma]\\) and \\(Q[\\sigma]\\) to see that \\[\\mathbf{e_{z}}\\cdot P[\\sigma]\\nabla_{X,z}u^{\\flat}=\\mathbf{e_{z}}\\cdot(1+Q[\\sigma])\\nabla^{\\mu,\\gamma}u^{\\flat}.\\] We will now show that it makes sense to take the trace of the above expression at \\(z=0\\). This is trivially true for \\(Q[\\sigma]\\), so that we are left with \\(u^{\\flat}\\). After a brief look at the proof of Corollary 2.2, and using the same notations, one gets \\(u^{\\flat}=v+u^{\\dagger}\\). Since one obviously has \\(\\nabla^{\\mu,\\gamma}u^{\\dagger}\\in C([-1,0];H^{s-1/2}(\\mathbb{R}^{2})^{3})\\), the trace \\(\\nabla^{\\mu,\\gamma}u^{\\dagger}|_{z=0}\\) makes sense. In order to prove that \\(\\nabla^{\\mu,\\gamma}v|_{z=0}\\) is also defined, remark that \\(v\\), which solves (2.13), satisfies \\(\\nabla^{\\mu,\\gamma}v\\in L^{2}((-1,0);H^{s}(\\mathbb{R}^{2})^{3})\\) and, using the equation, \\(\\partial_{z}\\nabla^{\\mu,\\gamma}v\\in L^{2}((-1,0);H^{s-1}(\\mathbb{R}^{2})^{3})\\). By the trace theorem, these two properties show that \\(\\nabla^{\\mu,\\gamma}v|_{z=0}\\in H^{s-1/2}(\\mathbb{R}^{2})^{3}\\).
### Fundamental properties
We begin this section with two basic properties of the Dirichlet-Neumann operator which play a key role in the energy estimates. **Proposition 3.2**.: Let \\(t_{0}>1\\) and \\(\\zeta,b\\in H^{t_{0}+2}(\\mathbb{R}^{2})\\) be such that (2.4) is satisfied. Then **i.** The Dirichlet-Neumann operator is self-adjoint: \\[\\forall u,v\\in H^{1/2}(\\mathbb{R}^{2}),\\qquad(u,\\mathcal{G}[\\varepsilon\\zeta]v)=(v,\\mathcal{G}[\\varepsilon\\zeta]u).\\] **ii.** One has \\[\\forall u,v\\in H^{1/2}(\\mathbb{R}^{2}),\\ \\ \\big{|}(u,\\mathcal{G}[\\varepsilon\\zeta]v)\\big{|}\\leq(u,\\mathcal{G}[\\varepsilon\\zeta]u)^{1/2}(v,\\mathcal{G}[\\varepsilon\\zeta]v)^{1/2}.\\] Proof.: Using Notation 2.2, one gets by Green's identity that \\[(u,\\mathcal{G}[\\varepsilon\\zeta]v) = \\int_{\\mathcal{S}}(1+Q[\\sigma])\\nabla^{\\mu,\\gamma}u^{\\flat}\\cdot\\nabla^{\\mu,\\gamma}v^{\\flat} \\tag{3.1}\\] \\[= \\int_{\\mathcal{S}}(1+Q[\\sigma])^{1/2}\\nabla^{\\mu,\\gamma}u^{\\flat}\\cdot(1+Q[\\sigma])^{1/2}\\nabla^{\\mu,\\gamma}v^{\\flat}, \\tag{3.2}\\] where \\((1+Q[\\sigma])^{1/2}\\) stands for the square root of the positive definite matrix \\((1+Q[\\sigma])\\) (note that the symmetry in \\(u\\) and \\(v\\) of the above expression proves the very classical first point of the proposition). It follows therefore from the Cauchy-Schwartz inequality that \\[(u,\\mathcal{G}[\\varepsilon\\zeta]v)\\leq\\big{\\|}(1+Q[\\sigma])^{1/2}\\nabla^{\\mu,\\gamma}u^{\\flat}\\big{\\|}_{2}\\,\\big{\\|}(1+Q[\\sigma])^{1/2}\\nabla^{\\mu,\\gamma}v^{\\flat}\\big{\\|}_{2},\\] which yields the second point of the proposition, since one has \\[(u,\\mathcal{G}[\\varepsilon\\zeta]u)=\\big{\\|}(1+Q[\\sigma])^{1/2}\\nabla^{\\mu,\\gamma}u^{\\flat}\\big{\\|}_{2}^{2} \\tag{3.3}\\] (just take \\(u=v\\) in (3.2)).
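As an aside, not used in any of the proofs, the operator just introduced can be made completely explicit in the flat case \\(\\zeta=b=0\\), \\(\\gamma=1\\): with the present scaling, \\(\\mathcal{G}_{\\mu,1}[0,0]\\) is classically the Fourier multiplier \\(\\sqrt{\\mu}|D|\\tanh(\\sqrt{\\mu}|D|)\\); we recall this formula without proof and use it purely as an illustration. The following minimal Python sketch (one horizontal dimension, periodic grid; all function and variable names are ours) applies this multiplier and checks numerically the two properties of Proposition 3.2 in this particular case.

```python
# Minimal sketch (not from the paper): the flat-strip Dirichlet-Neumann operator
# G_{mu,1}[0,0] = sqrt(mu)|D| tanh(sqrt(mu)|D|), applied on a 2*pi-periodic grid.
import numpy as np

def G0(psi, mu, L=2 * np.pi):
    """Apply the flat-case DN operator (assumed multiplier formula) via FFT."""
    n = psi.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers
    symbol = np.sqrt(mu) * np.abs(xi) * np.tanh(np.sqrt(mu) * np.abs(xi))
    return np.real(np.fft.ifft(symbol * np.fft.fft(psi)))

def inner(u, v, L=2 * np.pi):
    """L^2 inner product on the periodic grid."""
    return np.sum(u * v) * L / u.size

if __name__ == "__main__":
    n, mu = 256, 0.1
    x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    u, v = np.cos(3 * x), np.sin(x) + 0.5 * np.cos(5 * x)
    Gu, Gv = G0(u, mu), G0(v, mu)
    # Proposition 3.2.i (flat case): (u, G v) = (v, G u)
    print(abs(inner(u, Gv) - inner(v, Gu)))          # ~ machine precision
    # Proposition 3.2.ii (flat case): |(u, G v)| <= (u, G u)^{1/2} (v, G v)^{1/2}
    print(abs(inner(u, Gv)) <= np.sqrt(inner(u, Gu) * inner(v, Gv)) + 1e-12)
```

The symmetry defect printed first is of the order of machine precision and the Cauchy-Schwartz-type bound holds, as expected for a real, even and nonnegative multiplier.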
The next proposition is related to the variational formula of Hadamard and gives a uniform control of the operator norm of the DN operator and its derivatives (recall that we use the convention (1.10) and that \\(d_{\\zeta}^{j}\\mathcal{G}[\\varepsilon\\cdot]u\\cdot\\mathbf{h}=\\mathcal{G}[ \\varepsilon\\zeta]u\\) when \\(j=0\\)). **Proposition 3.3**.: Let \\(t_{0}>1\\), \\(s\\geq 0\\) and \\(\\zeta,b\\in H^{t_{0}+2}\\cap H^{s+1}(\\mathbb{R}^{2})\\) be such that (2.4) is satisfied, and let \\(\\sigma\\) be given by (2.5). For all \\(u\\in H^{s+1/2}(\\mathbb{R}^{2})\\), \\(j\\in\\mathbb{N}\\) and \\(\\mathbf{h}\\in H^{t_{0}+2}\\cap H^{s+1}(\\mathbb{R}^{2})^{j}\\), one has \\[\\big{|}\\frac{1}{\\sqrt{\\mu}}d^{j}_{\\zeta}\\mathcal{G}[\\varepsilon \\cdot]u\\cdot\\mathbf{h}\\big{|}_{H^{s-1/2}}\\leq(\\frac{\\varepsilon}{\ u})^{j}M[ \\sigma]\\Big{(}\\big{|}\\mathfrak{P}u\\big{|}_{H^{s}}\\prod_{k=1}^{j}|h_{k}|_{H^{t_{ 0}+1}}\\] \\[\\quad+\\big{\\langle}(1+\\|\ abla^{\\mu,\\gamma}\\sigma\\|_{L^{\\infty}H^ {s}})\\big{|}\\mathfrak{P}u\\big{|}_{H^{t_{0}}}\\prod_{k=1}^{j}|h_{k}|_{H^{t_{0}+ 1}}\\big{\\rangle}_{s>t_{0}}\\] \\[\\quad+\\big{\\langle}\\big{|}\\mathfrak{P}u\\big{|}_{H^{t_{0}}}\\sum_{ k=1}^{j}|h_{k}|_{H^{s+1}}\\prod_{l\ eq k}|h_{l}|_{H^{t_{0}+1}}\\big{\\rangle}_{s>t_{0}} \\Big{)},\\] with \\(M[\\sigma]\\) as in (2.9) while \\(\\mathfrak{P}\\) is defined in (1.8). **Remark 3.1**.: When \\(j=0\\), the proposition gives a much more precise estimate on \\(|\\mathcal{G}[\\varepsilon\\zeta]u|_{H^{s-1/2}}\\) than Theorem 3.6 of [29], but requires \\(\\zeta\\in H^{s+1}(\\mathbb{R}^{2})\\) while \\(\\zeta\\in H^{s+1/2}(\\mathbb{R}^{2})\\) is enough, as shown in [29] through the use of regularizing diffeomorphisms. This lack of optimality in the \\(\\zeta\\)-dependence is the price to pay to obtain uniform estimates in terms of a \\(\\mathcal{E}^{s}(\\cdot)\\) rather than Sobolev-type norm. **Remark 3.2**.: The r.h.s. of the estimate given in the proposition (when \\(j=0\\)) is itself bounded from above by \\[M[\\sigma]\\big{(}|u|_{H^{s+1}}+\\big{\\langle}\\|\ abla^{\\mu,\\gamma}\\sigma\\|_{L^{ \\infty}H^{s}}|u\\big{|}_{H^{t_{0}+1}}\\big{\\rangle}_{s>t_{0}}\\big{)}.\\] Proof.: First remark that one has \\(\\Lambda^{s-1/2}v^{\\dagger}|_{z=0}=\\Lambda^{s-1/2}v\\) (with \\(v^{\\dagger}\\) as in Notation 2.2), so that one gets by Green's identity, \\[(\\Lambda^{s-1/2}\\mathcal{G}[\\varepsilon\\zeta]u,v) =(\\mathcal{G}[\\varepsilon\\zeta]u,\\Lambda^{s-1/2}v)\\] \\[=\\int_{\\mathcal{S}}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{\\flat}\\cdot \\Lambda^{s-1/2}\ abla^{\\mu,\\gamma}v^{\\dagger}\\] \\[=\\int_{\\mathcal{S}}\\Lambda^{s}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{ \\flat}\\cdot\\Lambda^{-1/2}\ abla^{\\mu,\\gamma}v^{\\dagger}. 
\\tag{3.4}\\] A Cauchy-Schwartz inequality then yields, \\[(\\Lambda^{s-1/2}\\mathcal{G}[\\varepsilon\\zeta]u,v)\\leq\\|\\Lambda^{s}(1+Q[\\sigma ])\ abla^{\\mu,\\gamma}u^{\\flat}\\|_{2}\\|\\Lambda^{-1/2}\ abla^{\\mu,\\gamma}v^{ \\dagger}\\|_{2}, \\tag{3.5}\\] and since it follows from the product estimate (2.1) that \\(\\|\\Lambda^{s}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{\\flat}\\|_{2}\\) is bounded from above by \\[(1+\\|Q[\\sigma]\\|_{L^{\\infty}H^{t_{0}}})\\|\\Lambda^{s}\ abla^{\\mu,\\gamma}u^{ \\flat}\\|_{2}+\\big{\\langle}\\|Q[\\sigma]\\|_{L^{\\infty}H^{s}}\\|\\Lambda^{t_{0}} \ abla^{\\mu,\\gamma}u^{\\flat}\\|_{2}\\big{\\rangle}_{s>t_{0}},\\]one can deduce from Propositions 2.2 and 2.3 that (recall that \\(\ u=\\frac{1}{1+\\sqrt{\\mu}}\\)), \\[(\\Lambda^{s-1/2}\\mathcal{G}[\\varepsilon\\zeta]u,v)\\leq\ u^{-1/2}M[ \\sigma]|v|_{2}\\] \\[\\times\\,\\Big{(}\\|\\Lambda^{s}\ abla^{\\mu,\\gamma}u^{\\flat}\\|_{2}+ \\big{\\langle}\\|\ abla^{\\mu,\\gamma}\\sigma\\|_{L^{\\infty}H^{s}}\\|\\Lambda^{t_{0}} \ abla^{\\mu,\\gamma}u^{\\flat}\\|_{2}\\big{\\rangle}_{s>t_{0}}\\Big{)},\\] and the proposition thus follows directly from Corollary 2.2 and a duality argument in the case \\(j=0\\). In the case \\(j\ eq 0\\), after differentiating (3.4), and using the same notation as in Proposition 2.3, one gets \\[(\\Lambda^{s-1/2}d^{j}_{\\zeta}\\mathcal{G}[\\varepsilon\\cdot]u\\cdot \\mathbf{h},v)=\\int_{\\mathcal{S}}\\Lambda^{s}(Q^{(j)}[\\sigma]\\cdot\\mathbf{h}) \ abla^{\\mu,\\gamma}u^{\\flat}\\cdot\\Lambda^{-1/2}\ abla^{\\mu,\\gamma}v^{\\dagger}\\] \\[\\qquad\\qquad+\\sum_{k=1}^{j}\\sum_{\\mathbf{h}_{k},\\mathbf{h}_{j-k} }\\int_{\\mathcal{S}}\\Lambda^{s}B(\\mathbf{h}_{k},\\mathbf{h}_{j-k})\\cdot\\Lambda^ {-1/2}\ abla^{\\mu,\\gamma}v^{\\dagger}, \\tag{3.6}\\] where the second summation is over all the \\(k\\)-uplets \\(\\mathbf{h}_{k}\\) and \\((j-k)\\)-uplets \\(\\mathbf{h}_{j-k}\\) such that \\((\\mathbf{h}_{k},\\mathbf{h}_{j-k})\\) is a permutation of \\(\\mathbf{h}\\), and where \\(B(\\mathbf{h}_{k},\\mathbf{h}_{j-k})\\) is given by \\[B(\\mathbf{h}_{k},\\mathbf{h}_{j-k})=(Q^{(j-k)}[\\sigma]\\cdot\\mathbf{h}_{j-k}) \ abla^{\\mu,\\gamma}(u^{\\flat,k}\\cdot\\mathbf{h}_{k})\\] (\\(u^{\\flat,k}\\cdot\\mathbf{h}_{k}\\) standing for the \\(k\\)-th order derivative of \\(\\zeta\\mapsto u^{\\flat}\\) at \\(\\underline{\\zeta}\\) and in the direction \\(\\mathbf{h}_{k}\\)). Proceeding as for the case \\(j=0\\) and using the estimates on \\(\\|Q^{(j)}[\\sigma]\\cdot\\mathbf{h}\\|_{L^{\\infty}H^{s}}\\) provided by Proposition 2.3, one arrives at the desired estimate for the first term of the r.h.s. of (3.6). For the other terms, one has to remark first that \\(u^{\\flat,k}\\cdot\\mathbf{h}_{k}\\) solves a bvp like (2.8) with \\[\\mathbf{g}=-\\sum_{l=0}^{k-1}\\sum_{\\mathbf{h}_{k,l},\\mathbf{h}_{k,k-l}}B( \\mathbf{h}_{k,l},\\mathbf{h}_{k,k-l}),\\] where the second summation is taken over all the \\(l\\) and \\(k-l\\)-uplets such that \\((\\mathbf{h}_{k,l},\\mathbf{h}_{k,k-l})\\) is a permutation of \\(\\mathbf{h}_{k}\\). A control of \\(\\|\ abla^{\\mu,\\gamma}u^{\\flat,k}\\cdot\\mathbf{h}_{k}\\|_{L^{\\infty}H^{s}}\\) in terms of \\(\\|Q^{(k-l)}[\\sigma]\\cdot\\mathbf{h}_{k,k-l}\\|_{L^{\\infty}H^{s}}\\) and \\(\\|\ abla^{\\mu,\\gamma}u^{\\flat,l}\\cdot\\mathbf{h}_{k,l}\\|_{L^{\\infty}H^{s}}\\) is therefore provided by Proposition 2.4. It is then easy to conclude by a simple induction. 
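In the flat case \\(\\zeta=b=0\\), \\(\\gamma=1\\), the \\(j=0\\) estimate of Proposition 3.3 reduces to a pointwise inequality between Fourier symbols: taking for granted the classical flat-strip formula \\(\\mathcal{G}[0]=\\sqrt{\\mu}|D|\\tanh(\\sqrt{\\mu}|D|)\\), and reading off from Remark 3.3 and the proof of Proposition 3.4 below that \\(\\mathfrak{P}\\) then acts as the multiplier \\(|\\xi|(1+\\sqrt{\\mu}|\\xi|)^{-1/2}\\) (both facts are recalled here as assumptions, for illustration only), the claim amounts to \\(\\tanh(\\sqrt{\\mu}|\\xi|)(1+\\sqrt{\\mu}|\\xi|)^{1/2}\\lesssim(1+|\\xi|^{2})^{1/4}\\) uniformly in \\(\\mu\\in(0,1]\\). A short numerical sweep (ours, purely illustrative) confirms this uniformity:

```python
# Sketch (ours): uniform-in-mu check of the j = 0 symbol inequality behind
# Proposition 3.3 in the flat 1D case, i.e.
#   tanh(sqrt(mu)|xi|) * (1 + sqrt(mu)|xi|)**0.5  <=  C * (1 + xi**2)**0.25
# for mu in (0, 1], which encodes |mu^{-1/2} G[0]u|_{H^{s-1/2}} <~ |Pu|_{H^s}.
import numpy as np

xi = np.logspace(-3, 4, 2000)            # frequency grid
ratios = []
for mu in np.logspace(-6, 0, 60):        # mu ranging from 1e-6 to 1
    lhs = np.tanh(np.sqrt(mu) * xi) * np.sqrt(1.0 + np.sqrt(mu) * xi)
    rhs = (1.0 + xi ** 2) ** 0.25
    ratios.append(np.max(lhs / rhs))
print(max(ratios))                        # stays below sqrt(2) ~ 1.414
```

The printed supremum stays around \\(1.1\\), in particular below \\(\\sqrt{2}\\), uniformly over the sampled values of \\(\\mu\\).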
**Remark 3.3**.: Instead of (3.5), one can easily get \\[(\\Lambda^{s-1/2}\\mathcal{G}[\\varepsilon\\zeta]u,v)\\leq\\|\\Lambda^{s+1/2}(1+Q[ \\sigma])\ abla^{\\mu,\\gamma}u^{\\flat}\\|_{2}\\|\\Lambda^{-1}\ abla^{\\mu,\\gamma}v^ {\\dagger}\\|_{2},\\]and since \\(\\|A^{-1}\ abla^{\\mu,\\gamma}v^{\\dagger}\\|_{2}\\lesssim\\sqrt{\\mu}|v|_{2}\\) one also has the estimate (with \\(\\zeta=0\\) for the sake of simplicity): \\[\\big{|}\\frac{1}{\\mu}\\mathcal{G}[0]\\psi\\big{|}_{H^{s-1/2}}\\leq C(\\frac{1}{h_{0}},\\beta\\sqrt{\\mu},|b|_{H^{s+3/2}})\\big{|}\\frac{|D^{\\gamma}|}{(1+\\sqrt{\\mu}|D^{ \\gamma}|)^{1/2}}\\psi\\big{|}_{H^{s+1/2}},\\] showing that \\(\\frac{1}{\\mu}\\mathcal{G}[0]\\cdot\\) can be uniformly controlled when \\(\\mu\\) goes to zero. The proposition below show that controls in terms of \\(|\\mathfrak{P}u|_{2}\\) or \\((u,\\frac{1}{\\mu\ u}\\mathcal{G}[\\varepsilon\\zeta]u)^{1/2}\\) are equivalent. This result can be seen as a version of the Garding inequality for the DN operator. Proposition 3.4: Let \\(t_{0}>1\\) and \\(\\zeta,b\\in H^{t_{0}+2}(\\mathbb{R}^{2})\\) be such that (2.4) is satisfied, and let \\(\\sigma\\) be given by (2.5), \\(k[\\sigma]\\) be as defined in Proposition 2.3 and \\(\\mathfrak{P}\\) be given by (1.8). For all \\(u\\in H^{1/2}(\\mathbb{R}^{2})\\), one has \\[(u,\\frac{1}{\\mu\ u}\\mathcal{G}[\\varepsilon\\zeta]u)\\leq M[\\sigma]|\\mathfrak{P}u |_{2}^{2}\\quad\\text{ and }\\quad k[\\sigma]^{-1}|\\mathfrak{P}u|_{2}^{2}\\lesssim(u,\\frac{1}{\\mu\ u} \\mathcal{G}[\\varepsilon\\zeta]u).\\] Proof: The first estimate of the proposition follows directly from (3.3) and Corollary 2.2. The second estimate is more delicate. Let \\(\\varphi\\) be a smooth function, with compact support in \\((-1,0]\\) and such that \\(\\varphi(0)=1\\); define also \\(v(X,z)=\\varphi(z)u^{\\flat}\\) (with \\(u^{\\flat}\\) defined as in Notation 2.2). Since \\(v_{|_{z=-1}}=0\\), one can get, after taking the Fourier transform with respect to the horizontal variables, \\[\\frac{|\\xi^{\\gamma}|^{2}}{1+\\sqrt{\\mu}|\\xi^{\\gamma}|}|\\widehat{u}(\\xi)|^{2} \\leq 2\\int_{-1}^{0}\\frac{|\\xi^{\\gamma}|^{2}}{1+\\sqrt{\\mu}|\\xi^{\\gamma}|}| \\widehat{v}(\\xi,z)|\\,|\\partial_{z}\\widehat{v}(\\xi,z)|dz.\\] Remarking that \\[|\\widehat{v}|\\leq|\\varphi|_{\\infty}|\\widehat{u^{\\flat}}|\\quad\\text{ and }\\quad| \\partial_{z}\\widehat{v}|\\leq|\\partial_{z}\\varphi|_{\\infty}|\\widehat{u^{\\flat} }|+|\\varphi|_{\\infty}|\\partial_{z}\\widehat{u^{\\flat}}|,\\] one gets \\[\\frac{|\\xi^{\\gamma}|^{2}}{1+\\sqrt{\\mu}|\\xi^{\\gamma}|}|\\widehat{u }(\\xi)|^{2}\\leq 2|\\varphi|_{\\infty}|\\partial_{z}\\varphi|_{\\infty}\\int_{-1}^{0} \\frac{|\\xi^{\\gamma}|^{2}}{1+\\sqrt{\\mu}|\\xi^{\\gamma}|}|\\widehat{u^{\\flat}}(\\xi,z)|^{2}dz\\] \\[\\quad+2|\\varphi|_{\\infty}^{2}\\int_{-1}^{0}\\frac{|\\xi^{\\gamma}|^{2 }}{1+\\sqrt{\\mu}|\\xi^{\\gamma}|}|\\widehat{u^{\\flat}}(\\xi,z)|\\,|\\partial_{z} \\widehat{u^{\\flat}}(\\xi,z)|dz,\\] \\[\\leq 2|\\varphi|_{\\infty}|\\partial_{z}\\varphi|_{\\infty}\\int_{-1}^{ 0}|\\xi^{\\gamma}|^{2}|\\widehat{u^{\\flat}}(\\xi,z)|^{2}dz\\] \\[\\quad+|\\varphi|_{\\infty}^{2}\\int_{-1}^{0}\\frac{\\mu|\\xi^{\\gamma}|^ {4}}{(1+\\sqrt{\\mu}|\\xi^{\\gamma}|)^{2}}|\\widehat{u^{\\flat}}(\\xi,z)|^{2}dz+| \\varphi|_{\\infty}^{2}\\int_{-1}^{0}\\frac{1}{\\mu}|\\partial_{z}\\widehat{u^{\\flat }}(\\xi,z)|^{2}dz,\\]where Young's inequality has been used to obtain the last line. 
Remarking now that \\(\\frac{\\mu|\\xi^{\\gamma}|^{4}}{(1+\\sqrt{\\mu}|\\xi^{\\gamma}|)^{2}}\\leq|\\xi^{\\gamma}|^{2}\\), one has \\[\\frac{|\\xi^{\\gamma}|^{2}}{1+\\sqrt{\\mu}|\\xi^{\\gamma}|}|\\widehat{u}(\\xi)|^{2} \\lesssim\\int_{-1}^{0}|\\xi^{\\gamma}|^{2}|\\widehat{u^{\\flat}}(\\xi,z)|^{2}dz+ \\int_{-1}^{0}\\frac{1}{\\mu}|\\partial_{z}\\widehat{u^{\\flat}}(\\xi,z)|^{2}dz,\\] so that, integrating with respect to \\(\\xi\\), one gets \\[\\Big{|}\\frac{|D^{\\gamma}|}{(1+\\sqrt{\\mu}|D^{\\gamma}|)^{1/2}}u\\Big{|}_{2}^{2} \\lesssim\\frac{1}{\\mu}\\|\ abla^{\\mu,\\gamma}u^{\\flat}\\|_{2}^{2}.\\] Owing to Proposition 2.3 and (3.1) (with \\(v=u\\)), one has \\[\\|\ abla^{\\mu,\\gamma}u^{\\flat}\\|_{2}^{2}\\lesssim k[\\sigma](u,\\mathcal{G}[ \\varepsilon\\zeta]u),\\] and the proposition follows. ### Commutator estimates In the following proposition, we show how to control in the energy estimates, the terms involving commutators between the Dirichlet-Neumann operator and spatial or time derivatives and in terms of \\(\\mathcal{E}^{s}(\\cdot)\\) rather than Sobolev-type norms. **Proposition 3.5**.: Let \\(t_{0}>1\\), \\(s\\geq 0\\) and \\(\\zeta,b\\in H^{t_{0}+2}\\cap H^{s+2}(\\mathbb{R}^{2})\\) be such that (2.4) is satisfied, and let \\(\\sigma\\) be given by (2.5). Then, for all \\(v\\in H^{s+1/2}(\\mathbb{R}^{2})\\), \\[\\big{|}[\\Lambda^{s},\\frac{1}{\\mu\ u}\\mathcal{G}[\\varepsilon\\zeta] ]v\\big{|}_{2} \\leq M[\\sigma]\\Big{(}\\|\ abla^{\\mu,\\gamma}\\sigma\\|_{L^{\\infty}H^{t _{0}+1}}|\\mathfrak{P}v|_{H^{s}}\\] \\[+\\big{\\langle}\\|\ abla^{\\mu,\\gamma}\\sigma\\|_{L^{\\infty}H^{s+1}}| \\mathfrak{P}v|_{H^{t_{0}}}\\big{\\rangle}_{s>t_{0}}\\Big{)},\\] where \\(M[\\sigma]\\) is as in (2.9) while \\(\\mathfrak{P}\\) is defined in (1.8). Proof.: First remark that for all \\(u\\in\\mathfrak{S}(\\mathbb{R}^{2})\\), \\[\\big{(}u,[\\mathcal{G}[\\varepsilon\\zeta],\\Lambda^{s}]v\\big{)}=\\big{(}u, \\mathcal{G}[\\varepsilon\\zeta]\\Lambda^{s}v\\big{)}-\\big{(}\\Lambda^{s}u, \\mathcal{G}[\\varepsilon\\zeta]v\\big{)}.\\] Since \\(u^{\\dagger}|_{z=0}=u\\) and \\(\\Lambda^{s}u^{\\dagger}|_{z=0}=\\Lambda^{s}u\\) (we use here Notation 2.2), it follows from Green's identity that \\[\\big{(}u,[\\mathcal{G}[\\varepsilon\\zeta],\\Lambda^{s}]v\\big{)}= \\int_{\\mathcal{S}}\ abla^{\\mu,\\gamma}u^{\\dagger}\\cdot(1+Q[\\sigma])\ abla^{\\mu, \\gamma}(\\Lambda^{s}v)^{\\flat} \\tag{3.7}\\] \\[\\quad-\\int_{\\mathcal{S}}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}v^{\\flat} \\cdot\ abla^{\\mu,\\gamma}\\Lambda^{s}u^{\\dagger}\\] \\[=\\int_{\\mathcal{S}}\ abla^{\\mu,\\gamma}u^{\\dagger}\\cdot\\big{(}(1+Q [\\sigma])\ abla^{\\mu,\\gamma}\\big{(}(\\Lambda^{s}v)^{\\flat}-\\Lambda^{s}v^{ \\flat}\\big{)}-[\\Lambda^{s},Q[\\sigma]]\ abla^{\\mu,\\gamma}v^{\\flat}\\big{)}.\\] Let us now prove the following lemma: **Lemma 3.1**.: For all \\(f\\in L^{2}(\\mathbb{R}^{2})\\) and \\(\\mathbf{g}\\in H^{1}(\\mathcal{S})^{3}\\), one has \\[\\int_{\\mathcal{S}}\ abla^{\\mu,\\gamma}f^{\\dagger}\\cdot\\mathbf{g}\\lesssim\\sqrt{\\mu }\\sqrt{\ u}|f|_{2}\\big{\\|}\\Lambda\\mathbf{g}\\big{\\|}_{2}.\\] Proof.: By definition of \\(f^{\\dagger}\\), one has \\[\ abla^{\\mu,\\gamma}f^{\\dagger}\\!=\\!\\sqrt{\\mu}\\left(\\begin{array}{c}\\chi(z \\sqrt{\\mu}|D^{\\gamma}|)\\partial_{x}f\\\\ \\gamma\\chi(z\\sqrt{\\mu}|D^{\\gamma}|)\\partial_{y}f\\\\ \\chi^{\\prime}(z\\sqrt{\\mu}|D^{\\gamma}|)|D^{\\gamma}|f\\end{array}\\right).\\] Replacing \\(\ abla^{\\mu,\\gamma}f^{\\dagger}\\) in the integral to control by this expression, and using the self-adjointness of 
\\(\\Lambda\\), one gets easily from Proposition 2.2 that \\[\\int_{\\mathcal{S}}\ abla^{\\mu,\\gamma}f^{\\dagger}\\cdot\\mathbf{g}\\lesssim\\sqrt{ \\mu}\\sqrt{\ u}|\\mathfrak{P}\\Lambda^{-1}f|_{2}\\big{\\|}\\Lambda\\mathbf{g}\\big{\\|} _{2};\\] recalling that \\(\ u=\\frac{1}{1+\\sqrt{\\mu}}\\) and \\(\\gamma\\leq 1\\), one can check that \\(|\\mathfrak{P}\\Lambda^{-1}f|_{2}\\lesssim|f|_{2}\\), _uniformly with respect to \\(\\mu\\) and \\(\\gamma\\)_, and the lemma follows. It is then a simple consequence of the lemma, (3.7) and Proposition 2.3 that \\[\\big{(}u,[\\mathcal{G}[\\varepsilon\\zeta],\\Lambda^{s}]v\\big{)} \\lesssim\\sqrt{\\mu}\\sqrt{\ u}|u|_{2}\\Big{(}\\|\\Lambda[\\Lambda^{s},Q[\\sigma]] \ abla^{\\mu,\\gamma}v^{\\flat}\\|_{2}\\] \\[+(1+\\|\ abla^{\\mu,\\gamma}\\sigma\\|_{L^{\\infty}H^{t_{0}+1}})\\| \\Lambda\ abla^{\\mu,\\gamma}\\big{(}(\\Lambda^{s}v)^{\\flat}-\\Lambda^{s}v^{\\flat} \\big{)}\\|_{2}\\Big{)}, \\tag{3.8}\\] which motivates the following lemma: **Lemma 3.2**.: One has \\[\\|\\Lambda\ abla^{\\mu,\\gamma}\\big{(}(\\Lambda^{s}v)^{\\flat}-\\Lambda^{s}v^{\\flat }\\big{)}\\|_{2}\\leq M[\\sigma]\\|\\Lambda[\\Lambda^{s},Q[\\sigma]]\ abla^{\\mu, \\gamma}v^{\\flat}\\|_{2}.\\] Proof.: Just remark that \\(w:=(\\Lambda^{s}v)^{\\flat}-\\Lambda^{s}v^{\\flat}\\) solves \\[\\left\\{\\begin{aligned} &\ abla^{\\mu,\\gamma}\\cdot(1+Q[\\sigma]) \ abla^{\\mu,\\gamma}w=\ abla^{\\mu,\\gamma}\\cdot\\mathbf{g},\\\\ & w|_{z=0}=0,\\qquad\\partial_{n}w|_{z=-1}=-\\mathbf{e_{z}}\\cdot \\mathbf{g}|_{z=-1},\\end{aligned}\\right.\\] with \\(\\mathbf{g}=[\\Lambda^{s},Q[\\sigma]]\ abla^{\\mu,\\gamma}v^{\\flat}\\), and use Proposition 2.4. With the help of the lemma, one deduces from (3.8) that \\[\\big{(}u,[\\mathcal{G}[\\varepsilon\\zeta],\\Lambda^{s}]v\\big{)}\\leq\\sqrt{\\mu} \\sqrt{\ u}M[\\sigma]\\big{\\|}\\Lambda[\\Lambda^{s},Q[\\sigma]]\ abla^{\\mu,\\gamma}v ^{\\flat}\\big{\\|}_{2}|u|_{2},\\] and thus, owing to Corollary 2.1 and Proposition 2.3 \\[\\big{(}u,[\\mathcal{G}[\\varepsilon\\zeta],\\Lambda^{s}]v\\big{)}\\leq\\sqrt{\\mu} \\sqrt{\ u}M[\\sigma]|u|_{2}\\] \\[\\times\\big{(}\\|\ abla^{\\mu,\\gamma}\\!\\sigma\\|_{L^{\\infty}H^{t_{0}+1 }}\\|\\Lambda^{s}\ abla^{\\mu,\\gamma}\\!v^{\\flat}\\|_{2}\\!+\\!\\big{\\langle}\\|\ abla ^{\\mu,\\gamma}\\!\\sigma\\|_{L^{\\infty}H^{s+1}}\\|\\Lambda^{t_{0}}\ abla^{\\mu,\\gamma} \\!v^{\\flat}\\|_{2}\\big{\\rangle}_{s>t_{0}}\\big{)},\\] and the result follows therefore from Corollary 2.2 and a duality argument. The next proposition gives control of the commutator between the Dirichlet-Neumann operator and a time derivative. Proposition 3.6: Let \\(t_{0}>1\\), \\(T>0\\) and \\(\\zeta,b\\in C^{1}([0,T];H^{t_{0}+2}(\\mathbb{R}^{2}))\\) be such that (2.4) is satisfied (uniformly with respect to \\(t\\)), and let \\(\\sigma\\) be given by (2.5). Then, for all \\(u\\in C^{1}([0,T];H^{1/2}(\\mathbb{R}^{2}))\\) and \\(t\\in[0,T]\\), \\[\\big{|}\\big{(}[\\partial_{t},\\frac{1}{\\mu\ u}\\mathcal{G}[\\varepsilon\\zeta]]u(t),u(t)\\big{)}\\big{|}\\leq M[\\sigma(t)]\\,\\|\ abla^{\\mu,\\gamma}\\partial_{t}\\sigma \\|_{\\infty,T}|\\mathfrak{P}u(t)|_{2}^{2},\\] where \\(M[\\sigma(t)]\\) is as in (2.9) while \\(\\mathfrak{P}\\) is defined in (1.8). 
Proof: First remark that \\[(u,[\\partial_{t},\\mathcal{G}[\\varepsilon\\zeta]]u)=\\partial_{t}(u,\\mathcal{G}[ \\varepsilon\\zeta]u)-2(u,\\mathcal{G}[\\varepsilon\\zeta]\\partial_{t}u),\\] so that using Green's identity, one gets \\[(u,[\\partial_{t},\\mathcal{G}[\\varepsilon\\zeta]]u)=\\partial_{t} \\int_{\\mathcal{S}}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{\\flat}\\cdot\ abla^{\\mu, \\gamma}u^{\\flat}\\] \\[\\quad-2\\int_{\\mathcal{S}}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}( \\partial_{t}u)^{\\flat}\\cdot\ abla^{\\mu,\\gamma}u^{\\flat}\\] \\[=\\int_{\\mathcal{S}}(\\partial_{t}Q[\\sigma])\ abla^{\\mu,\\gamma}u^{ \\flat}\\cdot\ abla^{\\mu,\\gamma}u^{\\flat}\\] \\[\\quad-2\\int_{\\mathcal{S}}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}\\big{(} (\\partial_{t}u)^{\\flat}-\\partial_{t}u^{\\flat}\\big{)}\\cdot\ abla^{\\mu,\\gamma}u ^{\\flat}.\\] It follows directly that \\[(u,[\\partial_{t},\\mathcal{G}[\\varepsilon\\zeta]]u) \\lesssim\\|\\partial_{t}Q[\\sigma]\\|_{\\infty}\\|\ abla^{\\mu,\\gamma}u^{ \\flat}\\|_{2}^{2}\\] \\[+(1+\\|Q[\\sigma]\\|_{\\infty})\\|\ abla^{\\mu,\\gamma}\\big{(}(\\partial _{t}u)^{\\flat}-\\partial_{t}u^{\\flat}\\big{)}\\|_{2}\\big{\\|}\ abla^{\\mu,\\gamma} u^{\\flat}\\|_{2}.\\] Proceeding exactly as in the proof of Lemma 3.2, one gets \\[\\|\ abla^{\\mu,\\gamma}\\big{(}(\\partial_{t}u)^{\\flat}-\\partial_{t}u^{\\flat} \\big{)}\\|_{2}\\lesssim\\|\\partial_{t}Q[\\sigma]\\|_{\\infty}\\|\ abla^{\\mu,\\gamma} u^{\\flat}\\|_{2},\\] and the result follows therefore from Corollary 2.2 and Proposition 2.3. ### Other properties Propositions 3.2 and 3.4 allow one to control \\((u,\\mathcal{G}[\\varepsilon\\zeta]v)\\) in general. However, it is sometimes necessary to have more precise estimates, when \\(u\\) and \\(v\\) have some special structure that can be exploited. so that, \\[\\big{(}(\\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma}u),\\mathcal{G}[ \\varepsilon\\zeta]u\\big{)}=\\int_{\\mathcal{S}}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{ \\flat}\\cdot[\ abla^{\\mu,\\gamma},\\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma}]u^{\\flat}\\] \\[+\\int_{\\mathcal{S}}\ abla^{\\mu,\\gamma}u^{\\flat}\\cdot[Q[\\sigma],( \\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma})]\ abla^{\\mu,\\gamma}u^{\\flat}\\] \\[+\\int_{\\mathcal{S}}\ abla^{\\mu,\\gamma}u^{\\flat}\\cdot(\\underline{ \\mathbf{v}}\\cdot\ abla^{\\gamma})(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{\\flat}. \\tag{3.11}\\] Integrating by parts, one finds \\[\\int_{\\mathcal{S}}\ abla^{\\mu,\\gamma}u^{\\flat}\\cdot(\\underline{ \\mathbf{v}}\\cdot\ abla^{\\gamma})(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{\\flat}\\] \\[=-\\int_{\\mathcal{S}}\\big{(}(\\operatorname{div}_{\\gamma} \\underline{\\mathbf{v}})+\\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma}\\big{)} \ abla^{\\mu,\\gamma}u^{\\flat}\\cdot(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{\\flat} \\tag{3.12}\\] \\[=-\\int_{\\mathcal{S}}(\\operatorname{div}_{\\gamma}\\underline{ \\mathbf{v}})\ abla^{\\mu,\\gamma}u^{\\flat}\\cdot(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u ^{\\flat}\\] \\[\\quad-\\int_{\\mathcal{S}}[\\underline{\\mathbf{v}}\\cdot\ abla^{ \\gamma},\ abla^{\\mu,\\gamma}]u^{\\flat}\\cdot(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{\\flat}\\] \\[\\quad-\\int_{\\mathcal{S}}\ abla^{\\mu,\\gamma}(\\underline{\\mathbf{v }}\\cdot\ abla^{\\gamma}u^{\\flat})\\cdot(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{\\flat}. 
\\tag{3.13}\\] From (3.11), (3.12) and (3.13), one gets therefore \\[\\big{(}(\\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma}u),\\mathcal{G}[ \\varepsilon\\zeta]u\\big{)} =\\int_{\\mathcal{S}}(1+Q[\\sigma])\ abla^{\\mu,\\gamma}u^{\\flat}\\cdot[ \ abla^{\\mu,\\gamma},\\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma}]u^{\\flat}\\] \\[+\\frac{1}{2}\\int_{\\mathcal{S}}\ abla^{\\mu,\\gamma}u^{\\flat}\\cdot[ Q[\\sigma],(\\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma})]\ abla^{\\mu,\\gamma}u^{\\flat}\\] \\[-\\frac{1}{2}\\int_{\\mathcal{S}}(\\operatorname{div}_{\\gamma} \\underline{\\mathbf{v}})\ abla^{\\mu,\\gamma}u^{\\flat}\\cdot(1+Q[\\sigma])\ abla^{ \\mu,\\gamma}u^{\\flat}.\\] Remarking that \\([\ abla^{\\mu,\\gamma},\\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma}]=\\left( \\begin{array}{c}\ abla^{\\gamma}\\underline{\\mathbf{v}}_{1}\\sqrt{\\mu}\\partial_ {x}+\ abla^{\\gamma}\\underline{\\mathbf{v}}_{2}\\gamma\\sqrt{\\mu}\\partial_{y}\\\\ 0\\end{array}\\right),\\) one deduces easily that \\[\\big{(}(\\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma}u),\\mathcal{G}[ \\varepsilon\\zeta]u\\big{)}\\lesssim|\\underline{\\mathbf{v}}|_{W^{1,\\infty}}(1+\\| Q[\\sigma]\\|_{W^{1,\\infty}})\\|\ abla^{\\mu,\\gamma}u^{\\flat}\\|_{2}^{2}\\] and the results follows from Corollary 2.2. We finally state the following theorem, which gives an explicit formula for the shape derivative of the Dirichlet-Neumann operator. This theorem is a particular case of Theorem 3.20 of [29]. **Theorem 3.1**.: Let \\(t_{0}>1\\), \\(s\\geq t_{0}\\) and \\(\\underline{\\zeta},b\\in H^{s+3/2}(\\mathbb{R}^{2})\\) be such that (2.4) is satisfied. For all \\(\\underline{\\psi}\\in H^{s+3/2}(\\mathbb{R}^{2})\\), the mapping \\[\\zeta\\mapsto\\mathcal{G}[\\varepsilon\\zeta]\\underline{\\psi}\\in H^{s+1/2}( \\mathbb{R}^{2})\\] is well defined and differentiable in a neighborhood of \\(\\underline{\\zeta}\\) in \\(H^{s+3/2}(\\mathbb{R}^{2})\\), and \\[\\forall h\\in H^{s+3/2}(\\mathbb{R}^{2}),\\qquad d_{\\underline{\\zeta}}\\mathcal{ G}[\\varepsilon\\cdot]\\underline{\\psi}\\cdot h=-\\varepsilon\\mathcal{G}[ \\varepsilon\\underline{\\zeta}](h\\underline{Z})-\\varepsilon\\mu\ abla^{\\gamma} \\cdot(h\\underline{\\mathbf{v}}),\\] with \\(\\underline{Z}:=\\mathcal{Z}[\\varepsilon\\underline{\\zeta}]\\underline{\\psi}\\) and \\(\\underline{\\mathbf{v}}:=\ abla^{\\gamma}\\underline{\\psi}-\\varepsilon \\underline{Z}\ abla^{\\gamma}\\underline{\\zeta}\\), and where \\[\\mathcal{Z}[\\varepsilon\\underline{\\zeta}]:=\\frac{1}{1+\\varepsilon^{2}\\mu| \ abla^{\\gamma}\\underline{\\zeta}|^{2}}(\\mathcal{G}[\\varepsilon\\underline{ \\zeta}]+\\varepsilon\\mu\ abla^{\\gamma}\\underline{\\zeta}\\cdot\ abla^{\\gamma}).\\] **Remark 3.4**.: We take this opportunity to correct a harmless misprint in the statement of Theorem 3.20 of [29]. It should read \\[d_{\\underline{a}}G(\\cdot,b)f\\cdot h=-G(\\underline{a},b)(h\\underline{Z})- \\left(\\begin{array}{c}\ abla_{X}\\\\ 0\\end{array}\\right)\\cdot\\Big{[}hP\\left(\\frac{\\underline{\\mathbf{v}}}{ \\underline{Z}}\\right)\\Big{]},\\] and \\(\\widetilde{P}_{\\underline{a}}\\) should be replaced by \\(P\\) on the right hand side of the equation in the statement of Lemma 3.24. ### Asymptotic expansions This subsection is devoted to the asymptotic expansion of the DN operator \\(\\mathcal{G}[\\varepsilon\\zeta]\\psi(=\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta, \\beta b]\\psi)\\) in terms of one or several of the parameters \\(\\varepsilon\\), \\(\\mu\\), \\(\\gamma\\) and \\(\\beta\\). 
We consider two cases which cover all the physical regimes described in the introduction. #### 3.4.1 Expansions in shallow-water (\\(\\mu\\ll 1\\)) In shallow water, that is when \\(\\mu\\ll 1\\), the Laplace equation (2.3) -or its straightened version (2.6)- reduces at first order to the ODE \\(\\partial_{z}^{2}\\Phi=0\\). This fact can be exploited to find an approximate solution \\(\\Phi_{app}\\) of the Laplace equation by a standard BKW expansion. This method has been used in the long-waves regime in [5; 32; 9] (see also [39]) where the corresponding expansions of the DN operator can be found. We prove here that it can be used uniformly with respect to \\(\\varepsilon\\) and \\(\\beta\\), which allows one to consider at once the shallow-water/Green-Naghdi and Serre scalings. The difference between both regimes is that \\(\\varepsilon=\\beta=1\\) in the former (large amplitude for the surface and bottom variations), while \\(\\varepsilon=\\beta=\\sqrt{\\mu}\\) in the latter (medium amplitude variations for thesurface and bottom variations). Let us first define the first order linear operator \\(\\mathcal{T}[h,b]\\) as \\[\\mathcal{T}[h,b]V\\text{:=}-\\frac{1}{3}\ abla(h^{3}\ abla\\cdot V)+\\frac{1}{2} \\big{[}\ abla(h^{2}\ abla b\\cdot V)-h^{2}\ abla b\ abla\\cdot V\\big{]}+h\ abla b \ abla b\\cdot V. \\tag{3.14}\\] **Proposition 3.8** (Shallow-water and Serre scalings).: Let \\(\\gamma=1\\), \\(s\\geq t_{0}>1\\), \\(\ abla\\psi\\in H^{s+11/2}(\\mathbb{R}^{2})\\), \\(b\\in H^{s+11/2}(\\mathbb{R}^{2})\\) and \\(\\zeta\\in H^{s+9/2}(\\mathbb{R}^{2})\\) and assume that (2.4) is satisfied. With \\(h:=1+\\varepsilon\\zeta-\\beta b\\), one then has \\[\\big{|}\\mathcal{G}[\\varepsilon\\zeta]\\psi-\ abla\\cdot\\big{(}-\\mu h \ abla\\psi\\big{)}\\big{|}_{H^{s}} \\leq\\mu^{2}C_{0}\\] \\[\\big{|}\\mathcal{G}[\\varepsilon\\zeta]\\psi-\ abla\\cdot\\big{(}-\\mu h \ abla\\psi+\\mu^{2}\\mathcal{T}[h,\\beta b]\ abla\\psi\\big{)}\\big{|}_{H^{s}} \\leq\\mu^{3}C_{1},\\] with \\(C_{j}=C(\\frac{1}{h_{0}},|\\zeta|_{H^{s+5/2+2j}},|b|_{H^{s+7/2+2j}},|\ abla^{ \\gamma}\\psi|_{H^{s+7/2+2j}})\\)\\((j=0,1)\\), and uniformly with respect to \\(\\varepsilon,\\beta\\in[0,1]\\). Proof.: We look for an approximate solution \\(\\phi_{app}\\) to the exact solution \\(\\phi\\) of the potential equation (2.6) under the form \\[\\phi_{app}(X,z)=\\psi(X)+\\mu\\phi_{1}(X,z).\\] Plugging this ansatz into (2.6), and expanding the result into powers of \\(\\mu\\), one can cancel the leading term by a good choice of \\(\\phi_{1}\\), namely, \\[\\phi_{1}(X,z)=-h\\big{(}h(\\frac{z^{2}}{2}+z)\\Delta\\psi-z\\beta\ abla b\\cdot \ abla\\psi\\big{)}.\\] One can then check that \\[\\left\\{\\begin{aligned} &\ abla_{X,z}\\cdot P[\\sigma]\ abla_{X,z} \\phi_{app}=\\mu^{2}R_{\\mu},\\qquad\\text{ in }\\mathcal{S},\\\\ &\\phi_{app\\mid_{z=0}}=\\psi,\\qquad\\partial_{n}\\phi_{app\\mid_{z=-1} }=\\mu^{2}r_{\\mu},\\end{aligned}\\right.\\] with \\((R_{\\mu},r_{\\mu})\\) satisfying, uniformly with respect to \\(\\mu\\in(0,1)\\), \\[\\|\\Lambda^{s+1/2}R_{\\mu}\\|_{2}+\\left|r_{\\mu}\\right|_{H^{s+1/2}}\\leq C(|\\zeta|_ {H^{s+5/2}},|b|_{H^{s+7/2}},|\ abla^{\\gamma}\\psi|_{H^{s+7/2}}). 
\\tag{3.15}\\] Since \\(\\mathcal{G}[\\varepsilon\\zeta]\\psi-\\partial_{n}\\phi_{app\\mid_{z=0}}=\\partial_{n}(\\phi-\\phi_{app})_{|_{z=0}}\\), the truncation error can be estimated using the trace theorem and an elliptic estimate on the BVP solved by \\(\\phi-\\phi_{app}\\); this is exactly what is done in Theorem 1.6 of [9] for instance, which gives here: \\[|\\mathcal{G}[\\varepsilon\\zeta]\\psi-\\partial_{n}\\phi_{app\\mid_{z=0}}|_{H^{s}}\\leq\\mu^{2}C_{s}(\\|\\Lambda^{s+1/2}R_{\\mu}\\|_{2}+\\left|r_{\\mu}\\right|_{H^{s+1/2}}),\\] with \\(C_{s}=C(|\\zeta|_{H^{s+5/2}},|b|_{H^{s+5/2}})\\). Together with (3.15), this gives the result. In order to prove the second estimate of the proposition, one must look for a higher order approximate solution of (2.6), namely \\(\\phi_{app}=\\psi+\\mu\\phi_{1}+\\mu^{2}\\phi_{2}\\). The computations can be performed by any software of symbolic calculus and the estimates are exactly the same as above; we thus omit this technical step.
#### 3.4.2 The case of small amplitude waves (\\(\\varepsilon\\ll 1\\))
Expansions of the Dirichlet-Neumann operator for small amplitude waves have been developed in [17; 16]. This method is very efficient for computing the formal expansion but, instead of adapting it to the present case to obtain uniform estimates on the truncation error, we rather propose a very simple method based on Theorem 3.1. **Proposition 3.9**.: Let \\(s\\geq t_{0}>1\\), \\(\\mathfrak{P}\\psi\\in H^{s+1/2}(\\mathbb{R}^{2})\\) and \\(\\zeta\\in H^{s+3/2}(\\mathbb{R}^{2})\\) be such that (2.4) is satisfied for some \\(h_{0}>0\\). Then one has \\[\\big{|}\\mathcal{G}[\\varepsilon\\zeta]\\psi-\\big{[}\\mathcal{G}[0]\\psi-\\varepsilon\\mathcal{G}[0]\\big{(}\\zeta(\\mathcal{G}[0]\\psi)\\big{)}-\\varepsilon\\mu\\nabla^{\\gamma}\\cdot(\\zeta\\nabla^{\\gamma}\\psi)\\big{]}\\big{|}_{H^{s}}\\] \\[\\leq(\\frac{\\varepsilon}{\\nu})^{2}\\sqrt{\\mu}C\\big{(}\\frac{1}{h_{0}},\\varepsilon\\sqrt{\\mu},|\\zeta|_{H^{s+3/2}},|\\mathfrak{P}\\psi|_{H^{s+1/2}}\\big{)}.\\] Proof.: A second order Taylor expansion of \\(\\mathcal{G}[\\varepsilon\\zeta]\\psi\\) gives \\[\\mathcal{G}[\\varepsilon\\zeta]\\psi=\\mathcal{G}[0]\\psi+d_{0}\\mathcal{G}[\\varepsilon\\cdot]\\psi\\cdot\\zeta+\\int_{0}^{1}(1-z)d_{z\\zeta}^{2}\\mathcal{G}[\\varepsilon\\cdot]\\psi\\cdot(\\zeta,\\zeta)dz.\\] Using Theorem 3.1, one computes \\[d_{0}\\mathcal{G}[\\varepsilon\\cdot]\\psi\\cdot\\zeta=-\\varepsilon\\mathcal{G}[0]\\big{(}\\zeta(\\mathcal{G}[0]\\psi)\\big{)}-\\varepsilon\\mu\\nabla^{\\gamma}\\cdot(\\zeta\\nabla^{\\gamma}\\psi),\\] while for all \\(z\\in[0,1]\\), Proposition 3.3 controls \\(d_{z\\zeta}^{2}\\mathcal{G}[\\varepsilon\\cdot]\\psi\\cdot(\\zeta,\\zeta)\\) in \\(H^{s}\\) by the r.h.s. of the estimate given in the statement.
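Before turning to the linear analysis, let us illustrate numerically the rates of Proposition 3.8 in the simplest configuration \\(\\zeta=b=0\\) (so \\(h=1\\)), \\(\\gamma=1\\). Assuming again the classical flat-strip formula \\(\\mathcal{G}[0]=\\sqrt{\\mu}|D|\\tanh(\\sqrt{\\mu}|D|)\\), the first- and second-order approximations of the proposition become the multipliers \\(\\mu|\\xi|^{2}\\) and \\(\\mu|\\xi|^{2}-\\frac{\\mu^{2}}{3}|\\xi|^{4}\\), and the symbol errors at a fixed frequency should therefore decay like \\(\\mu^{2}\\) and \\(\\mu^{3}\\). The following sketch (ours, purely illustrative) recovers these orders:

```python
# Sketch (ours): observed convergence rates of the shallow-water expansion of
# Proposition 3.8 in the flat case zeta = b = 0, h = 1, on the symbol level:
#   exact symbol  g(mu, xi) = sqrt(mu)|xi| tanh(sqrt(mu)|xi|)
#   1st order     mu |xi|^2                      (error expected ~ mu^2)
#   2nd order     mu |xi|^2 - (mu^2/3) |xi|^4    (error expected ~ mu^3)
import numpy as np

xi = 2.0                                   # a fixed frequency
mus = np.array([1e-1, 1e-2, 1e-3, 1e-4])
exact = np.sqrt(mus) * xi * np.tanh(np.sqrt(mus) * xi)
err1 = np.abs(exact - mus * xi ** 2)
err2 = np.abs(exact - (mus * xi ** 2 - mus ** 2 / 3.0 * xi ** 4))
# Observed orders: slopes of log(err) vs log(mu); expect roughly 2 and 3.
print(np.diff(np.log(err1)) / np.diff(np.log(mus)))
print(np.diff(np.log(err2)) / np.diff(np.log(mus)))
```

The printed slopes are close to \\(2\\) and \\(3\\) respectively, in agreement with the \\(\\mu^{2}\\) and \\(\\mu^{3}\\) bounds of the proposition (which are of course much stronger, being uniform with respect to \\(\\varepsilon\\) and \\(\\beta\\)).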
## 4 Linear analysis The water-waves equations (1.4) can be written in condensed form as \\[\\partial_{t}U+\\mathcal{L}U+\\frac{\\varepsilon}{\ u}\\mathcal{A}[U]=0,\\] with \\(U=(\\zeta,\\psi)^{T}\\), \\(\\mathcal{A}[U]=(\\mathcal{A}_{1}[U],\\mathcal{A}_{2}[U])^{T}\\) and where \\[\\mathcal{L}:=\\left(\\begin{array}{cc}0&-\\frac{1}{\\mu\ u}\\mathcal{G}[0]\\cdot\\\\ 1&0\\end{array}\\right) \\tag{4.1}\\] and \\[\\mathcal{A}_{1}[U] =-\\frac{1}{\\varepsilon\\mu}(\\mathcal{G}[\\varepsilon\\zeta]\\psi- \\mathcal{G}[0]\\psi), \\tag{4.2}\\] \\[\\mathcal{A}_{2}[U] =\\frac{1}{2}|\ abla^{\\gamma}\\psi|^{2}-\\frac{(\\frac{1}{\\sqrt{\\mu} }\\mathcal{G}[\\varepsilon\\zeta]\\psi+\\varepsilon\\sqrt{\\mu}\ abla^{\\gamma}\\zeta \\cdot\ abla^{\\gamma}\\psi)^{2}}{2(1+\\varepsilon^{2}\\mu|\ abla^{\\gamma}\\zeta|^{2 })}.\\] By definition, the linearized operator \\(\\mathfrak{L}_{(\\underline{\\zeta},\\underline{\\psi})}\\) around some reference state \\(\\underline{U}=(\\underline{\\zeta},\\underline{\\psi})^{T}\\) is given by \\[\\mathfrak{L}_{(\\underline{\\zeta},\\underline{\\psi})}=\\partial_{t}+\\mathcal{L} +\\frac{\\varepsilon}{\ u}d_{\\underline{U}}\\mathcal{A};\\]assuming that \\(\\underline{U}\\) is such that the assumptions of Theorem 3.1 are satisfied, one computes that \\(\\mathfrak{L}_{(\\underline{\\zeta},\\underline{\\psi})}\\) is equal to \\[\\partial_{t}+\\left(\\begin{array}{cc}\\frac{\\varepsilon}{\\mu\ u}\\mathcal{G}[ \\varepsilon\\underline{\\zeta}](\\underline{Z}\\cdot)+\\frac{\\varepsilon}{\ u} \ abla^{\\gamma}\\cdot(\\underline{\\mathbf{v}})&-\\frac{1}{\\mu\ u}\\mathcal{G}[ \\varepsilon\\underline{\\zeta}]\\cdot\\\\ \\frac{\\varepsilon^{2}}{\\mu\ u}\\underline{Z}\\mathcal{G}[\\varepsilon\\underline{ \\zeta}](\\underline{Z}\\cdot)+(1+\\frac{\\varepsilon^{2}}{\ u}\\underline{Z} \ abla^{\\gamma}\\cdot\\underline{\\mathbf{v}})&\\frac{\\varepsilon}{\ u} \\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma}\\cdot-\\frac{\\varepsilon}{\\mu} \\underline{Z}\\mathcal{G}[\\varepsilon\\underline{\\zeta}]\\cdot\\end{array}\\right), \\tag{4.3}\\] where \\(\\underline{\\mathbf{v}}\\) and \\(\\underline{Z}\\) are as in the statement of Theorem 3.1. This section is devoted to the proof of energy estimates for the associated initial value problem, \\[\\left\\{\\begin{array}{l}\\mathfrak{L}_{(\\underline{\\zeta},\\underline{\\psi})} U=\\frac{\\varepsilon}{\ u}G\\\\ U_{|_{t=0}}=U^{0}.\\end{array}\\right. \\tag{4.4}\\] Defining \\[\\underline{\\mathfrak{a}}=1+\\frac{\\varepsilon}{\ u}\\underline{\\mathfrak{b}}, \\quad\\text{ and }\\quad\\underline{\\mathfrak{b}}=\\varepsilon\\underline{ \\mathbf{v}}\\cdot\ abla^{\\gamma}\\underline{Z}+\ u\\partial_{t}\\underline{Z}, \\tag{4.5}\\] we first introduce the notion of _admissible_ reference state: **Definition 4.1**.: Let \\(t_{0}>1\\), \\(T>0\\) and \\(b\\in H^{t_{0}+2}(\\mathbb{R}^{2})\\). We say that \\(\\underline{U}=(\\underline{\\zeta},\\underline{\\psi})\\) is _admissible on_\\([0,\\frac{\ u T}{\\varepsilon}]\\) if * The surface and bottom parameterizations \\(\\underline{\\zeta}\\) and \\(b\\) satisfy (2.4) for some \\(h_{0}>0\\), uniformly on \\([0,\\frac{\ u T}{\\varepsilon}]\\); * There exists \\(c_{0}>0\\) such that \\(\\underline{\\mathfrak{a}}\\geq c_{0}\\), uniformly on \\([0,\\frac{\ u T}{\\varepsilon}]\\). We also need to define some functional spaces and notations linked to the energy (1.6) mentioned in the introduction. 
**Definition 4.2**.: For all \\(s\\in\\mathbb{R}\\) and \\(T>0\\), **i.** We denote by \\(X^{s}\\) the vector space \\(H^{s}(\\mathbb{R}^{2})\\times H^{s+1/2}(\\mathbb{R}^{2})\\) endowed with the norm \\[\\forall U=(\\zeta,\\psi)^{T}\\in X^{s},\\qquad|U|_{X^{s}}:=|\\zeta|_{H^{s}}+\\frac{ \\varepsilon}{\ u}|\\psi|_{H^{s}}+|\\mathfrak{P}\\psi|_{H^{s}},\\] while \\(X^{s}_{T}\\) stands for \\(C([0,\\frac{\ u T}{\\varepsilon}];X^{s})\\) endowed with its canonical norm. **ii.** We define the space \\(\\widetilde{X}^{s}\\) as \\[\\widetilde{X}^{s}:=\\{U=(\\zeta,\\psi)^{T},\\zeta\\in H^{s}(\\mathbb{R}^{2}),\ abla \\psi\\in H^{s-1/2}(\\mathbb{R}^{2})^{2}\\},\\] and endow it with the semi-norm \\(|U|_{\\widetilde{X}^{s}}:=|\\zeta|_{H^{s}}+|\\mathfrak{P}\\psi|_{H^{s}}\\). **iii.** We define the semi-normed space \\((Y^{s}_{T},|\\cdot|_{Y^{s}_{T}})\\) as \\[Y^{s}_{T}:=\\bigcap_{k=0}^{2}C^{k}([0,\\frac{\ u T}{\\varepsilon}];\\widetilde{X} ^{s-\\frac{3}{2}k})\\quad\\text{and}\\quad|U|_{Y^{s}_{T}}=\\sum_{k=0}^{2}\\sup_{[0, \\frac{\ u T}{\\varepsilon}]}|\\partial_{t}^{k}U|_{\\widetilde{X}^{s-\\frac{3}{2} k}}.\\] **iv.** For all \\((G,U^{0})\\in X_{T}^{s}\\times X^{s}\\), we define \\[\\mathcal{I}^{s}(t,U^{0},G):=|U^{0}|_{X^{s}}+\\frac{\\varepsilon}{\ u}\\int_{0}^{t} \\sup_{0\\leq t^{\\prime\\prime}\\leq t^{\\prime}}|G(t^{\\prime\\prime})|_{X^{s}}dt^{ \\prime}.\\] We can now state the energy estimate associated to (4.4), and whose proof is given in the next two subsections. Proposition 4.1: Let \\(s\\geq t_{0}>1\\), \\(T>0\\), \\(b\\in H^{s+9/2}(\\mathbb{R}^{2})\\), and \\(\\underline{U}=(\\underline{\\zeta},\\underline{\\psi})\\in Y_{T}^{s+9/2}\\) be admissible on \\([0,\\frac{\ u T}{\\varepsilon}]\\) for some \\(h_{0}>0\\) and \\(c_{0}>0\\). Let also \\((G,U^{0})\\in X_{T}^{s+2}\\times X^{s+2}\\). There exists a unique solution \\(U\\in X_{T}^{s}\\) to (4.4); moreover, for all \\(0\\leq t\\leq\\frac{\ u T}{\\varepsilon}\\), one has \\[|U(t)|_{X^{s}}\\leq\\underline{C}\\big{(}\\mathcal{I}^{s+2}(t,U^{0},G)+| \\underline{U}|_{Y_{T}^{s+9/2}}\\mathcal{I}^{t_{0}+2}(t,U^{0},G)\\big{)},\\] where \\(\\underline{C}=C\\big{(}T,\\frac{1}{h_{0}},\\frac{1}{c_{0}},\\frac{\\varepsilon}{ \ u},\\frac{\\beta}{\\varepsilon},|b|_{H^{s+9/2}},|\\underline{U}|_{Y_{T}^{t_{0}+ 9/2}}\\big{)}\\). ### Energy estimates for the trigonalized linearized operator As shown in [29], the operator \\(\\mathfrak{L}_{(\\underline{\\zeta},\\underline{\\psi})}\\) is non-strictly hyperbolic, in the sense that its principal symbol has a double purely imaginary eigenvalue, with a nontrivial Jordan block. It was shown in Prop. 4.2 of [29] that a simple change of basis can be used to put the principal symbol of \\(\\mathfrak{L}_{(\\underline{\\zeta},\\underline{\\psi})}\\) under a canonical trigonal form. This result is generalized to the present case. 
More precisely, with \\(\\underline{\\mathfrak{a}}\\) as defined in (4.5) and defining the operator \\(\\mathfrak{M}_{(\\underline{\\zeta},\\underline{\\psi})}=\\partial_{t}+M_{( \\underline{\\zeta},\\underline{\\psi})}\\) with \\[M_{(\\underline{\\zeta},\\underline{\\psi})}=\\left(\\begin{array}{cc}\\frac{ \\varepsilon}{\ u}\ abla^{\\gamma}\\cdot(\\cdot\\underline{\\mathbf{v}})&-\\frac{1}{ \\mu\ u}\\mathcal{G}[\\varepsilon\\underline{\\zeta}]\\\\ \\underline{\\mathfrak{a}}&\\frac{\\varepsilon}{\ u}\\underline{\\mathbf{v}}\\cdot \ abla^{\\gamma}.\\end{array}\\right), \\tag{4.6}\\] one reduces the study of (4.4) to the study of the initial value problem \\[\\left\\{\\begin{array}{l}\\mathfrak{M}_{(\\underline{\\zeta},\\underline{\\psi}) }V=\\frac{\\varepsilon}{\ u}H\\\\ V_{|_{t=0}}=V^{0},\\end{array}\\right. \\tag{4.7}\\] as shown in the following proposition (whose proof relies on simple computations and is omitted). Proposition 4.2: The following two assertions are equivalent: * The pair \\(U=(\\zeta,\\psi)^{T}\\) solves (4.4); * The pair \\(V=(\\zeta,\\psi-\\varepsilon\\underline{Z}\\zeta)^{T}\\) solves (4.7), with \\(H=(G_{1},G_{2}-\\varepsilon\\underline{Z}G_{1})^{T}\\) and \\(V^{0}=(\\zeta^{0},\\psi^{0}-\\varepsilon\\underline{Z}_{|_{t=0}}\\zeta^{0})^{T}\\). In view of this proposition, it is a key step to understand (4.7), and the rest of this subsection is thus devoted to the proof of energy estimates for this initial value problem. First remark that a symmetrizer for \\(\\mathfrak{M}_{(\\underline{\\zeta},\\underline{\\psi})}\\) is given by \\[S=\\left(\\begin{array}{cc}\\underline{\\mathfrak{a}}&0\\\\ 0&\\frac{\\varepsilon^{2}}{\ u^{2}}+\\frac{1}{\\mu\ u}\\mathcal{G}[\\underline{ \\zeta}].\\end{array}\\right), \\tag{4.8}\\] so that (provided that \\(\\underline{\\mathfrak{a}}\\) is nonnegative), a natural energy for the IVP (4.7) is given by \\[E^{s}(V)^{2} =(\\Lambda^{s}V,S\\Lambda^{s}V)\\] \\[=|\\sqrt{\\underline{\\mathfrak{a}}}\\Lambda^{s}V_{1}|_{2}^{2}+\\frac{ \\varepsilon^{2}}{\ u^{2}}|V_{2}|_{H^{s}}^{2}+(\\Lambda^{s}V_{2},\\frac{1}{\\mu \ u}\\mathcal{G}[\\underline{\\varepsilon}\\underline{\\zeta}]\\Lambda^{s}V_{2}). \\tag{4.9}\\] **Remark 4.1**.: The introduction of the term \\(\\varepsilon^{2}/\ u^{2}\\) in (4.8) -and thus of \\(\\varepsilon^{2}/\ u^{2}|V_{2}|_{H^{s}}^{2}\\) in (4.9)- is not necessary to the energy estimate below. But this constant term plays a crucial role in the iterative scheme used to solve the nonlinear problem because it controls the low frequencies. It also turns out that the order \\(O(\\varepsilon^{2}/\ u^{2})\\) of this constant term is the only one which allows uniform estimates. We can now give the energy estimate associated to (4.7); in the statement below, we use the notation \\[I^{s}(t,V^{0},H):=E^{s}(V^{0})+\\frac{\\varepsilon}{\ u}\\int_{0}^{t}\\sup_{0\\leq t ^{\\prime\\prime}\\leq t^{\\prime}}E^{s}(H(t^{\\prime\\prime}))dt^{\\prime},\\] while \\(s\\lor t_{0}:=\\max\\{s,t_{0}\\}\\) and \\(\\underline{C}\\) is as defined in Proposition 4.1. **Proposition 4.3**.: Let \\(s\\geq 0\\), \\(t_{0}>1\\), \\(T>0\\), \\(b\\in H^{s\\lor t_{0}+9/2}(\\mathbb{R}^{2})\\), and \\(\\underline{U}=(\\underline{\\zeta},\\underline{\\psi})\\in Y_{T}^{s\\lor t_{0}+9/2}\\) be admissible on \\([0,\\frac{\ u T}{\\varepsilon}]\\) for some \\(h_{0}>0\\) and \\(c_{0}>0\\). 
Then, for all \\((H,V^{0})\\in X_{T}^{s}\\times X^{s}\\), there exists a unique solution \\(V\\in X_{T}^{s}\\) to (4.7) and for all \\(0\\leq t\\leq\\frac{\ u T}{\\varepsilon}\\), \\[E^{s}(V(t))\\leq\\underline{C}\\big{(}I^{s}(t,V^{0},H)+\\big{\\langle}|\\underline{ U}|_{Y_{T}^{s+7/2}}I^{t_{0}+1}(t,V^{0},H)\\big{\\rangle}_{s>t_{0}+1}\\big{)}.\\] Proof.: _Throughout this proof, \\(\\underline{C}_{0}\\) denotes a nondecreasing function of \\(\\frac{1}{c_{0}}\\), \\(\\frac{\\varepsilon}{\ u}\\), \\(M[\\underline{\\sigma}]\\), \\(|\\underline{\\mathbf{v}}|_{H^{t_{0}+2}}\\), \\(|\\underline{\\mathbf{b}}|_{H^{t_{0}+2}}\\), and \\(|\\partial_{t}\\underline{\\mathbf{b}}|_{\\infty}\\) which may vary from one line to another, and \\(\\underline{\\sigma}\\) is given by (2.5) with \\(\\zeta=\\underline{\\zeta}\\)._ Existence of a solution to the IVP (4.7) is achieved by classical means,and we thus focus our attention on the proof of the energy estimate. For any given \\(\\kappa\\in\\mathbb{R}\\), we compute \\[e^{\\frac{\\varepsilon\\kappa}{\ u}t}\\frac{d}{dt}(e^{-\\frac{ \\varepsilon\\kappa}{\ u}t}E^{s}(V)^{2})=-\\frac{\\varepsilon\\kappa}{\ u}E^{s}(V)^ {2}+2\\frac{\\varepsilon}{\ u}(\\Lambda^{s}H,SA^{s}V)\\] \\[-2(\\Lambda^{s}M_{(\\underline{\\zeta},\\underline{\\psi})}V,S \\Lambda^{s}V)+(\\Lambda^{s}V,[\\partial_{t},S]\\Lambda^{s}V). \\tag{4.10}\\] We now turn to bound from above the different components of the r.h.s. of (4.10). \\(\\bullet\\) Estimate of \\((\\Lambda^{s}H,SA^{s}V)\\). We can rewrite this term as \\[(\\sqrt{\\underline{a}}\\Lambda^{s}H_{1},\\sqrt{\\underline{a}}\\Lambda^{s}V_{1})+( \\frac{\\varepsilon}{\ u}\\Lambda^{s}H_{2},\\frac{\\varepsilon}{\ u}\\Lambda^{s}V_{ 2})+(\\Lambda^{s}H_{2},\\frac{1}{\\mu\ u}\\mathcal{G}[\\varepsilon\\underline{\\zeta }]\\Lambda^{s}V_{2}),\\] so that Cauchy-Schwartz inequality and Proposition 3.2 yield \\[(\\Lambda^{s}H,SA^{s}V)\\leq E^{s}(H)E^{s}(V). \\tag{4.11}\\] \\(\\bullet\\) Estimate of \\((\\Lambda^{s}M_{(\\underline{\\zeta},\\underline{\\psi})}V,S\\Lambda^{s}V)\\). 
One computes \\[(\\Lambda^{s}M_{(\\underline{\\zeta},\\underline{\\psi})}V,SA^{s}V)= \\big{(}\\Lambda^{s}(\\frac{\\varepsilon}{\ u}\\mathrm{div}_{\\gamma}(\\underline{ \\underline{v}}V_{1})-\\frac{1}{\\mu\ u}\\mathcal{G}[\\varepsilon\\underline{\\zeta }]V_{2}),\\underline{a}\\Lambda^{s}V_{1}\\big{)}\\] \\[+\\big{(}\\Lambda^{s}(\\underline{a}V_{1}+\\frac{\\varepsilon}{\ u} \\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma}V_{2}),(\\frac{\\varepsilon^{2}}{\ u ^{2}}+\\frac{1}{\\mu\ u}\\mathcal{G}[\\varepsilon\\underline{\\zeta}])\\Lambda^{s}V _{2}\\big{)},\\] so that one can write \\[(\\Lambda^{s}M_{(\\underline{\\zeta},\\underline{\\psi})}V,S\\Lambda^{s}V)=A_{1}+A_ {2}+A_{3}+A_{4}+A_{5},\\] with \\[A_{1}=\\frac{\\varepsilon}{\ u}\\big{(}\\Lambda^{s}\\mathrm{div}_{ \\gamma}(\\underline{\\underline{v}}V_{1}),\\underline{a}\\Lambda^{s}V_{1}\\big{)},\\] \\[A_{2}=\\frac{\\varepsilon}{\ u}\\big{(}\\Lambda^{s}(\\underline{ \\mathbf{v}}\\cdot\ abla^{\\gamma}V_{2}),\\frac{\\varepsilon^{2}}{\ u^{2}}\\Lambda^ {s}V_{2}\\big{)},\\] \\[A_{3}=\\frac{\\varepsilon}{\ u}\\big{(}\\Lambda^{s}(\\underline{ \\mathbf{v}}\\cdot\ abla^{\\gamma}V_{2}),\\frac{1}{\\mu\ u}\\mathcal{G}[\\varepsilon \\underline{\\zeta}]\\Lambda^{s}V_{2}\\big{)},\\] \\[A_{4}=\\big{(}\\Lambda^{s}(\\underline{a}V_{1}),\\frac{1}{\\mu\ u} \\mathcal{G}[\\varepsilon\\underline{\\zeta}]\\Lambda^{s}V_{2}\\big{)}-\\big{(} \\underline{a}\\Lambda^{s}V_{1},\\Lambda^{s}(\\frac{1}{\\mu\ u}\\mathcal{G}[ \\varepsilon\\underline{\\zeta}]V_{2})\\big{)},\\] \\[A_{5}=\\big{(}\\Lambda^{s}(\\underline{a}V_{1}),\\frac{\\varepsilon^ {2}}{\ u^{2}}\\Lambda^{s}V_{2}).\\] We now turn to prove the following estimates: \\[A_{j}\\leq\\frac{\\varepsilon}{\ u}C_{\\eta}E^{s}(V)\\Big{(}\\big{(}1+ \\frac{\ u}{\\varepsilon}\\|\ abla^{\\mu,\\gamma}\\underline{\\sigma}\\|_{L^{\\infty} H^{t_{0}+2}}\\big{)}E^{s}(V)\\] \\[+\\big{(}\\big{|}\\underline{\\mathbf{v}}|_{H^{s+1}}+|\\underline{ \\mathbf{b}}|_{H^{s+1}}+\\frac{\ u}{\\varepsilon}\\|\ abla^{\\mu,\\gamma}\\underline{ \\sigma}\\|_{L^{\\infty}H^{s+1}}\\big{)}E^{t_{0}+1}(V)\\big{\\rangle}_{s>t_{0}+1} \\Big{)}, \\tag{4.12}\\] for \\(j=1,\\ldots,5\\). * Control of \\(A_{1}\\) and \\(A_{2}\\). Integrating by parts, one obtains \\[\\frac{\ u}{\\varepsilon}A_{1} = \\big{(}[\\Lambda^{s},\\operatorname{div}_{\\gamma}(\\underline{\\mathbf{ v}}\\cdot)]V_{1},\\underline{\\mathbf{a}}\\Lambda^{s}V_{1}\\big{)}-\\frac{1}{2}\\big{(} \\Lambda^{s}V_{1},(\\underline{\\mathbf{v}}\\cdot\ abla^{\\gamma}\\underline{ \\mathbf{a}})\\Lambda^{s}V_{1}\\big{)}\\] \\[+\\frac{1}{2}\\big{(}\\underline{\\mathbf{a}}\\Lambda^{s}V_{1},( \\operatorname{div}_{\\gamma}\\underline{\\mathbf{v}})\\Lambda^{s}V_{1}\\big{)}\\] and \\[\\frac{\ u}{\\varepsilon}A_{2}=\\big{(}[\\Lambda^{s},\\underline{\\mathbf{v}}]\ abla^{ \\gamma}V_{2},\\frac{\\varepsilon^{2}}{\ u^{2}}\\Lambda^{s}V_{2}\\big{)}-\\frac{ \\varepsilon^{2}}{2\ u^{2}}\\big{(}\\Lambda^{s}V_{2},(\\operatorname{div}_{\\gamma }\\underline{\\mathbf{v}})\\Lambda^{s}V_{2}\\big{)}.\\] Recalling that \\(\\underline{\\mathbf{a}}=1+\\frac{\\varepsilon}{\ u}\\underline{\\mathbf{b}}\\), one can then deduce easily (with the help of Proposition 2.1 and Corollary 2.1 to control the commutators in the above expressions) that (4.12) holds for \\(j=1,2\\). * Control of \\(A_{3}\\). 
First write \\(A_{3}=A_{31}+A_{32}\\) with \\[A_{31} = \\frac{\\varepsilon}{\ u}\\big{(}\\frac{1}{\\mu\ u}\\mathcal{G}[ \\varepsilon\\underline{\\zeta}]\\Lambda^{s}V_{2},[\\Lambda^{s},\\underline{ \\mathbf{v}}]\\cdot\ abla^{\\gamma}V_{2}\\big{)}\\] \\[A_{32} = \\frac{\\varepsilon}{\ u}\\big{(}\\underline{\\mathbf{v}}\\cdot\ abla^{ \\gamma}\\Lambda^{s}V_{2},\\frac{1}{\\mu\ u}\\mathcal{G}[\\varepsilon\\underline{ \\zeta}]\\Lambda^{s}V_{2}\\big{)}.\\] Thanks to Proposition 3.2, one gets \\[A_{31}\\leq\\frac{\\varepsilon}{\ u}\\big{(}\\frac{1}{\\mu\ u}\\mathcal{G}[ \\varepsilon\\underline{\\zeta}][\\Lambda^{s},\\underline{\\mathbf{v}}]\\cdot \ abla^{\\gamma}V_{2},[\\Lambda^{s},\\underline{\\mathbf{v}}]\\cdot\ abla^{\\gamma}V _{2}\\big{)}^{1/2}E^{s}(V),\\] and Propositions 3.7.**i** and 3.4 can then be used to show that \\(A_{31}\\) is bounded from above by the r.h.s. of (4.12). This is also the case of \\(A_{32}\\), as a direct consequence of Propositions 3.7.**ii** and 3.4. It follows that (4.12) holds for \\(j=3\\). * Control of \\(A_{4}\\). One computes, remarking that \\([\\Lambda^{s},\\underline{\\mathbf{a}}]=\\frac{\\varepsilon}{\ u}[\\Lambda^{s}, \\underline{\\mathbf{b}}]\\), \\[A_{4} = \\big{(}\\underline{\\mathbf{a}}\\Lambda^{s}V_{1},[\\frac{1}{\\mu\ u} \\mathcal{G}[\\varepsilon\\underline{\\zeta}],\\Lambda^{s}]V_{2}\\big{)}+\\frac{ \\varepsilon}{\ u}\\big{(}[\\Lambda^{s},\\underline{\\mathbf{b}}]V_{1},\\frac{1}{\\mu \ u}\\mathcal{G}[\\varepsilon\\underline{\\zeta}]\\Lambda^{s}V_{2}\\big{)}\\] \\[:= A_{41}+A_{42}.\\] Using successively Cauchy-Schwartz inequality, Proposition 3.5, and Proposition 3.4, one obtains directly that \\(A_{41}\\) is bounded from above by the r.h.s. of (4.12). In order to control \\(A_{42}\\), first remark that using Propositions 3.2 and 3.4, one gets \\[\\frac{\ u}{\\varepsilon}A_{42} \\leq \\big{(}[\\Lambda^{s},\\underline{\\mathbf{b}}]V_{1},\\frac{1}{\\mu\ u }\\mathcal{G}[\\varepsilon\\underline{\\zeta}][\\Lambda^{s},\\underline{\\mathbf{b}}] V_{1}\\big{)}^{1/2}E^{s}(V)\\] \\[\\leq M[\\underline{\\sigma}]\\big{|}\\mathfrak{P}[\\Lambda^{s}, \\underline{\\mathbf{b}}]V_{1}\\big{|}_{2}E^{s}(V).\\]Recalling that \\(\ u=\\frac{1}{1+\\sqrt{\\mu}}\\) one can check that for all \\(\\xi\\in\\mathbb{R}^{2}\\), \\(\\frac{\ u^{-1/2}|\\xi^{\\gamma}|}{(1+\\sqrt{\\mu}|\\xi^{\\gamma}|)^{1/2}}\\lesssim \\langle\\xi\\rangle,\\)_uniformly with respect to \\(\\mu\\) and \\(\\gamma\\)_, so that one deduces \\[\\frac{\ u}{\\varepsilon}A_{42}\\leq M[\\underline{\\sigma}]\\big{|}[\\Lambda^{s}, \\underline{\\mathfrak{b}}]V_{1}\\big{|}_{H^{1}}E^{s}(V).\\] Remarking that owing to Proposition 2.1, one has \\[\\big{|}[\\Lambda^{s},\\underline{\\mathfrak{b}}]V_{1}|_{H^{1}} \\leq\\] \\[\\leq\\frac{1}{\\sqrt{c_{0}}}\\big{(}|\\underline{\\mathfrak{b}}|_{H^{t _{0}+2}}E^{s}(V)+\\big{\\langle}|\\underline{\\mathfrak{b}}|_{H^{s+1}}E^{t_{0}+1}( V)\\big{\\rangle}_{s>t_{0}+1}\\big{)},\\] and \\(A_{42}\\) is thus bounded from above by the r.h.s. of (4.12). This shows that (4.12) also holds for \\(j=4\\). * Control of \\(A_{5}\\). 
First remark that \\[A_{5}=(\\Lambda^{s}V_{1},\\frac{\\varepsilon^{2}}{\ u^{2}}\\Lambda^{s}V_{2})+ \\frac{\\varepsilon}{\ u}(\\Lambda^{s}(\\underline{\\mathfrak{b}}V_{1}),\\frac{ \\varepsilon^{2}}{\ u^{2}}\\Lambda^{s}V_{2}),\\] so that Cauchy-Schwartz inequality and the tame product estimate (2.1) yield \\[A_{5}\\leq\\frac{\\varepsilon}{\ u}\\big{(}(1+|\\frac{\\varepsilon}{ \ u}\\underline{\\mathfrak{b}}|_{H^{t_{0}}})|V_{1}|_{H^{s}}+\\big{\\langle}|\\frac{ \\varepsilon}{\ u}\\underline{\\mathfrak{b}}|_{H^{s}}|V_{1}|_{H^{t_{0}}}\\big{ \\rangle}_{s>t_{0}}\\big{)}\\frac{\\varepsilon}{\ u}|V_{2}|_{H^{s}}\\] \\[\\leq\\frac{\\varepsilon}{\ u}\\frac{1}{\\sqrt{c_{0}}}\\big{(}(1+|\\frac {\\varepsilon}{\ u}\\underline{\\mathfrak{b}}|_{H^{t_{0}+1}})E^{s}(V)+\\big{\\langle} |\\frac{\\varepsilon}{\ u}\\underline{\\mathfrak{b}}|_{H^{s}}E^{t_{0}+1}(V)\\big{ \\rangle}_{s>t_{0}+1}\\big{)}E^{s}(V),\\] and (4.12) thus holds for \\(j=5\\). From (4.12), we obtain directly \\[(\\Lambda^{s}M_{(\\underline{\\zeta},\\underline{\\psi})}V,SA^{s}V) \\leq\\frac{\\varepsilon}{\ u}\\underline{C}_{0}E^{s}(V)\\big{(}\\big{(}1+\\frac{ \ u}{\\varepsilon}\\|\ abla^{\\mu,\\gamma}\\underline{\\sigma}\\|_{L^{\\infty}H^{t_{0 }+2}}\\big{)}E^{s}(V)\\] \\[+\\big{\\langle}\\big{(}|\\underline{\\mathbf{v}}|_{H^{s+1}}+| \\underline{\\mathfrak{b}}|_{H^{s+1}}+\\frac{\ u}{\\varepsilon}\\|\ abla^{\\mu, \\gamma}\\underline{\\sigma}\\|_{L^{\\infty}H^{s+1}}\\big{)}E^{t_{0}+1}(V)\\big{ \\rangle}_{s>t_{0}+1}\\big{)}.\\] (4.13) \\(\\bullet\\) Estimate of \\((\\Lambda^{s}V,[\\partial_{t},S]\\Lambda^{s}V)\\). One has \\[(\\Lambda^{s}V,[\\partial_{t},S]\\Lambda^{s}V)=\\frac{\\varepsilon}{\ u}(\\Lambda^{ s}V_{1},\\partial_{t}\\underline{\\mathfrak{b}}\\Lambda^{s}V_{1})+(\\Lambda^{s}V_{2},[ \\partial_{t},\\frac{1}{\\mu\ u}\\mathcal{G}[\\varepsilon\\underline{\\zeta}]] \\Lambda^{s}V_{2}),\\] so that, using Proposition 3.6 to control the second component of the r.h.s., one gets easily \\[(\\Lambda^{s}V,[\\partial_{t},S]\\Lambda^{s}V)\\leq\\frac{\\varepsilon}{\ u} \\underline{C}_{0}(1+\\frac{\ u}{\\varepsilon}\\|\ abla^{\\mu,\\gamma}\\partial_{t} \\underline{\\sigma}\\|_{\\infty})E^{s}(V)^{2}. 
\\tag{4.14}\\]According to (4.10), (4.11), (4.13) and (4.14), we have \\[e^{\\frac{\\varepsilon\\kappa}{\ u}t}\\frac{d}{dt}(e^{-\\frac{\\varepsilon\\kappa}{\ u} t}E^{s}(V)^{2})\\leq\\frac{\\varepsilon}{\ u}E^{s}(V)\\big{(}2E^{s}(H)+\\underline{C}_{0} \\big{\\langle}D_{s}E^{t_{0}+1}(V)\\big{\\rangle}_{s>t_{0}+1}\\big{)}, \\tag{4.15}\\] with \\(D_{s}:=\\big{(}|\\underline{\\mathbf{v}}|_{H^{s+1}}+\\frac{\ u}{\\varepsilon}\\| \ abla^{\\mu,\\gamma}\\underline{\\sigma}\\|_{L^{\\infty}H^{s+1}}+|\\underline{ \\mathbf{b}}|_{H^{s+1}}\\big{)}\\), provided that \\(\\kappa\\) is large enough, how large depending only on \\[\\sup_{t\\in[0,\\frac{\ u T}{\\varepsilon}]}\\Big{[}\\underline{C}_{0}(t)\\big{(}1+ \\frac{\ u}{\\varepsilon}\\|\ abla^{\\mu,\\gamma}\\underline{\\sigma}(t)\\|_{L^{ \\infty}H^{t_{0}+2}}+\\frac{\ u}{\\varepsilon}\\|\ abla^{\\mu,\\gamma}\\partial_{t} \\underline{\\sigma}(t)\\|_{\\infty}\\big{)}\\Big{]}.\\] It follows from (4.15) that, \\[E^{s}(V(t)) \\leq e^{\\frac{\\varepsilon\\kappa}{\ u}t}E^{s}(V^{0})+\\frac{ \\varepsilon}{\ u}\\int_{0}^{t}e^{\\frac{\\varepsilon\\kappa}{\ u}(t-t^{\\prime})}E^ {s}(H(t^{\\prime}))dt^{\\prime}\\] \\[+\\big{\\langle}\\frac{\\varepsilon}{\ u}\\underline{C}_{0}(\\sup_{[0, \ u T/\\varepsilon]}D_{s})\\int_{0}^{t}e^{\\frac{\\varepsilon\\kappa}{\ u}(t-t^{ \\prime})}E^{t_{0}+1}(V(t^{\\prime}))dt^{\\prime}\\big{\\rangle}_{s>t_{0}+1}; \\tag{4.16}\\] using (4.16) with \\(s=t_{0}+1\\) gives \\[E^{t_{0}+1}(V(t))\\leq e^{\\frac{\\varepsilon\\kappa}{\ u}t}E^{t_{0}+1}(V^{0})+ \\frac{\\varepsilon}{\ u}te^{\\frac{\\varepsilon\\kappa}{\ u}t}\\sup_{0\\leq t^{ \\prime}\\leq t}E^{t_{0}+1}(H(t^{\\prime})),\\] and plugging this expression back into (4.16) gives therefore \\[E^{s}(V(t))\\leq\\underline{C}_{1}\\big{(}I^{s}(t,V^{0},H)+\\big{\\langle}(\\sup_{ t\\in[0,\ u T/\\varepsilon]}D_{s})I^{t_{0}+1}(t,V^{0},H)\\big{\\rangle}_{s>t_{0}+1} \\big{)},\\] where \\(\\underline{C}_{1}\\) is a nondecreasing function of \\(T,\\frac{1}{c_{0}},\\frac{1}{h_{0}},\\frac{\\varepsilon}{\ u}\\) and of the supremum on the time interval \\([0,\\frac{\ u T}{\\varepsilon}]\\) of \\(\\frac{\ u}{\\varepsilon}\\|\ abla^{\\mu,\\gamma}\\underline{\\sigma}\\|_{L^{\\infty} H^{t_{0}+2}}\\), \\(\\frac{\ u}{\\varepsilon}\\|\ abla^{\\mu,\\gamma}\\partial_{t}\\underline{\\sigma}\\|_{\\infty}\\), \\(|\\underline{\\mathbf{v}}|_{H^{t_{0}+2}}\\), \\(|\\underline{\\mathbf{b}}|_{H^{t_{0}+2}}\\) and \\(|\\partial_{t}\\underline{\\mathbf{b}}|_{L^{\\infty}}\\). 
The proposition follows therefore from the following lemma: **Lemma 4.1**.: With \\(\\underline{C}\\) and \\(|\\cdot|_{Y_{T}^{s}}\\) as defined in the statement of Proposition 4.1 and Definition 4.2, one has, \\[\\forall s\\geq t_{0}+1,\\qquad\\sup_{t\\in[0,\ u T/\\varepsilon]}D_{s}(t)\\leq \\underline{C}|\\underline{U}|_{Y_{T}^{s+7/2}}\\quad\\text{ and }\\quad\\underline{C}_{1}\\leq \\underline{C}.\\] Proof.: Remark first that, as a consequence of Proposition 3.3, one has for all \\(r\\geq t_{0}+1\\), \\[\\big{|}\\frac{1}{\\sqrt{\\mu}}\\mathcal{G}[\\varepsilon\\underline{\\zeta}]\\underline {\\psi}\\big{|}_{H^{r}}\\leq C\\big{(}\\frac{1}{h_{0}},\\frac{1}{c_{0}},\\frac{ \\varepsilon}{\ u},\\frac{\\beta}{\\varepsilon},|\\underline{U}|_{\\widetilde{X}^{t _{0}+2}},|b|_{H^{r+3/2}}\\big{)}|\\underline{U}|_{\\widetilde{X}^{r+3/2}}; \\tag{4.17}\\]since \\(|\\xi^{\\gamma}|\\leq\\frac{\ u^{-1/2}|\\xi^{\\gamma}|(1+|\\xi|)^{1/2}}{(1+\\sqrt{\\mu}| \\xi^{\\gamma}|)^{1/2}}\\), uniformly with respect to \\(\\gamma\\in(0,1]\\) and \\(\\mu>0\\), one also has \\[|\ abla^{\\gamma}\\underline{\\psi}|_{H^{r}}\\leq|\\mathfrak{P}\\underline{\\psi}|_{H ^{r+1/2}}\\leq|\\underline{U}|_{\\tilde{X}^{r+1/2}}. \\tag{4.18}\\] It follows from the explicit expression of \\(\\underline{Z}\\) given in Theorem 3.1 that \\(\\varepsilon\\underline{Z}\\) is a smooth function of \\(\\varepsilon\\sqrt{\\mu}\\leq\\varepsilon/\ u\\), \\(\ abla^{\\gamma}\\underline{\\zeta}\\), \\(\ abla^{\\gamma}\\underline{\\psi}\\) and \\(\\frac{1}{\\sqrt{\\mu}}\\mathcal{G}[\\varepsilon\\underline{\\zeta}]\\underline{\\psi}\\). Moser's type estimates then imply that for all \\(r\\geq t_{0}+1\\), \\(|\\varepsilon\\underline{Z}|_{H^{r}}\\) - and hence \\(|\\underline{\\mathbf{v}}|_{H^{s+1}}\\) - is bounded from above by \\(\\underline{C}|\\underline{U}|_{Y_{T}^{s+7/2}}\\). This is also the case of the second component of \\(D_{s}\\), as a direct consequence of (2.5), and because \\(\ u\\sqrt{\\mu}\\leq 1\\). To control the third component of \\(D_{s}\\), namely, \\(\\sup_{[0,\\frac{\ u T}{\\varepsilon}]}|\\underline{\\mathfrak{b}}|_{H^{s+1}}\\) (with \\(\\underline{\\mathfrak{b}}\\) given by (4.5)), we need to bound \\(|\ u\\partial_{t}\\underline{Z}|_{H^{s+1}}\\) from above. 
Using Theorem 3.1 to compute explicitly \\(\\partial_{t}\\underline{Z}\\), one finds \\[\ u\\partial_{t}\\underline{Z}=\\frac{\\sqrt{\\mu}\ u}{1+\\varepsilon^{ 2}\\mu|\ abla^{\\gamma}\\underline{\\zeta}|^{2}}\\Big{(}\\frac{1}{\\sqrt{\\mu}} \\mathcal{G}[\\varepsilon\\underline{\\zeta}]\\partial_{t}\\underline{\\psi}+ \\varepsilon\\sqrt{\\mu}\ abla^{\\gamma}\\underline{\\zeta}\\cdot\ abla^{\\gamma} \\partial_{t}\\underline{\\psi}\\] \\[-\\varepsilon\\sqrt{\\mu}\ abla^{\\gamma}\\partial_{t}\\underline{ \\zeta}\\cdot(\\varepsilon\\underline{Z})\ abla^{\\gamma}\\underline{\\zeta}- \\varepsilon\\sqrt{\\mu}\\partial_{t}\\underline{\\zeta}\\mathrm{div}_{\\gamma} \\underline{\\mathbf{v}}-\\frac{1}{\\sqrt{\\mu}}\\mathcal{G}[\\varepsilon\\underline{ \\zeta}]\\big{(}\\partial_{t}\\underline{\\zeta}(\\varepsilon\\underline{Z})\\big{)} \\Big{)},\\] which is a smooth function of \\(\\varepsilon\\sqrt{\\mu}\\leq\\varepsilon/\ u\\), \\(\ abla^{\\gamma}\\underline{\\zeta}\\), \\(\\partial_{t}\\underline{\\zeta}\\), \\(\ abla^{\\gamma}\\partial_{t}\\underline{\\zeta}\\), \\(\ abla^{\\gamma}\\partial_{t}\\underline{\\psi}\\), \\(\\varepsilon\\underline{Z}\\), \\(\\frac{1}{\\sqrt{\\mu}}\\mathcal{G}[\\varepsilon\\underline{\\zeta}]\\partial_{t} \\underline{\\psi}\\) and \\(\\frac{1}{\\sqrt{\\mu}}\\mathcal{G}[\\varepsilon\\underline{\\zeta}]\\big{(} \\partial_{t}\\underline{\\zeta}(\\varepsilon\\underline{Z})\\big{)}\\). The sought after estimate on \\(|\\underline{\\mathfrak{b}}|_{H^{s+1}}\\) thus follows from Moser's type estimates (note that Remark 3.2 is used to control \\(\\frac{1}{\\sqrt{\\mu}}\\mathcal{G}[\\varepsilon\\underline{\\zeta}]\\big{(}\\partial_ {t}\\underline{\\zeta}(\\varepsilon\\underline{Z})\\big{)}\\) in terms of Sobolev norms of \\(\\partial_{t}\\underline{\\zeta}\\) and \\(\\varepsilon\\underline{Z}\\)). The estimate on \\(\\underline{C}_{1}\\) is obtained exactly in the same way and we omit the proof. ### Proof of Proposition 4.1 Deducing Proposition 4.1 from Proposition 4.3 is only a technical step, essentially based on the equivalence of the norms \\(E^{s}\\) and \\(|\\cdot|_{X^{s}}\\) stemming from Proposition 3.4. We only give the main steps of the proof. **Step 1.** Since \\(U=(V_{1},V_{2}+\\varepsilon\\underline{Z}V_{1})\\), one can expand \\(|U|_{X^{s}}\\) in terms of \\(V_{1}\\) and \\(V_{2}\\) and control the different components using the norm \\(E^{s}\\) to obtain: \\[|U|_{X^{s}}\\leq\\underline{C}\\times\\big{(}E^{s+1}(V)+\\langle|\\underline{U}|_{ \\tilde{X}^{s+5/2}}E^{t_{0}+1}(V)\\rangle_{s>t_{0}}\\big{)}. \\tag{4.19}\\] **Step 2.** Using Proposition 4.3 to control \\(E^{s+1}(V)\\) and \\(E^{t_{0}+1}(V)\\) in terms of \\(V^{0}=(U_{1}^{0},U_{2}^{0}-\\varepsilon\\underline{Z}|_{t=0}U_{1}^{0})\\) and \\(H=(G_{1},G_{2}-\\varepsilon\\underline{Z}G_{1})\\) in (4.19), one gets \\[|U(t)|_{X^{s}}\\leq\\underline{C}\\big{(}\\mathcal{I}^{s+1}(t,V^{0},H)+\\langle| \\underline{U}|_{Y_{T}^{s+9/2}}(\\mathcal{I}^{t_{0}+1}(t,V^{0},H))\\rangle_{s>t_ {0}}\\big{)}.\\] **Step 3.** Replacing \\(H\\) by \\((G_{1},G_{2}-\\varepsilon\\underline{Z}G_{1})\\) and \\(V^{0}\\) by \\((U^{1}_{0},U^{0}_{2}-\\varepsilon\\underline{\\zeta}|_{t=0}U^{0}_{1})\\) one obtains the following control on \\({\\mathcal{I}}^{r+1}(t,V^{0},H)\\) (\\(r=s,t_{0}\\)): \\[{\\mathcal{I}}^{r+1}(t,V^{0},H)\\leq\\underline{C}({\\mathcal{I}}^{r+2}(t,U^{0},G) +\\langle|\\underline{U}|_{\\widetilde{X}^{r+7/2}}{\\mathcal{I}}^{t_{0}+2}(t,U^{0 },G)\\rangle_{r>t_{0}}).\\] **Step 4.** The proposition follows from Steps 2 and 3. 
## 5 Main results ### Large time existence for the water-waves equations In this section we prove the main result of this paper, which proves the well-posedness of the water-waves equations over large times and provides a uniform energy control which will allow us to justify all the asymptotic regimes evoked in the introduction. Recall first that the semi-normed spaces \\((\\widetilde{X}^{s},|\\cdot|_{\\widetilde{X}^{s}})\\) have been defined in Definition 4.2 as \\[\\widetilde{X}^{s}:=\\{(\\zeta,\\psi),\\zeta\\in H^{s}({\\mathbb{R}}^{2}),\ abla \\psi\\in H^{s-1/2}({\\mathbb{R}}^{2})^{2}\\},\\] and \\(|(\\zeta,\\psi)|_{\\widetilde{X}^{s}}:=|\\zeta|_{H^{s}}+|\\mathfrak{P}\\psi|_{H^{s}}\\), and define also the mapping \\(\\mathfrak{a}\\) by \\[\\mathfrak{a}(\\zeta,\\psi):= \\ \\frac{\\varepsilon^{2}}{\ u}(\ abla^{\\gamma}\\psi-\\varepsilon \\mathcal{Z}[\\varepsilon\\zeta]\ abla^{\\gamma}\\zeta)\\cdot\ abla^{\\gamma} \\mathcal{Z}[\\varepsilon\\zeta]\\psi\\] \\[- \\ \\varepsilon\\mathcal{Z}[\\varepsilon\\zeta](\\zeta+\\mathcal{A}_{2}[( \\zeta,\\psi)])+\\varepsilon d_{\\zeta}\\mathcal{Z}[\\varepsilon\\cdot]\\psi\\cdot \\mathcal{G}[\\varepsilon\\zeta]\\psi+1,\\] where \\(\\mathcal{Z}[\\varepsilon\\zeta]\\) is as defined in Theorem 3.1 and \\(\\mathcal{A}_{2}\\) is defined in (4.2), and where \\(\ u=(1+\\sqrt{\\mu})^{-1}\\). The only condition we impose on the parameters is that the steepness \\(\\varepsilon\\sqrt{\\mu}\\) and the ratio \\(\\beta/\\varepsilon\\) remain bounded. More precisely, \\((\\varepsilon,\\mu,\\gamma,\\beta)\\in\\mathcal{P}_{M}\\) (\\(M>0\\)) with \\[\\mathcal{P}_{M}\\!\\!=\\!\\!\\{(\\varepsilon,\\mu,\\gamma,\\beta)\\!\\in\\!(0,1]\\!\\times\\!( 0,\\infty)\\!\\times\\!(0,1]\\!\\times\\![0,1],\\varepsilon\\sqrt{\\mu}\\!\\leq\\!M\\ \\mbox{and}\\ \\frac{\\beta}{ \\varepsilon}\\!\\leq\\!M\\}.\\] We can now state the theorem: **Theorem 5.1**.: Let \\(t_{0}>1\\), \\(M>0\\) and \\(\\mathcal{P}\\subset\\mathcal{P}_{M}\\). There exists \\(P>D>0\\) such that for all \\(s\\geq s_{0}\\), \\(b\\in H^{s+P}({\\mathbb{R}}^{2})\\), and all family \\((\\zeta^{0}_{p},\\psi^{0}_{p})_{p\\in\\mathcal{P}}\\) bounded in \\(\\widetilde{X}^{s+P}\\) satisfying \\[\\inf_{{\\mathbb{R}}^{s}}1+\\varepsilon\\zeta^{0}_{p}-\\beta b>0\\quad\\mbox{ and }\\quad\\inf_{{\\mathbb{R}}^{2}}\\mathfrak{a}(\\zeta^{0}_{p},\\psi^{0}_{p})>0\\] (uniformly with respect to \\(p=(\\varepsilon,\\mu,\\gamma,\\beta)\\in\\mathcal{P}\\)), there exist \\(T>0\\) and a unique family \\((\\zeta_{p},\\psi_{p})_{p\\in\\mathcal{P}}\\) bounded in \\(C([0,\\frac{\ u T}{\\varepsilon}];\\widetilde{X}^{s+D})\\) solving (1.4) with initial condition \\((\\zeta^{0}_{p},\\psi^{0}_{p})_{p\\in\\mathcal{P}}\\). 
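To fix the orders of magnitude in the existence time \\(\\frac{\\nu T}{\\varepsilon}\\) of Theorem 5.1, the following elementary computation (given here only for orientation; it uses nothing beyond the definition \\(\\nu=(1+\\sqrt{\\mu})^{-1}\\) recalled above) may be helpful: \\[\\mu\\leq 1\\;\\Longrightarrow\\;\\tfrac{1}{2}\\leq\\nu\\leq 1\\;\\Longrightarrow\\;\\frac{\\nu T}{\\varepsilon}\\sim\\frac{T}{\\varepsilon},\\qquad\\qquad\\mu\\geq 1\\;\\Longrightarrow\\;\\frac{1}{2\\sqrt{\\mu}}\\leq\\nu\\leq\\frac{1}{\\sqrt{\\mu}}\\;\\Longrightarrow\\;\\frac{\\nu T}{\\varepsilon}\\sim\\frac{T}{\\varepsilon\\sqrt{\\mu}}.\\] In shallow water the time scale is thus of order \\(1/\\varepsilon\\), while in deep water it is of order \\(1/(\\varepsilon\\sqrt{\\mu})\\), that is, the inverse of the steepness; this is consistent with the regimes treated in Section 6.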
**Remark 5.1**.: The time interval of the solution varies with \\(p\\in\\mathcal{P}\\) (through \\(\\varepsilon\\) and \\(\ u\\)); when we say that \\((\\zeta_{p},\\psi_{p})_{p\\in\\mathcal{P}}\\) is bounded in \\(C([0,\\frac{\ u T}{\\varepsilon}];\\widetilde{X}^{s+D})\\), we mean that there exists \\(C\\) such that \\[\\forall p\\in\\mathcal{P},\\quad\\forall t\\in[0,\\frac{\ u T}{\\varepsilon}],\\qquad \\left|\\zeta_{p}(t)\\right|_{H^{s+D}}+\\left|\\mathfrak{P}\\psi_{p}(t)\\right|_{H^{s +D}}\\leq C.\\] **Remark 5.2**.: For the shallow water regime for instance, one has \\(\\varepsilon=\\beta=\\gamma=1\\) and \\(\\mu\\) is small (say, \\(\\mu<1\\)); thus, we can take \\(\\mathcal{P}=\\{1\\}\\times(0,1)\\times\\{1\\}\\times\\{1\\}\\); for the KP regime (with flat bottom), one takes \\(\\mathcal{P}=\\{(\\varepsilon,\\varepsilon,\\sqrt{\\varepsilon}),\\varepsilon\\in(0,1 )\\}\\times\\{0\\}\\), etc. **Remark 5.3**.: The condition \\(\\inf_{\\mathbb{R}^{2}}\\mathfrak{a}(\\zeta_{p}^{0},\\psi_{p}^{0})>0\\) is the classical Taylor sign condition proper to the water-wave equations ([48; 49; 29; 34; 35; 3; 12; 45], among others). It is obviously true for small data and we give in Proposition 5.1 some simple sufficient conditions on the initial data and the bottom parameterization \\(b\\) which ensure that it is satisfied. **Remark 5.4**.: One also has the following stability property (see Corollary 1 in [2]): let \\(\\underline{T}>0\\) and \\((U_{p}^{app})_{p\\in\\mathcal{P}}=(\\zeta_{p}^{app},\\psi_{p}^{app})_{p\\in\\mathcal{ P}}\\), bounded in \\(Y_{\\underline{T}}^{s+P}\\) (see Definition 4.2), be an approximate solution of (1.4) in the sense that \\[\\partial_{t}U_{p}^{app}+\\mathcal{L}U_{p}^{app}+\\frac{\\varepsilon}{\ u} \\mathcal{A}[U_{p}^{app}]=\\frac{\\varepsilon}{\ u}\\delta_{p}R_{p},\\qquad U_{p}^{ app}\\mid_{t=0}=(\\zeta_{p}^{0},\\psi_{p}^{0})+\\delta_{p}r_{p},\\] with \\((R_{p},r_{p})_{p}\\) bounded in \\(C([0,\\frac{\ u T}{\\varepsilon}];X^{s+P})\\cap C^{1}([0,\\frac{\ u T}{\\varepsilon }];X^{s+P-5/2})\\times X^{s+P}\\) (and \\(\\delta_{p}\\geq 0\\)). If moreover the \\(U_{p}^{app}\\) are admissible, then one has \\[\\forall t\\in[0,\\frac{\ u}{\\varepsilon}\\inf\\{T,\\underline{T}\\}],\\qquad\\left|U_{ p}(t)-U_{p}^{app}(t)\\right|_{\\widetilde{X}^{s+D}}\\leq\\text{Cst }\\delta_{p},\\] where \\(U_{p}\\in C([0,\\frac{\ u T}{\\varepsilon}];\\widetilde{X}^{s+D})\\) is the solution furnished by the theorem. For \\(\\delta_{p}\\) small enough, one can take \\(T=\\underline{T}\\). **Remark 5.5**.: The numbers \\(P\\) and \\(D\\) could be explicited in the above theorem (as in Theorem 1 of [2] for instance), but since the focus here is not on the regularity of the solutions, we chose to alleviate the proof as much as possible. For the same reason, we use a Nash-Moser iterative scheme which allows us to deal with all the different regimes at once, though it is possible in some cases to push further the analysis of the linearized operator and use a standard iterative scheme (as shown in [24] for the shallow-water regime). Proof.: Let us denote in this proof \\(\\epsilon=\\varepsilon/\ u\\) and omit the index \\(p\\) for the sake of clarity. 
Rescaling the time by \\(t\\rightsquigarrow t/\\epsilon\\) and using the samenotations as in (4.1) and (4.2), the theorem reduces to proving the well-posedness of the IVP \\[\\left\\{\\begin{aligned} &\\partial_{t}U+\\frac{1}{\\epsilon}\\mathcal{ L}U+\\mathcal{A}[U]=0,\\\\ & U|_{t=0}=U^{0},\\end{aligned}\\right.\\] on a time interval \\([0,T]\\), with \\(T>0\\)_independent of all the parameters_. Define first the evolution operator \\(S^{\\varepsilon}(\\cdot)\\) associated to the linear part of the above IVP. The following lemma shows that the definition \\[S^{\\epsilon}(t)U^{0}:=U(t),\\quad\\text{ with }\\quad\\partial_{t}U+\\frac{1}{ \\epsilon}\\mathcal{L}U=0\\quad\\text{ and }\\quad U_{|_{t=0}}=U^{0} \\tag{5.1}\\] makes sense for all data \\(U^{0}\\in\\widetilde{X}^{s}\\). **Lemma 5.1**.: For all \\(U^{0}\\in\\widetilde{X}^{s}\\), \\(S^{\\epsilon}(\\cdot)U^{0}\\) is well defined in \\(C([0,T];\\widetilde{X}^{s})\\) by (5.1). Moreover, for all \\(0\\leq t\\leq T\\), \\[|S^{\\epsilon}(t)U^{0}|_{\\widetilde{X}^{s}}\\leq C(T,\\frac{1}{h_{0}},|b|_{H^{s+ 7/2}},\\frac{\\beta}{\\varepsilon},\\frac{\\varepsilon}{\ u})|U^{0}|_{\\widetilde{ X}^{s}}.\\] Proof.: Proceeding as in the proof of Proposition 4.1 (in the very simple case \\(\\underline{U}=(0,0)^{T}\\)), one checks that \\(S^{\\epsilon}(t)U^{0}\\) makes sense and that the estimate of the lemma holds if \\(U^{0}\\in\\underline{X}^{s}\\). Now, let us extend this result to data \\(U^{0}\\in\\widetilde{X}^{s}\\). Let \\(\\iota\\) be a smooth function vanishing in a neighborhood of the origin and being constant equal to one outside the unit disc, and define, for all \\(\\delta>0\\), \\(\\iota^{\\delta}=\\iota(|D|/\\delta)\\). The couple \\(U^{0,\\delta}:=(\\zeta^{0},\\iota^{\\delta}\\psi^{0})^{T}=(\\zeta^{0},\\psi^{0,\\delta })^{T}\\) then belongs to \\(X^{s}\\) and \\(U^{\\delta}(t):=S^{\\epsilon}(t)U^{0,\\delta}=(\\zeta^{\\delta}(t),\\psi^{\\delta}(t) )^{T}\\) is well defined in \\(X^{s}\\). Since \\[|U^{\\delta}(t)-U^{\\delta^{\\prime}}(t)|_{\\widetilde{X}^{s}}\\leq C(T,\\frac{1}{h _{0}},|b|_{H^{s+7/2}},\\frac{\\beta}{\\varepsilon},\\frac{\\varepsilon}{\ u})|U^{0,\\delta}-U^{0,\\delta^{\\prime}}|_{\\widetilde{X}^{s}},\\] it follows by dominated convergence that \\((\\zeta^{\\delta})_{\\delta\\to 0}\\) and \\((\\mathfrak{P}\\psi^{\\delta})_{\\delta\\to 0}\\) are Cauchy sequences in \\(C([0,T];H^{s}(\\mathbb{R}^{2}))\\). Therefore, \\((\\zeta^{\\delta})\\to\\zeta\\) and \\((\\mathfrak{P}\\psi^{\\delta})\\to\\omega\\) in \\(C([0,T];H^{s}(\\mathbb{R}^{2}))\\), as \\(\\delta\\) goes to \\(0\\). Defining \\(\\psi(t)=\\psi^{0}-\\frac{1}{\\epsilon}\\int_{0}^{t}\\zeta(t^{\\prime})dt^{\\prime}\\) and using \\(\\psi^{\\delta}(t)=\\psi^{0,\\delta}-\\frac{1}{\\epsilon}\\int_{0}^{t}\\zeta^{\\delta}(t ^{\\prime})dt^{\\prime}\\), one deduces \\(\\omega=\\mathfrak{P}\\psi\\), from which one infers that \\(\ abla\\psi\\in C([0,T];H^{s-1/2}(\\mathbb{R}^{2})^{2})\\). From the convergence \\(\\mathfrak{P}\\psi^{\\delta}\\to\\omega=\\mathfrak{P}\\psi\\) in \\(H^{s}(\\mathbb{R}^{2})\\) and Proposition 3.3 one deduces also that \\(\\mathcal{G}[0]\\psi^{\\delta}\\to\\mathcal{G}[0]\\psi\\) in \\(H^{s-1/2}(\\mathbb{R}^{2})\\). One can thus take the limit as \\(\\delta\\to 0\\) in the relation \\(\\partial_{t}\\zeta^{\\delta}(t)-\\frac{1}{\\epsilon}\\frac{1}{\\mu\ u}\\mathcal{G}[0] \\psi^{\\delta}(t)=0\\), thus proving that \\((\\zeta,\\psi)\\in\\widetilde{X}^{s}\\) solves the IVP (5.1). 
Since the solution to this IVP is obviously unique, this shows that \\(S^{\\varepsilon}(\\cdot)U^{0}\\) makes sense in \\(\\widetilde{X}^{s}\\) when \\(U^{0}\\in\\widetilde{X}^{s}\\). The last assertion of the lemma follows by taking the limit when \\(\\delta\\to 0\\) in the following expression \\[|S^{\\epsilon}(t)U^{0,\\delta}|_{\\widetilde{X}^{s}}\\leq C(T,\\frac{1}{h_{0}},|b|_{H^ {s+7/2}},\\frac{\\beta}{\\varepsilon},\\frac{\\varepsilon}{\ u})|U^{0,\\delta}|_{ \\widetilde{X}^{s}}.\\] We now look for the exact solution under the form \\(U=S^{\\varepsilon}(t)U^{0}+V\\), which is equivalent to solving \\[\\left\\{\\begin{aligned} &\\partial_{t}V+\\frac{1}{\\epsilon} \\mathcal{L}V+\\mathcal{F}[t,V]=h\\\\ & V|_{t=0}=(0,0)^{T},\\end{aligned}\\right. \\tag{5.2}\\] with \\(\\mathcal{F}[t,V]:=\\mathcal{A}[S^{\\epsilon}(t)U^{0}+V]-\\mathcal{A}[S^{ \\varepsilon}(t)U^{0}]\\) and \\(h:=-\\mathcal{A}[S^{\\varepsilon}(t)U^{0}]\\). We can now state two important properties satisfied by \\(\\mathcal{L}\\) and \\(\\mathcal{F}\\) (in the statement below, the notation \\(\\mathcal{F}^{(i)}_{(j)}\\) means that \\(\\mathcal{F}\\) has been differentiated \\(i\\) times with respect to time and \\(j\\) with respect to its second argument). Lemma 5.2: Let \\(T>0\\), \\(p=1\\) and \\(m=5/2\\). Then: **i.** For all \\(s\\geq t_{0}\\), the mapping \\(\\mathcal{L}:X^{s+m}\\to X^{s}\\) is well defined and continuous; moreover, the family of evolution operators \\((S^{\\varepsilon}(\\cdot))_{0<\\epsilon<\\epsilon_{0}}\\) is uniformly bounded in \\(C([-T,T];Lin(X^{s+m},X^{s}))\\). **ii.** For all \\(0\\leq i\\leq p\\) and \\(0\\leq i+j\\leq p+2\\), and for all \\(s\\geq t_{0}+im\\), one has \\[\\sup_{t\\in[0,T]}|\\epsilon^{i}\\mathcal{F}^{(i)}_{(j)}[t,U](V_{1}, \\ldots,V_{j})|_{s-im}\\leq C(s,T,|U|_{t_{0}+(i+1)m})\\] \\[\\times\\big{(}\\sum_{k=1}^{j}|V_{k}|_{s+m}\\prod_{l\ eq k}|V_{l}|_{t _{0}+(i+1)m}+|U|_{s+m}\\prod_{k=1}^{j}|V_{k}|_{t_{0}+(i+1)m}\\big{)},\\] for all \\(U\\in H^{s+m}(\\mathbb{R}^{2})\\) and \\((V_{1},\\ldots,V_{j})\\in H^{s+m}(\\mathbb{R}^{2})^{j}\\). Proof: **i.** The property on \\(S^{\\epsilon}(\\cdot)\\) follows from Proposition 4.3 with \\(\\underline{U}=(0,0)\\) (recall that we rescaled the time variable). In order to prove the continuity of \\(\\mathcal{L}\\), let us write, for all \\(W=(\\zeta,\\psi)^{T}\\), \\[|\\mathcal{L}W|_{X^{s}} \\leq|\\frac{1}{\\mu\ u}\\mathcal{G}[0]\\psi|_{H^{s}}+\\frac{ \\varepsilon}{\ u}|\\zeta|_{H^{s}}+|\\mathfrak{P}\\zeta|_{H^{s}}\\] \\[\\leq|\\frac{1}{\\mu\ u}\\mathcal{G}[0]\\psi|_{H^{s}}+C(\\epsilon)|\\zeta |_{H^{s+1}}.\\] One therefore deduces the continuity property on \\(\\mathcal{L}\\) from the following inequality: \\[\\big{|}\\frac{1}{\\mu\ u}\\mathcal{G}[0]\\psi\\big{|}_{H^{s}}\\leq C(\\frac{1}{h_{0} },\\varepsilon\\sqrt{\\mu},\\frac{\\beta}{\\varepsilon},|b|_{H^{s+2}})|\\mathfrak{P} \\psi|_{H^{s+1}};\\]for \\(\\mu\\geq 1\\), one has the uniform bound \\(\\frac{1}{\\mu\ u}\\lesssim 1/\\sqrt{\\mu}\\), and the inequality is a direct consequence of Proposition 3.3; for \\(\\mu\\leq 1\\), one has \\(\ u\\sim 1\\) and we rather use Remark 3.3. **ii.** Since by definition \\({\\mathcal{F}}[t,U]={\\mathcal{A}}[S^{\\varepsilon}(t)U^{0}+U]-{\\mathcal{A}}(S^{ \\varepsilon}(t)U^{0})\\), it follows from the first point that it suffices to prove the estimates in the case \\(i=0\\) and with \\({\\mathcal{F}}\\) replaced by \\({\\mathcal{A}}\\). 
Recall that \\({\\mathcal{A}}\\) is explicitly given by (4.2) and remark that \\[{\\mathcal{A}}_{1}[U] =-\\frac{1}{\\varepsilon\\mu}\\int_{0}^{1}d_{z\\zeta}{\\mathcal{G}}[ \\varepsilon\\cdot]\\psi\\cdot\\zeta dz,\\] \\[=\\int_{0}^{1}\\frac{1}{\\sqrt{\\mu}}{\\mathcal{G}}[\\varepsilon z \\zeta](\\zeta\\frac{1}{\\sqrt{\\mu}}\\underline{Z})+z\ abla^{\\gamma}\\cdot(\\zeta \\underline{{\\bf v}})dz,\\] where \\(\\underline{Z}\\) and \\(\\underline{{\\bf v}}\\) are as in Theorem 3.1 (with \\(\\zeta=\\zeta\\) and \\(\\psi=\\psi\\)). The estimates on \\({\\mathcal{A}}\\) are therefore a direct consequence of Proposition 3.3. The well-posedness of (5.2) is deduced from the general Nash-Moser theorem for singular evolution equations of [2] (Theorem 1'), provided that the three assumptions (Assumptions 1',2' and 3' in [2]) it requires are satisfied. The first two, which concern the linear operator \\({\\mathcal{L}}\\) and the nonlinearity \\({\\mathcal{F}}[t,\\cdot]\\), are exactly the results stated in Lemma 5.2. The third assumption concerns the linearized operator around \\(\\underline{V}\\) associated to (5.2); after remarking that \\[\\partial_{t}+\\frac{1}{\\varepsilon}{\\mathcal{L}}+d_{\\underline{V}}{\\mathcal{F} }[t,\\cdot]={\\mathfrak{L}}_{(\\underline{\\zeta},\\underline{\\psi})},\\] with \\(\\underline{U}=(\\underline{\\zeta},\\underline{\\psi})^{T}=S^{\\epsilon}(t)U^{0}+ \\underline{V}\\), one can check that this last assumption is exactly the result stated in Proposition 4.1, provided that the following quantity (which is the first iterate of the Nash-Moser scheme, see Remark 3.2.2 of [2]) \\[U_{0}:=t\\mapsto S^{\\epsilon}(t)U^{0}+\\int_{0}^{t}S^{\\epsilon}(t-t^{\\prime}){ \\mathcal{F}}[t^{\\prime},U^{0}]dt^{\\prime} \\tag{5.3}\\] is an admissible reference state in the sense of Definition 4.1 on the time interval \\([0,T]\\) (recall that we rescaled the time variable). Taking a smaller \\(T\\) if necessary, it is sufficient to check the admissibility at \\(t=0\\), which is equivalent to the two assumptions made in the statement of the theorem (after remarking that \\({\\mathfrak{a}}(\\zeta^{0},\\psi^{0})=\\underline{{\\mathfrak{a}}}(t=0)\\), with \\(\\underline{{\\mathfrak{a}}}\\) as defined in (4.5) and \\(\\underline{U}=U_{0}\\)). The proof is thus complete. We end this section with a proposition showing that the Taylor sign condition \\[\\inf_{{\\mathbb{R}}^{2}}{\\mathfrak{a}}(\\zeta^{0}_{p},\\psi^{0}_{p})_{p\\in{ \\mathcal{P}}}>0,\\quad\\mbox{ uniformly with respect to}\\quad p\\in{\\mathcal{P}} \\tag{5.4}\\]can be replaced in Theorem 5.1 by a much simpler condition. We need to introduce first the \"anisotropic Hessian\" \\(\\mathcal{H}_{b}^{\\gamma}\\) associated to the bottom parameterization \\(b\\), \\[\\mathcal{H}_{b}^{\\gamma}:=\\left(\\begin{matrix}\\partial_{x}^{2}b&\\gamma^{2} \\partial_{xy}^{2}b\\\\ \\gamma^{2}\\partial_{xy}^{2}b&\\gamma^{4}\\partial_{y}^{2}b\\end{matrix}\\right)\\] and the initial velocity potential \\(\\Phi_{p}^{0}\\) given by the BVP \\[\\left\\{\\begin{aligned} &\\mu\\partial_{x}^{2}\\Phi_{p}^{0}+\\gamma^{2} \\mu\\partial_{y}^{2}\\Phi_{p}^{0}+\\partial_{z}^{2}\\Phi_{p}^{0}=0,\\qquad-1+\\beta b \\leq z\\leq\\varepsilon\\zeta_{p}^{0},\\\\ &\\Phi_{p}^{0}{}_{|_{z=\\varepsilon\\zeta_{p}^{0}}}=\\psi_{p}^{0}, \\qquad\\partial_{n}\\Phi_{p}^{0}{}_{|_{z=-1+\\beta b}}=0.\\end{aligned}\\right. 
\\tag{5.5}\\] **Proposition 5.1.** Let \\(t_{0}>1\\), \\(M>0\\) and \\(\\mathcal{P}\\subset\\mathcal{P}_{M}\\); let also \\(b\\in H^{t_{0}+2}(\\mathbb{R}^{2})\\), \\((\\zeta_{p}^{0},\\psi_{p}^{0})_{p\\in\\mathcal{P}}\\) be bounded in \\(\\widetilde{X}^{t_{0}+1}\\) and \\((\\Phi_{p}^{0})_{p\\in\\mathcal{P}}\\) solve the BVPs (5.5). Then, **i.** There exists \\(\\epsilon_{0}>0\\) such that (5.4) is satisfied if one replaces \\(\\mathcal{P}\\) by \\(\\mathcal{P}_{\\varepsilon_{0}}:=\\{p=(\\varepsilon,\\mu,\\gamma,\\beta)\\in\\mathcal{ P},\\varepsilon\ u^{-1}\\leq\\epsilon_{0}\\}\\); **ii.** If there exist \\(\\mu_{1}>0\\) and \\(\\underline{\\gamma}\\in C((0,1]\\times(0,\\mu_{1}])\\) such that for all \\(p=(\\varepsilon,\\mu,\\gamma,\\beta)\\in\\mathcal{P}\\) one has \\(\\mu\\leq\\mu_{1}\\) and \\(\\gamma=\\underline{\\gamma}(\\varepsilon,\\mu)\\), and if \\[-\\varepsilon^{2}\\beta\\mu\\mathcal{H}_{b}^{\\gamma}(\ abla\\Phi_{p}^{0}{}_{|_{z= -1+\\beta b}})\\leq 1,\\quad\\text{ for all }\\quad p\\in\\mathcal{P},\\] then the Taylor sign condition (5.4) is satisfied. **Remark 5.6.** The first point of the proposition is used to check the Taylor condition in deep water regime; in this latter case, one has indeed \\(\\varepsilon/\ u\\sim\\varepsilon\\sqrt{\\mu}\\) which is the _steepness_ of the wave, the small parameter with respect to which asymptotic models are derived. The second point of the proposition is essential in the shallow-water regime (since \\(\\varepsilon/\ u\\) does not go to zero as \\(\\mu\\to 0\\)). It is important to notice that it implies that the Taylor condition _is automatically satisfied for flat bottoms_. **Remark 5.7.** S. Wu proved in [48; 49] that the Taylor sign condition (5.4) is automatically satisfied in infinite depth; this result was extended in [29] to finite depth with flat bottoms. The result needed here is stronger, since we want (5.4) to be satisfied uniformly with respect to the parameters. In the \\(1DH\\)-case, for flat bottoms, and in the particular case of the shallow-water regime, such a result was established in [33]. Proof: _As in the proof of Theorem 5.1, we omit the index \\(p\\) to alleviate the notations_. **i.** As seen in the proof of Theorem 5.1, one has \\(\\mathfrak{a}(\\zeta^{0},\\psi^{0})=\\underline{\\mathfrak{a}}(t=0)\\), where \\(\\underline{\\mathfrak{a}}\\) is as defined in (4.5) (with \\(\\underline{U}=U_{0}\\) and \\(U_{0}\\) given by (5.3)). Thus, \\(\\mathfrak{a}(\\zeta^{0},\\psi^{0})=1+\\frac{\\varepsilon}{\ u}\\underline{ \\mathfrak{b}}\\) and \\(|\\underline{\\mathfrak{a}}|_{L^{\\infty}}\\geq 1-\\epsilon_{0}|\\underline{ \\mathfrak{b}}|_{L^{\\infty}}\\). It follows from Lemma 4.1 that for the range of parameters considered here, \\(|\\underline{\\mathfrak{b}}|_{L^{\\infty}}\\) is uniformly bounded on \\([0,T]\\), so that the result follows when \\(\\epsilon_{0}\\) is small enough. **ii.** Step 1: _There exists \\(\\mu_{0}>0\\) such that (5.4) is satisfied for all \\(p=(\\varepsilon,\\mu,\\gamma,\\beta)\\in\\mathcal{P}\\) such that \\(\\mu\\leq\\mu_{0}\\)_. It is indeed a consequence of Remark 3.3 that \\(|\\underline{\\mathfrak{b}}|_{L^{\\infty}}=O(\\sqrt{\\mu})\\) as \\(\\mu\\to 0\\); since moreover \\(\\epsilon=\\frac{\\varepsilon}{\ u}\\) remains bounded, one can conclude as in the first step. Step 2. _The case \\(\\mu\\geq\\mu_{0}\\)_. 
For all time \\(t\\), let \\(\\Phi(t)\\) denote the solution of the BVP (5.5), with the Dirichlet condition at the surface replaced by \\(\\Phi^{0}\\mid_{{}_{z=\\varepsilon^{0}}}=\\psi_{0}(t)\\), where \\(U_{0}(t)=(\\zeta_{0}(t),\\psi_{0}(t))\\) is given by (5.3). Let us also define the \"pressure\" \\(P\\) as \\[-\\frac{1}{\\varepsilon}P:=\\partial_{t}\\Phi+\\frac{1}{2}\\big{(}\\frac{\\varepsilon }{\ u}|\ abla^{\\gamma}\\Phi|^{2}+\\frac{\\varepsilon}{\\mu\ u}(\\partial_{z}\\Phi)^ {2}\\big{)}+\\frac{1}{\\varepsilon}z.\\] Since \\(U_{0}=(\\zeta_{0},\\psi_{0})\\) solves (1.4) at \\(t=0\\), one can check as in Proposition 4.4 of [29] that \\(P(t=0,X,\\varepsilon\\zeta^{0}(X))=0\\). Differentiating this relation with respect to \\(X\\) shows that \\(-\ abla^{\\gamma}\\zeta^{0}\\cdot\ abla^{\\gamma}P=\\varepsilon|\ abla^{\\gamma} \\zeta^{0}|^{2}\\partial_{z}P\\) on the surface, from which one deduces the identity, \\[(1+\\varepsilon^{2}|\ abla\\zeta^{0}|^{2})^{1/2}\\partial_{n}P_{|_{z=\\varepsilon \\zeta^{0}}}=(1+\\varepsilon^{2}\\mu|\ abla^{\\gamma}\\zeta^{0}|^{2})\\partial_{z}P _{|_{z=\\varepsilon\\zeta^{0}}},\\] where \\(\\partial_{n}P_{|_{z=\\varepsilon\\zeta^{0}}}\\) stands for the outwards conormal derivative associated to the elliptic operator \\(\\mu\\partial_{x}^{2}+\\gamma^{2}\\mu\\partial_{y}^{2}+\\partial_{z}^{2}\\). Expressing \\(\\Phi\\) and its derivatives evaluated at the surface in terms of \\(\\Psi\\), one can then check that \\[\\underline{\\mathfrak{a}}(t=0)=-\\frac{1}{1+\\varepsilon^{2}\\mu^{2}|\ abla^{ \\gamma}\\zeta^{0}|^{2}}(1+\\varepsilon^{2}|\ abla\\zeta^{0}|^{2})^{1/2}\\partial_ {n}P_{|_{z=\\varepsilon\\zeta^{0}}}. \\tag{5.6}\\] Let us now remark that \\(P\\) solves the BVP \\[\\left\\{\\begin{aligned} &(\\mu\\partial_{x}^{2}+\\gamma^{2}\\mu \\partial_{y}^{2}+\\partial_{z}^{2})P=h,\\qquad-1+\\beta b\\leq z\\leq\\varepsilon \\zeta^{0},\\\\ & P_{|_{z=\\varepsilon\\zeta^{0}}}=0,\\qquad\\partial_{n}P_{|_{z=-1+ \\beta b}}=g,\\end{aligned}\\right.\\] with \\[h :=-\\frac{1}{2}(\\mu\\partial_{x}^{2}+\\gamma^{2}\\mu\\partial_{y}^{2 }+\\partial_{z}^{2})(\\frac{\\varepsilon^{2}}{\ u}|\ abla^{\\gamma}\\Phi|^{2}+ \\frac{\\varepsilon^{2}}{\\mu\ u}(\\partial_{z}\\Phi)^{2})\\] \\[g :=-\\frac{1}{2}\\partial_{n}(\\frac{\\varepsilon^{2}}{\ u}|\ abla^{ \\gamma}\\Phi|^{2}+\\frac{\\varepsilon^{2}}{\\mu\ u}(\\partial_{z}\\Phi)^{2})_{|_{z=- 1+\\beta b}}-\\partial_{n}(z)_{|_{z=-1+\\beta b}}.\\] Exactly as in the proof of Proposition 4.15 of [29], one can check that \\(h\\leq 0\\) and use a maximum principle (using the fact that (5.6) links \\(\\underline{\\mathfrak{a}}\\) to the normal derivative of \\(P\\) at the surface) to show that if \\(g\\leq 0\\) then there exists a constant \\(c(\\varepsilon,\\mu,\\beta)>0\\) such that \\(\\underline{\\mathfrak{a}}(t_{0})\\geq c(\\varepsilon,\\mu,\\beta)\\). We thus turn to prove that \\(g\\leq 0\\). 
Recall that by construction of \\(\\Phi\\), \\[(1+\\beta^{2}|\ abla b|^{2})^{1/2}\\partial_{n}\\Phi_{|_{z=-1+\\beta b}}\\big{(}=\\beta \\mu\ abla^{\\gamma}b\\cdot\ abla^{\\gamma}\\Phi_{|_{z=-1+\\beta b}}-\\partial_{z} \\Phi_{|_{z=-1+\\beta b}}\\big{)}=0.\\] Differentiating this relation with respect to \\(j\\) (\\(j=x,y\\)), one gets \\[(1+\\beta^{2}|\ abla b|^{2})^{1/2}\\partial_{n}(\\partial_{j}\\Phi)_{ |_{z=-1+\\beta b}}=-\\beta\\mu\ abla^{\\gamma}\\partial_{j}b\\cdot\ abla^{\\gamma} \\Phi_{|_{z=-1+\\beta b}}\\] \\[+\\beta\\partial_{j}b(1+\\beta^{2}|\ abla b|^{2})^{1/2}\\partial_{n} (\\partial_{z}\\Phi)_{|_{z=-1+\\beta b}},\\] and using this formula one computes \\[\\frac{1}{2}(1+\\beta^{2}|\ abla b|^{2})^{1/2}\\partial_{n}(\\frac{ \\varepsilon^{2}}{\ u}|\ abla^{\\gamma}\\Phi|^{2}+\\frac{\\varepsilon^{2}}{\\mu\ u} (\\partial_{z}\\Phi)^{2})_{|_{z=-1+\\beta b}}\\] \\[=-\\frac{\\varepsilon^{2}\\beta\\mu}{\ u}\\big{(}(\\partial_{x}\\Phi)^{ 2}\\partial_{x}^{2}b+2\\gamma^{2}\\partial_{x}\\Phi\\partial_{y}\\Phi\\partial_{xy}^ {2}b+\\gamma^{4}(\\partial_{y}\\Phi)^{2}\\partial_{y}^{2}b\\big{)}\\] \\[=-\\frac{\\varepsilon^{2}\\beta\\mu}{\ u}\\mathcal{H}_{b}^{\\gamma}( \ abla\\Phi_{|_{z=-1+\\beta b}});\\] since moreover \\((1+\\beta^{2}|\ abla b|^{2})^{1/2}\\partial_{n}(z)_{|_{z=-1+\\beta b}}=1\\), one gets \\(g\\geq 0\\) if the condition given in the statement of the proposition is fulfilled. As detailed above, we therefore have \\(\\underline{\\mathfrak{a}}(t=0)\\geq c(\\varepsilon,\\mu,\\beta)>0\\); moreover, there exists by assumption \\(\\mu_{1}\\) such that for all \\(p=(\\varepsilon,\\mu,\\gamma,\\beta)\\in\\mathcal{P}\\), one has \\(\\mu\\leq\\mu_{1}\\); due to the first point of the proposition, Step 1 and the fact the \\(\\gamma=\\underline{\\gamma}(\\varepsilon,\\mu)\\), it is sufficient to prove the proposition for all the parameters \\(p\\in\\mathcal{P}_{1}\\) with \\[\\mathcal{P}_{1}:=[\\varepsilon_{0},1]\\times[\\mu_{0},\\mu_{1}]\\times\\underline{ \\gamma}([\\varepsilon_{0},1]\\times[\\mu_{0},\\mu_{1}])\\times[0,1]\\ \\ (\\varepsilon_{0}:=(1+\\sqrt{\\mu_{1}})^{-1} \\epsilon_{0}).\\] The dependence of \\(c(\\varepsilon,\\mu,\\beta)>0\\) on \\(\\varepsilon\\), \\(\\mu\\) and \\(\\beta\\) is continuous and therefore, \\(\\inf_{[\\varepsilon_{0},1]\\times[\\mu_{0},\\mu_{1}]\\times[0,1]}c(\\varepsilon,\\mu, \\beta)>0\\). ## 6 Asymptotics for \\(3d\\) water-waves We will now provide a rigorous justification of the main asymptotic models used in coastal oceanography. **Remark 6.1**: _Throughout this section, we assume the following: - \\(P\\) and \\(D\\) are as in the statement of Theorem 5.1; - \\(\\Phi^{0}\\) stands for the initial velocity potential as in Proposition 5.1; - The bottom parameterization satisfies \\(b\\in H^{s+P}(\\mathbb{R}^{2})\\); - Except for the KP equations, we always consider fully transverse waves (\\(\\gamma=1\\)), but one could easily use the methods set in this paper to derive and justify weakly transverse models in the other regimes._ ### Shallow-water and Serre regimes We recall that the so-called \"shallow-water\" regime corresponds to the conditions \\(\\mu\\ll 1\\) (so that \\(\ u\\sim 1\\)) and \\(\\varepsilon=\\gamma=1\\); we also consider bottom variations which can be of large amplitude (\\(\\beta=1\\)). Without restriction, we can assume that \\(\ u=1\\) (which corresponds to the nondimensionalization (1.4)). 
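For the reader's orientation, the scalings considered in this and the following subsections correspond, in the notation \\(p=(\\varepsilon,\\mu,\\gamma,\\beta)\\) of Theorem 5.1, to the following choices (this summary merely collects the conventions stated in the corresponding subsections): \\[\\begin{array}{ll}\\text{shallow water / Green-Naghdi:}&\\varepsilon=\\beta=\\gamma=1,\\ \\mu\\ll 1;\\\\ \\text{Serre:}&\\varepsilon=\\beta=\\sqrt{\\mu},\\ \\gamma=1,\\ \\mu\\ll 1;\\\\ \\text{long waves (Boussinesq):}&\\mu=\\varepsilon\\ll 1,\\ \\beta=\\varepsilon,\\ \\gamma=1;\\\\ \\text{KP:}&\\mu=\\varepsilon\\ll 1,\\ \\beta=0,\\ \\gamma=\\sqrt{\\varepsilon};\\\\ \\text{full dispersion (deep water):}&\\gamma=1,\\ \\beta=0,\\ \\varepsilon\\sqrt{\\mu}\\ll 1.\\end{array}\\]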
The shallow-water model - which goes back to Airy [1] and Friedrichs [19]- consists of neglecting the \\(O(\\mu)\\) terms in the water-waves equations, while the Green-Naghdi equations [21, 22, 46] is a more precise approximation, which neglects only the \\(O(\\mu^{2})\\) quantities. The Serre equations [44, 46] are quite similar to the Green-Naghdi equations, but assume that the bottom and surface variations are of medium amplitude: \\(\\varepsilon=\\beta=\\sqrt{\\mu}\\). #### 6.1.1 The shallow-water equations The shallow water equations are \\[\\left\\{\\begin{aligned} &\\partial_{t}V+\ abla\\zeta+\\frac{1}{2} \ abla|V|^{2}=0,\\\\ &\\partial_{t}\\zeta+\ abla\\cdot\\big{(}(1+\\zeta-b)V\\big{)}=0,\\end{aligned}\\right. \\tag{6.1}\\] and the following theorem shows that they provide a good approximation to the exact solution of the water-waves equations. **Theorem 6.1** (Shallow-water equations): Let \\(s\\geq t_{0}>1\\) and \\((\\zeta_{\\mu}^{0},\\psi_{\\mu}^{0})_{0<\\mu<1}\\) be bounded in \\(\\widetilde{X}^{s+P}\\). Assume moreover that there exist \\(h_{0}>0\\) and \\(\\mu_{0}>0\\) such that for all \\(\\mu\\in(0,\\mu_{0})\\), \\[\\inf_{\\mathbb{R}^{2}}(1+\\zeta_{\\mu}^{0}-b)\\geq h_{0}\\quad\\text{ and }\\quad-\\mu\\mathcal{H}_{b}^{\\gamma}(\ abla\\Phi_{\\mu\\mid_{z=-1+b}}^{0})\\leq 1.\\] Then there exists \\(T>0\\) and: 1. a unique family \\((\\zeta_{\\mu},\\psi_{\\mu})_{0<\\mu<\\mu_{0}}\\) bounded in \\(C([0,T];\\widetilde{X}^{s+D})\\) and solving (1.4) with initial conditions \\((\\zeta_{\\mu}^{0},\\psi_{\\mu}^{0})_{0<\\mu<\\mu_{0}}\\); 2. a unique family \\((V_{\\mu}^{SW},\\zeta_{\\mu}^{SW})_{0<\\mu<\\mu_{0}}\\) bounded in \\(C([0,T];H^{s+P-1/2}(\\mathbb{R}^{2})^{3})\\) and solving (6.1) with initial conditions \\((\\zeta_{\\mu}^{0},\ abla\\psi_{\\mu}^{0})_{0<\\mu<\\mu_{0}}\\). Moreover, one has, for some \\(C>0\\), \\[\\forall 0<\\mu<\\mu_{0},\\ \\ |\\zeta_{\\mu}-\\zeta_{\\mu}^{SW}|_{L^{\\infty}([0,T] \\times\\mathbb{R}^{2})}+|\ abla\\psi_{\\mu}-V_{\\mu}^{SW}|_{L^{\\infty}([0,T]\\times \\mathbb{R}^{2})}\\leq C\\mu.\\] **Remark 6.2**: The existence time provided by Theorem 5.1 is \\(O(1)\\), but is large in the sense that it does not shrink to zero when \\(\\mu\\to 0\\). **Remark 6.3**: Instead of assuming that the initial data \\((\\zeta_{\\mu}^{0},\\psi_{\\mu}^{0})_{0<\\mu<1}\\) are bounded in \\(\\widetilde{X}^{s+P}\\), we could assume that \\((\\zeta_{\\mu}^{0},\ abla\\psi_{\\mu}^{0})_{0<\\mu<1}\\) is bounded in \\(H^{s+P}(\\mathbb{R}^{2})^{3}\\) (because \\(|\\mathfrak{P}\\psi|_{H^{s+P}}\\lesssim|\ abla\\psi|_{H^{s+P}}\\), uniformly with respect to \\(\\mu\\in(0,1)\\)). **Remark 6.4**.: The \\(2DH\\) shallow-water model has been justified rigorously by Iguchi in a recent work [24], but under two restrictions: a) The velocity potential is assumed to have Sobolev regularity which implies that the velocity must satisfy some restrictive zero mass assumptions and b) the theorem holds only for very small values of \\(\\mu\\). These assumptions are removed in the above result. Proof.: The assumptions allow us to use Theorem 5.1 and Proposition 5.1 with \\(\\mathcal{P}=\\{1\\}\\times(0,\\mu_{0})\\times\\{1\\}\\times\\{1\\}\\), which proves the first part of the theorem. The second point of the theorem is straightforward since \\(|\ abla\\psi^{0}_{\\mu}|_{H^{s+P-1/2}}\\leq|\\mathfrak{P}\\psi^{0}_{\\mu}|_{H^{s+P}}\\) (recall that \\(\\mu<1\\) here), and because (6.1) is a quasilinear hyperbolic system (since \\(\\inf_{\\mathbb{R}^{2}}(1+\\zeta-b)>0\\)). 
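As an aside, the quasilinear hyperbolic system (6.1) is also easy to experiment with numerically. The sketch below is purely illustrative and not taken from the paper: it discretizes the one-dimensional, flat-bottom (\\(b=0\\)) version of (6.1) with a first-order Lax-Friedrichs scheme on a periodic grid; the grid size, time step and initial data are arbitrary choices made only for the illustration.

```python
import numpy as np

# Minimal 1D Lax-Friedrichs sketch for the flat-bottom shallow-water system (6.1):
#   d_t zeta + d_x((1 + zeta) v) = 0,   d_t v + d_x(zeta + v**2 / 2) = 0.
# Illustrative only: grid, time step and initial data are arbitrary choices.

N, L, T_final = 400, 20.0, 2.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)

zeta = 0.1 * np.exp(-x**2)          # small initial hump
v = np.zeros_like(x)                # fluid initially at rest
dt = 0.4 * dx                       # crude CFL choice (wave speeds are O(1) here)

def flux(zeta, v):
    return (1.0 + zeta) * v, zeta + 0.5 * v**2

t = 0.0
while t < T_final:
    f_zeta, f_v = flux(zeta, v)
    # periodic Lax-Friedrichs update
    zeta = 0.5 * (np.roll(zeta, 1) + np.roll(zeta, -1)) \
        - dt / (2 * dx) * (np.roll(f_zeta, -1) - np.roll(f_zeta, 1))
    v = 0.5 * (np.roll(v, 1) + np.roll(v, -1)) \
        - dt / (2 * dx) * (np.roll(f_v, -1) - np.roll(f_v, 1))
    t += dt

print("max |zeta| at t =", round(t, 2), ":", float(np.abs(zeta).max()))
```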
In order to prove the error estimate, plug the expansion furnished by Proposition 3.8 into (1.4) and take the gradient of the second equation in order to obtain a system of equations on \\(\\zeta_{\\mu}\\) and \\(V_{\\mu}=\ abla\\psi_{\\mu}\\). One gets \\[\\left\\{\\begin{aligned} &\\partial_{t}V_{\\mu}+\ abla\\zeta_{\\mu}+ \\frac{1}{2}\ abla|V_{\\mu}|^{2}=\\mu R^{1}_{\\mu},\\\\ &\\partial_{t}\\zeta_{\\mu}+\ abla\\cdot\\big{(}(1+\\zeta_{\\mu}-b)V_{ \\mu}\\big{)}=\\mu R^{2}_{\\mu},\\end{aligned}\\right. \\tag{6.2}\\] with \\((R^{1}_{\\mu},R^{2}_{\\mu})\\) uniformly bounded in \\(L^{\\infty}([0,T];H^{t_{0}}(\\mathbb{R}^{2})^{2+1})\\). An energy estimate on (6.2) thus gives a Sobolev error estimate from which one deduces the \\(L^{\\infty}\\) estimate of the theorem using the classical continuous embedding \\(H^{t_{0}}\\subset L^{\\infty}\\). #### 6.1.2 The Green-Naghdi and Serre equations Though corresponding to two different physical regimes, the Green-Naghdi and Serre equations can both be written at the same time if one assumes that \\(\\varepsilon=1\\) for the Green-Naghdi equations and \\(\\varepsilon=\\sqrt{\\mu}\\) for the Serre equations in the formulation below: \\[\\left\\{\\begin{aligned} &(h+\\mu\\mathcal{T}[h,\\varepsilon b]) \\partial_{t}V+h\ abla\\zeta+\\varepsilon h(V\\cdot\ abla)V\\\\ &\\qquad+\\mu\\varepsilon\\Big{[}\\frac{1}{3}\ abla\\big{(}h^{3} \\mathcal{D}_{V}\\mathrm{div}(V)\\big{)}+\\mathcal{Q}[h,\\varepsilon b](V)\\Big{]} =0\\\\ &\\partial_{t}\\zeta+\ abla\\cdot(hV)=0,\\end{aligned}\\right. \\tag{6.3}\\] where \\(h:=1+\\varepsilon(\\zeta-b)\\) while the linear operators \\(\\mathcal{T}[h,b]\\) and \\(\\mathcal{D}_{V}\\) and the quadratic form \\(\\mathcal{Q}[h,b](\\cdot)\\) are defined as \\[\\mathcal{T}[h,b]V :=-\\frac{1}{3}\ abla(h^{3}\ abla\\cdot V)+\\frac{1}{2}\\big{[} \ abla(h^{2}\ abla b\\cdot V)-h^{2}\ abla b\ abla\\cdot V\\big{]}\\] \\[\\qquad+h\ abla b\ abla b\\cdot V,\\] \\[\\mathcal{D}_{V} :=-(V\\cdot\ abla)+\\mathrm{div}(V),\\] \\[\\mathcal{Q}[h,b](V) :=\\frac{1}{2}\ abla\\big{(}h^{2}(V\\cdot\ abla)^{2}b\\big{)}+h\\big{(} \\frac{h}{2}\\mathcal{D}_{V}\\mathrm{div}(V)+(V\\cdot\ abla)^{2}b\\big{)}\ abla b.\\] Both the Green-Naghdi and Serre models are rigorously justified in the theorem below: **Theorem 6.2** (Green-Naghdi and Serre equations): Let \\(s\\geq t_{0}>1\\) and \\((\\zeta^{0}_{\\mu},\\psi^{0}_{\\mu})_{0<\\mu<1}\\) be bounded in \\(\\widetilde{X}^{s+P}\\). Let \\(\\varepsilon=1\\) (Green-Naghdi) or \\(\\varepsilon=\\sqrt{\\mu}\\) (Serre) and assume that for some \\(h_{0}>0\\), \\(\\mu_{0}>0\\) and for all \\(\\mu\\in(0,\\mu_{0})\\), \\[\\inf_{\\mathbb{R}^{2}}(1+\\varepsilon(\\zeta^{0}_{\\mu}-b))\\geq h_{0}\\quad\\text{ and }\\quad-\\mu\\varepsilon^{3}\\mathcal{H}^{\\gamma}_{b}(\ abla\\Phi^{0}_{\\mu\\;|_{z=-1+ cb}})\\leq 1.\\] Then there exists \\(T>0\\) and: 1. a unique family \\((\\zeta_{\\mu},\\psi_{\\mu})_{0<\\mu<\\mu_{0}}\\) bounded in \\(C([0,\\frac{T}{\\varepsilon}];\\widetilde{X}^{s+D})\\) and solving (1.4) with initial conditions \\((\\zeta^{0}_{\\mu},\\psi^{0}_{\\mu})_{0<\\mu<\\mu_{0}}\\); 2. a unique family \\((V^{GN}_{\\mu},\\zeta^{GN}_{\\mu})_{0<\\mu<\\mu_{0}}\\) bounded in \\(C([0,\\frac{T}{\\varepsilon}];H^{s}(\\mathbb{R}^{2})^{3})\\) and solving (6.3) with initial conditions \\((\\zeta^{0}_{\\mu},(1-\\frac{\\mu}{h^{0}}\\mathcal{T}[h^{0},\\varepsilon b])\ abla \\psi^{0}_{\\mu})\\) (with \\(h^{0}=1+\\varepsilon(\\zeta^{0}-b)\\)). 
Moreover, one has for some \\(C>0\\) independent of \\(\\mu\\in(0,\\mu_{0})\\), \\[|\\zeta_{\\mu}-\\zeta^{GN}_{\\mu}|_{L^{\\infty}([0,\\frac{T}{\\varepsilon}]\\times \\mathbb{R}^{2})}+|\ abla\\psi_{\\mu}-(1+\\frac{\\mu}{h}\\mathcal{T}[h,\\varepsilon b ])V^{GN}_{\\mu}|_{L^{\\infty}([0,\\frac{T}{\\varepsilon}]\\times\\mathbb{R}^{2})} \\leq C\\frac{\\mu^{2}}{\\varepsilon}.\\] **Remark 6.5**: The precision of the GN approximation (\\(\\varepsilon=\\beta=1\\)) is therefore one order better than the shallow-water equations. This model had been justified in \\(1DH\\) and for flat bottoms by Y. A. Li [33]. The theorem above is stated in \\(2DH\\) but one can cover the open case of \\(1DH\\) non-flat bottoms with a straightforward adaptation. **Remark 6.6**: In the Serre scaling, one has \\(\\varepsilon=\\sqrt{\\mu}\\), and the precision of the theorem is therefore \\(O(\\mu^{3/2})\\), which is worse than the \\(O(\\mu^{2})\\) precision of the GN model, but the approximation remains valid over a larger time scale (namely, \\(O(\\mu^{-1/2})\\) versus \\(O(1)\\) for GN). Notice also that at first order in \\(\\mu\\), the Serre equations reduce to a simple wave equation (speed \\(\\pm 1\\)) on \\(\\zeta\\) and \\(V\\), which is not the case for GN where the shallow-water equations (6.1) are found at first order. Proof: The first assertion of the theorem is exactly the same as in Theorem 6.1 in the GN case. For the Serre equations, it is also a direct consequence of Theorem 5.1 and Proposition 5.1, with \\(\\mathcal{P}=\\{(\\sqrt{\\mu},\\mu,1,\\sqrt{\\mu}),\\mu\\in(0,\\mu_{0})\\}\\). For the second assertion, we replace \\(\\mathcal{G}_{\\mu,\\gamma}[\\varepsilon\\zeta,\\beta b]\\) in (1.4) by the expansion given in Proposition 3.8 and take the gradient of the equation on \\(\\psi\\) to obtain \\[\\left\\{\\begin{aligned} &\\partial_{t}V_{\\mu}+\ abla\\zeta_{\\mu}+ \\frac{\\varepsilon}{2}\ abla|V_{\\mu}|^{2}-\\frac{\\varepsilon\\mu}{2}\ abla(h \ abla\\cdot V_{\\mu}-\\varepsilon\ abla b\\cdot V_{\\mu})^{2}=\\mu^{2}R^{1}_{\\mu}, \\\\ &\\partial_{t}\\zeta_{\\mu}+\ abla\\cdot\\left(hV\\right)=\\mu^{2}R^{2}_ {\\mu},\\end{aligned}\\right. \\tag{6.4}\\] with \\((R^{1}_{\\mu},R^{2}_{\\mu})_{\\mu}\\) bounded in \\(L^{\\infty}([0,\\frac{T}{\\varepsilon}];H^{t_{0}}(\\mathbb{R}^{2})^{2+1})\\) while \\(V\\) is defined as \\(V:=V_{\\mu}-\\frac{\\mu}{h}\\mathcal{T}[h,b]V_{\\mu}\\), so that \\(V_{\\mu}=V+\\frac{\\mu}{h}\\mathcal{T}[h,b]V+O(\\mu^{2})\\). Replacing \\(V_{\\mu}\\) by this expression in (6.4) and neglecting the \\(O(\\mu^{2})\\) terms then gives (6.3). The theorem is then a direct consequence of the well-posedness theorem for the Green-Naghdi and Serre equations proved in [2] and of the error estimates given in Theorem 3 of that reference. ### Long-waves regime: the Boussinesq approximation The long-wave regime is characterized by the scaling \\(\\gamma=1\\), \\(\\mu=\\varepsilon\\ll 1\\), so that one has \\(\ u\\sim 1\\). As for the shallow-water equations, we take \\(\ u=1\\) for notational convenience. When the bottom is non-flat, it is assumed that its variations are of the order of the size of the waves, that is, \\(\\beta=\\varepsilon\\). Since the pioneer work of Boussinesq [8], many formally equivalent systems, generically called Boussinesq systems, have been derived to model the dynamics of the waves under this scaling. Following [7], these systems where derived in a systematic way in [6; 5; 10; 9]. 
In [5; 9] some interesting _symmetric_ systems where introduced: \\[S^{\\prime}_{\\theta,p_{1},p_{2}}\\left\\{\\begin{aligned} &(1-\\varepsilon a_{2}\\Delta) \\partial_{t}V+\ abla\\zeta+\\varepsilon\\big{(}\\frac{1}{4}\ abla|V|^{2}+\\frac{1} {2}(V\\cdot\ abla)V+\\frac{1}{2}V\ abla\\cdot V\\\\ &\\qquad\\qquad\\qquad\\qquad\\qquad+\\frac{1}{4}\ abla|\\zeta|^{2}- \\frac{1}{2}b\ abla\\zeta+a_{1}\\Delta\ abla\\zeta\\big{)}=0,\\\\ &(1-\\varepsilon a_{4}\\Delta)\\partial_{t}\\zeta+\ abla\\!\\cdot\\!V+ \\frac{\\varepsilon}{2}\\big{(}\ abla\\!\\cdot\\!\\big{(}(\\zeta-b)V\\big{)}+a_{3} \\Delta\ abla\\cdot V)=0,\\end{aligned}\\right.\\] where the coefficients \\(a_{j}\\) (\\(j=1,\\ldots,4\\)) depend on \\(p_{1},p_{2}\\in\\mathbb{R}\\) and \\(\\theta\\in[0,1]\\) through the relations \\(a_{1}=(\\frac{\\theta^{2}}{2}-\\frac{1}{6})p_{1}\\), \\(a_{2}=(\\frac{\\theta^{2}}{2}-\\frac{1}{6})(1-p_{1})\\), \\(a_{3}=\\frac{1-\\theta^{2}}{2}p_{2}\\), and \\(a_{4}=\\frac{1-\\theta^{2}}{2}(1-p_{2})\\); some choices of parameters yield \\(a_{1}=a_{3}\\) and \\(a_{2}\\geq 0\\), \\(a_{4}\\geq 0\\), and the corresponding systems \\(S^{\\prime}_{\\theta,p_{1},p_{2}}\\) are the completely symmetric systems mentioned above. The so-called Boussinesq approximation associated to a family of initial data \\((\\zeta_{\\varepsilon}^{0},\\psi_{\\varepsilon}^{0})_{0<\\varepsilon<1}\\) is given by \\[\\zeta_{\\varepsilon}^{app}=\\zeta_{\\varepsilon}^{B}\\quad\\text{ and }\\quad V_{ \\varepsilon}^{app}=(1\\!-\\!\\frac{\\varepsilon}{2}(1\\!-\\!\\theta^{2})\\Delta)\\big{(} 1\\!-\\!\\frac{\\varepsilon}{2}(\\zeta_{\\varepsilon}^{B}\\!-\\!b)V_{\\varepsilon}^{B} \\big{)}, \\tag{6.5}\\] where \\((V_{\\varepsilon}^{B},\\zeta_{\\varepsilon}^{B})_{0<\\varepsilon<1}\\) solves \\(S^{\\prime}_{\\theta,p_{1},p_{2}}\\) with initial data \\[V_{\\varepsilon}^{B,0}=\\big{(}1+\\frac{\\varepsilon}{2}(\\zeta_{\\varepsilon}^{0}- b)\\big{)}\\big{(}1-\\frac{\\varepsilon}{2}(1-\\theta^{2})\\Delta\\big{)}^{-1}\ abla \\psi_{\\varepsilon}^{0}\\quad\\text{ and }\\quad\\zeta_{\\varepsilon}^{B,0}=\\zeta_{ \\varepsilon}^{0}. \\tag{6.6}\\] The following theorem fully justifies this approximation. Theorem 6.3 (Boussinesq systems): Let \\(s\\geq t_{0}>1\\) and \\((\\zeta_{\\varepsilon}^{0},\\psi_{\\varepsilon}^{0})_{0<\\varepsilon<1}\\) be bounded in \\(\\widetilde{X}^{s+P}\\) and assume that there exist \\(h_{0}>0\\) and \\(\\varepsilon_{0}>0\\) such that for all \\(\\varepsilon\\in(0,\\varepsilon_{0})\\), \\[\\inf_{\\mathbb{R}^{2}}(1+\\varepsilon(\\zeta_{\\varepsilon}^{0}-b))\\geq h_{0} \\quad\\text{ and }\\quad-\\varepsilon^{4}\\mathcal{H}_{b}^{\\gamma}(\ abla\\Phi_{\\mu\\mid_{ \\varepsilon=-1+\\varepsilon b}}^{0})\\leq 1.\\]Then there exists \\(T>0\\) and: 1. a unique family \\((\\zeta_{\\varepsilon},\\psi_{\\varepsilon})_{0<\\varepsilon<\\varepsilon_{0}}\\) bounded in \\(C([0,\\frac{T}{\\varepsilon}];\\widetilde{X}^{s+D})\\) and solving (1.4) with initial conditions \\((\\zeta_{\\varepsilon}^{0},\\psi_{\\varepsilon}^{0})_{0<\\varepsilon<\\varepsilon_{0}}\\); 2. a unique family \\((V_{\\varepsilon}^{B},\\zeta_{\\varepsilon}^{B})_{0<\\varepsilon<\\varepsilon_{0}}\\) bounded in \\(C([0,\\frac{T}{\\varepsilon}];H^{s+P-\\frac{1}{2}}(\\mathbb{R}^{2})^{3})\\) and solving \\(S^{\\prime}_{\\theta,p_{1},p_{2}}\\) with initial conditions (6.6). 
Moreover, for some \\(C>0\\) independent of \\(\\varepsilon\\in(0,\\varepsilon_{0})\\), one has \\[\\forall 0\\leq t\\leq\\frac{T}{\\varepsilon},\\qquad|\\zeta_{\\varepsilon}(t)-\\zeta_{\\varepsilon}^{app}(t)|_{\\infty}+|\\nabla\\psi_{\\varepsilon}(t)-V_{\\varepsilon}^{app}(t)|_{\\infty}\\leq C\\varepsilon^{2}t,\\] where \\((V_{\\varepsilon}^{app},\\zeta_{\\varepsilon}^{app})\\) is given by (6.5). **Remark 6.7**.: The above theorem justifies _all_ the Boussinesq systems and not only the completely symmetric Boussinesq systems considered here: it is proved in [5, 9] that the justification of _all_ the Boussinesq systems follows directly from the justification of _one_ of them, in the sense that their solutions (if they exist!) provide an approximation of order \\(O(\\varepsilon^{2}t)\\) to the water-waves equations. Proof: The "if-theorems" of [5, 9] prove the result assuming that the first statement of the theorem holds, which is a direct consequence of Theorem 5.1 and Proposition 5.1 with \\(\\mathcal{P}=\\{(\\varepsilon,\\varepsilon,1,\\varepsilon),\\varepsilon\\in(0,\\varepsilon_{0})\\}\\). ### Weakly transverse long-waves: the KP approximation We recall that the KP regime is the same as the long-waves regime, but with \\(\\gamma=\\sqrt{\\varepsilon}\\). Moreover, we assume here that the bottom is flat, for the sake of simplicity. The KP approximation [25] consists in replacing the exact water elevation \\(\\zeta_{\\varepsilon}\\) by the sum of two counter-propagating waves, slowly modulated by a KP equation; more precisely, one defines \\(\\zeta_{\\varepsilon}^{KP}\\) as \\[\\zeta_{\\varepsilon}^{KP}(t,x)=\\frac{1}{2}\\big{(}\\zeta_{+}(\\varepsilon t,\\sqrt{\\varepsilon}y,x-t)+\\zeta_{-}(\\varepsilon t,\\sqrt{\\varepsilon}y,x+t)\\big{)} \\tag{6.7}\\] where \\(\\zeta_{\\pm}(\\tau,Y,X)\\) solve the KP equation \\[\\partial_{\\tau}\\zeta_{\\pm}\\pm\\frac{1}{2}\\partial_{X}^{-1}\\partial_{Y}^{2}\\zeta_{\\pm}\\pm\\frac{1}{6}\\partial_{X}^{3}\\zeta_{\\pm}\\pm\\frac{3}{2}\\zeta_{\\pm}\\partial_{X}\\zeta_{\\pm}=0.\\qquad(KP)_{\\pm}\\] This approximation is rigorously justified in the theorem below: **Theorem 6.4 (KP equation).** Let \\(s\\geq t_{0}>1\\) and \\((\\zeta^{0},\\psi^{0})\\in\\widetilde{X}^{s+P}\\) and assume that there exist \\(h_{0}>0\\) and \\(\\varepsilon_{0}>0\\) such that for all \\(\\varepsilon\\in(0,\\varepsilon_{0})\\), \\[\\inf_{\\mathbb{R}^{2}}(1+\\varepsilon\\zeta^{0})\\geq h_{0},\\] and assume also that \\((\\partial_{y}^{2}\\partial_{x}\\psi^{0},\\partial_{y}^{2}\\zeta^{0})\\in\\partial_{x}^{2}H^{s+P}(\\mathbb{R}^{2})^{2}\\). Then there exists \\(T>0\\) and: 1. a unique family \\((\\zeta_{\\varepsilon},\\psi_{\\varepsilon})_{0<\\varepsilon<\\varepsilon_{0}}\\) solving (1.4) with initial conditions \\((\\zeta^{0},\\psi^{0})\\) and such that \\((\\zeta_{\\varepsilon})_{0<\\varepsilon<\\varepsilon_{0}}\\), \\((\\partial_{x}\\psi_{\\varepsilon})_{0<\\varepsilon<\\varepsilon_{0}}\\) and \\((\\sqrt{\\varepsilon}\\partial_{y}\\psi_{\\varepsilon})_{0<\\varepsilon<\\varepsilon_{0}}\\) are bounded in \\(C([0,\\frac{T}{\\varepsilon}];H^{s+D-1/2})\\); 2. a unique solution \\(\\zeta_{\\pm}\\in C([0,T];H^{s+P-1/2}(\\mathbb{R}^{2}))\\) to (KP)\\({}_{\\pm}\\) with initial condition \\((\\zeta^{0}\\pm\\partial_{x}\\psi^{0})/2\\).
Moreover, one has the following error estimate for the approximation (6.7): \\[\\lim_{\\varepsilon\\to 0}\\left|\\zeta_{\\varepsilon}-\\zeta_{\\varepsilon}^{KP}\\right|_{L^{\\infty}([0,\\frac{T}{\\varepsilon}]\\times\\mathbb{R}^{2})}=0.\\] **Remark 6.8**: The very restrictive "zero mass" assumptions that \\(\\partial_{y}^{2}\\partial_{x}\\psi^{0}\\) and \\(\\partial_{y}^{2}\\zeta^{0}\\) are twice the derivative of a Sobolev function come from the singular component \\(\\partial_{X}^{-1}\\partial_{Y}^{2}\\) of the KP equations (KP)\\({}_{\\pm}\\). Furthermore, the error estimate is much worse than for the Boussinesq approximations. These two drawbacks are removed if one replaces the KP approximation by the approximation furnished by the _weakly transverse Boussinesq systems_ introduced in [32]. As shown in [32], the first assertion of the above theorem rigorously justifies these systems: they provide an approximation of order \\(O(\\varepsilon^{2}t)\\) on the time interval \\([0,T/\\varepsilon]\\), and do not require the "zero mass" assumptions. Proof: As for the Boussinesq systems, we only have to prove the first assertion of the theorem, and the whole result then follows from the "if-theorem" of [32]. Taking \\(\\mathcal{P}=\\{(\\varepsilon,\\varepsilon,\\sqrt{\\varepsilon},0),\\varepsilon\\in(0,\\varepsilon_{0})\\}\\), Theorem 5.1 and Proposition 5.1 give a family of solutions \\((\\zeta_{\\varepsilon},\\psi_{\\varepsilon})_{0<\\varepsilon<\\varepsilon_{0}}\\) bounded in \\(C([0,\\frac{T}{\\varepsilon}];\\widetilde{X}^{s+D})\\). In particular, \\((\\left|\\mathfrak{P}\\psi_{\\varepsilon}\\right|_{H^{s+D}})_{\\varepsilon}\\) is bounded, and thus \\((\\left|\\nabla^{\\gamma}\\psi_{\\varepsilon}\\right|_{H^{s+D-1/2}})_{\\varepsilon}\\) is also bounded. Since \\(\\gamma=\\sqrt{\\varepsilon}\\), one has \\(\\left|\\partial_{x}\\psi_{\\varepsilon}\\right|_{H^{s+D-1/2}}+\\sqrt{\\varepsilon}\\left|\\partial_{y}\\psi_{\\varepsilon}\\right|_{H^{s+D-1/2}}\\lesssim\\left|\\nabla^{\\gamma}\\psi_{\\varepsilon}\\right|_{H^{s+D-1/2}}\\) and the claim follows. ### Deep water #### 6.4.1 Full dispersion model We present here the so-called full dispersion (or Matsuno) model for deep water-waves. Contrary to all the asymptotic models seen above, the shallowness parameter \\(\\mu\\) is allowed to take large values (deep water) provided that the _steepness_ of the waves \\(\\varepsilon\\sqrt{\\mu}\\) remains small; without restriction, we can therefore take \\(\\nu=\\mu^{-1/2}\\) here (i.e., we use the nondimensionalization (A.2)). Introducing \\(\\epsilon=\\varepsilon\\sqrt{\\mu}\\), the full dispersion model derived in [36; 37; 11] can be written in the case of flat bottoms (\\(\\beta=0\\)): \\[\\left\\{\\begin{aligned} &\\partial_{t}\\zeta-\\mathcal{T}_{\\mu}V+\\epsilon\\big{(}\\mathcal{T}_{\\mu}(\\zeta\\nabla\\mathcal{T}_{\\mu}V)+\\nabla\\cdot(\\zeta V)\\big{)}=0,\\\\ &\\partial_{t}V+\\nabla\\zeta+\\epsilon\\big{(}\\tfrac{1}{2}\\nabla|V|^{2}-\\nabla\\zeta\\mathcal{T}_{\\mu}\\nabla\\zeta\\big{)}=0,\\end{aligned}\\right. \\tag{6.8}\\]where \\(\\mathcal{T}_{\\mu}\\) is a Fourier multiplier defined as \\[\\forall V\\in\\mathfrak{S}(\\mathbb{R}^{2})^{2},\\qquad\\widehat{\\mathcal{T}_{\\mu}V}(\\xi)=-\\frac{\\tanh(\\sqrt{\\mu}|\\xi|)}{|\\xi|}(i\\xi)\\cdot\\widehat{V}(\\xi).\\] Since \\(\\beta=0\\) (flat bottom) and \\(\\gamma=1\\) (fully transverse), the full dispersion model depends on two parameters \\((\\varepsilon,\\mu)\\) which are linked by a small steepness assumption: \\[\\exists\\epsilon_{0}>0,\\qquad(\\varepsilon,\\mu)\\in\\mathcal{P}_{\\epsilon_{0}}\\subset\\{(\\varepsilon,\\mu)\\in(0,1]\\times[1,\\infty),\\epsilon:=\\varepsilon\\sqrt{\\mu}\\leq\\epsilon_{0}\\}.\\] The well-posedness of the full-dispersion model has not been investigated yet, but we can prove that if a solution exists on \\([0,\\frac{T}{\\epsilon}]\\) (\\(\\epsilon>0\\) small enough), then the solution of the water-waves equations exists over the same time interval and is well approximated by the solution of the full-dispersion model: **Theorem 6.5** (Full-dispersion model): Let \\(\\epsilon_{0}>0\\), \\(Q\\geq P\\) large enough, \\(s\\geq t_{0}>1\\) and \\((\\zeta^{0},\\psi^{0})\\in\\widetilde{X}^{s+P}\\), and assume that \\[\\forall\\varepsilon\\in(0,1],\\qquad\\inf_{\\mathbb{R}^{2}}(1+\\varepsilon(\\zeta^{0}-b))\\geq h_{0}>0.\\] Let also \\(\\underline{T}>0\\) and let \\((\\zeta_{\\varepsilon,\\mu}^{FD},V_{\\varepsilon,\\mu}^{FD})_{(\\varepsilon,\\mu)\\in\\mathcal{P}_{\\epsilon_{0}}}\\) be bounded in \\(C([0,\\frac{T}{\\epsilon}],H^{s+Q}(\\mathbb{R}^{2})^{3})\\) and solving (6.8) with initial condition \\((\\zeta^{0},\\nabla\\psi^{0}-\\epsilon(\\mathcal{T}_{\\mu}\\nabla\\psi^{0})\\nabla\\zeta^{0})\\). Then, if \\(\\epsilon_{0}\\) is small enough, there is a unique family \\((\\zeta_{\\varepsilon,\\mu},\\psi_{\\varepsilon,\\mu})_{(\\varepsilon,\\mu)\\in\\mathcal{P}_{\\epsilon_{0}}}\\) bounded in \\(C([0,\\frac{T}{\\epsilon}];\\widetilde{X}^{s+D})\\) and solving (1.4) with initial conditions \\((\\zeta^{0},\\psi^{0})\\). In addition, for some \\(C>0\\) independent of \\((\\varepsilon,\\mu)\\in\\mathcal{P}_{\\epsilon_{0}}\\), one has \\[|\\zeta_{\\varepsilon,\\mu}-\\zeta_{\\varepsilon,\\mu}^{FD}|_{L^{\\infty}([0,\\frac{T}{\\epsilon}]\\times\\mathbb{R}^{2})}+|\\nabla\\psi_{\\varepsilon,\\mu}-V_{\\varepsilon,\\mu}^{FD}|_{L^{\\infty}([0,\\frac{T}{\\epsilon}]\\times\\mathbb{R}^{2})}\\leq C\\epsilon\\qquad(\\epsilon:=\\varepsilon\\sqrt{\\mu}).\\] Proof: Since \\(\\mathcal{T}_{\\mu}:H^{r}(\\mathbb{R}^{2})^{2}\\to H^{r}(\\mathbb{R}^{2})\\) is continuous with operator norm bounded from above by \\(1\\), the mapping \\(V\\in H^{r}(\\mathbb{R}^{2})^{2}\\mapsto V-\\epsilon(\\mathcal{T}_{\\mu}V)\\nabla\\zeta\\in H^{r}(\\mathbb{R}^{2})^{2}\\) is continuous for all \\(r\\geq t_{0}\\) and \\(\\zeta\\in H^{r+1}(\\mathbb{R}^{2})\\). Moreover, this mapping is invertible for \\(\\epsilon\\) small enough, and one can accordingly define \\(\\widetilde{V}:=(1-\\epsilon\\nabla\\zeta\\mathcal{T}_{\\mu})^{-1}V\\), so that \\(V=\\widetilde{V}-\\epsilon(\\mathcal{T}_{\\mu}\\widetilde{V})\\nabla\\zeta\\).
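The operator-norm bound just used can be read off the symbol: since \\(|\\frac{\\tanh(\\sqrt{\\mu}|\\xi|)}{|\\xi|}(i\\xi)\\cdot\\widehat{V}(\\xi)|\\leq\\tanh(\\sqrt{\\mu}|\\xi|)|\\widehat{V}(\\xi)|\\leq|\\widehat{V}(\\xi)|\\), the bound by \\(1\\) holds on every \\(H^{r}\\). As an aside, this Fourier-multiplier structure also makes \\(\\mathcal{T}_{\\mu}\\) inexpensive to evaluate numerically with the FFT, which is part of the appeal of models such as (6.8) for computations (see also SS6.4.2 below). The following sketch is an illustration only, not code from the paper; the periodic grid, the value of \\(\\mu\\) and the test field are arbitrary choices.

```python
import numpy as np

# Evaluate the Fourier multiplier T_mu of (6.8) on a 2D periodic grid via the FFT:
#   (T_mu V)^(xi) = -(tanh(sqrt(mu)|xi|)/|xi|) * (i xi) . V^(xi).
# Illustrative sketch only: grid, value of mu and test field are arbitrary.

N, L, mu = 128, 2 * np.pi, 4.0
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi          # angular wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
kabs = np.hypot(kx, ky)

# symbol tanh(sqrt(mu)|xi|)/|xi|, extended by its limit sqrt(mu) at xi = 0
sym = np.where(kabs > 0,
               np.tanh(np.sqrt(mu) * kabs) / np.where(kabs > 0, kabs, 1.0),
               np.sqrt(mu))

def T_mu(V1, V2):
    div_hat = 1j * kx * np.fft.fft2(V1) + 1j * ky * np.fft.fft2(V2)
    return -np.real(np.fft.ifft2(sym * div_hat))

# test on a smooth periodic vector field
x = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
V1, V2 = np.sin(X) * np.cos(Y), np.cos(2 * X) * np.sin(Y)
W = T_mu(V1, V2)

# tanh(sqrt(mu)|xi|) <= 1, consistent with the operator-norm bound used above
print("max of tanh(sqrt(mu)|xi|) on the grid:", float((sym * kabs).max()))
print("output shape:", W.shape)
```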
Replacing \(V\) by the expression \(V=\widetilde{V}-\epsilon(\mathcal{T}_{\mu}\widetilde{V})\nabla\zeta\) in (6.8) gives \[\left\{\begin{aligned} &\partial_{t}\zeta-\mathcal{T}_{\mu}\widetilde{V}+\epsilon\big(\nabla\cdot(\zeta\widetilde{V})+\mathcal{T}_{\mu}\nabla(\zeta\mathcal{T}_{\mu}\widetilde{V})\big)=\epsilon^{2}r_{\epsilon}^{1},\\ &\partial_{t}\widetilde{V}+\nabla\zeta+\epsilon\tfrac{1}{2}\big(\nabla|\widetilde{V}|^{2}-\nabla(\mathcal{T}_{\mu}\widetilde{V})^{2}\big)=\epsilon^{2}\nabla r_{\epsilon}^{2},\end{aligned}\right. \tag{6.9}\] where the exact expression of \(R_{\epsilon}:=(r_{\epsilon}^{1},\nabla r_{\epsilon}^{2})_{\epsilon}\) is of no importance. Now, let \(\partial_{t}+L\) denote the linear part of the above system and \(S(t)\) its evolution operator: \(L:=\left(\begin{array}{cc}0&-\mathcal{T}_{\mu}\\ \nabla&0\end{array}\right)\), and for all \(U=(\zeta,V)\), \(S(t)U:=u(t)\), where \(u\) solves \((\partial_{t}+L)u=0\) with initial condition \(u_{|_{t=0}}=U.\) Since \(\mathcal{T}_{\mu}\) is a Fourier multiplier, one can find an explicit expression for \(S(t)\), but we only need the following property: \(S(t)\) is unitary on \(Z^{r}\) (\(r\in\mathbb{R}\)) defined as \[\begin{split} Z^{r}:=&\{U=(\zeta,V)\in H^{r}(\mathbb{R}^{2})^{3},\\ &|U|_{Z^{r}}:=|\zeta|_{H^{r}}+\big|\big(\tfrac{\tanh(\sqrt{\mu}|\xi|)}{|\xi|}\big)^{1/2}V\big|_{H^{r}}<\infty\}.\end{split}\] Writing \(\widetilde{u}:=(\zeta,\widetilde{V})\), we define \(w:=(\widetilde{\zeta},W)\) as \[w:=\widetilde{u}-\epsilon^{2}\int_{0}^{t}S(t-t^{\prime})R_{\epsilon}(t^{\prime})dt^{\prime};\] remarking that \(|V|_{H^{r-1/2}}\lesssim\big|\big(\tfrac{\tanh(\sqrt{\mu}|\xi|)}{|\xi|}\big)^{1/2}V\big|_{H^{r}}\) and that \[\forall f\in H^{r+1}(\mathbb{R}^{2}),\qquad\big|\big(\tfrac{\tanh(\sqrt{\mu}|\xi|)}{|\xi|}\big)^{1/2}\nabla f\big|_{H^{r+1/2}}\lesssim|f|_{H^{r+1}},\] uniformly with respect to \(\mu\geq 1\), and since \(S(t)\) is unitary on \(Z^{r}\), one gets \[\forall r\geq 0,\qquad\sup_{[0,\frac{T}{\epsilon}]}|w(t)-\widetilde{u}(t)|_{H^{r}}\lesssim\epsilon\underline{T}\sup_{[0,\frac{T}{\epsilon}]}(|r_{\epsilon}^{1}|_{H^{r+1/2}}+|r_{\epsilon}^{2}|_{H^{r+1}}). \tag{6.10}\] Furthermore, one immediately checks that \(w\) solves \[\begin{cases}\partial_{t}\widetilde{\zeta}-\mathcal{T}_{\mu}W+\epsilon f^{1}(\widetilde{\zeta},W)=\epsilon^{2}k_{\epsilon}^{1},\\ \partial_{t}W+\nabla\widetilde{\zeta}+\epsilon\nabla f^{2}(\widetilde{\zeta},W)=\epsilon^{2}\nabla k_{\epsilon}^{2},\end{cases}\] with initial condition \(w_{|_{t=0}}=(\zeta^{0},\nabla\psi^{0})^{T}\), and where \(k_{\epsilon}^{1}:=\frac{1}{\epsilon}(f^{1}(w)-f^{1}(\widetilde{u}))\), \(k_{\epsilon}^{2}:=\frac{1}{\epsilon}(f^{2}(w)-f^{2}(\widetilde{u}))\), and \[f^{1}(\zeta,V):=\nabla\cdot(\zeta V)+\mathcal{T}_{\mu}\nabla(\zeta\mathcal{T}_{\mu}V),\qquad f^{2}(\zeta,V):=\frac{1}{2}\big(|V|^{2}-(\mathcal{T}_{\mu}V)^{2}\big);\] from (6.10), one gets in particular that \[|(k_{\epsilon}^{1},k_{\epsilon}^{2})|_{X_{\underline{T}}^{s+P}}+|(\partial_{t}k_{\epsilon}^{1},\partial_{t}k_{\epsilon}^{2})|_{X_{\underline{T}}^{s+P-5/2}}\lesssim C(\underline{T},|(\zeta,V)|_{X_{\underline{T}}^{s+Q}}), \tag{6.11}\] provided that \(Q\) is large enough.
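For completeness, here is a short verification of the unitarity property of \(S(t)\) used above; it relies only on the definition of \(\mathcal{T}_{\mu}\). If \((\partial_{t}+L)(\zeta,V)=0\), then in Fourier variables \(\partial_{t}\widehat{\zeta}=-\frac{\tanh(\sqrt{\mu}|\xi|)}{|\xi|}i\xi\cdot\widehat{V}\) and \(\partial_{t}\widehat{V}=-i\xi\widehat{\zeta}\), so that \[\frac{d}{dt}\Big(|\widehat{\zeta}(t,\xi)|^{2}+\frac{\tanh(\sqrt{\mu}|\xi|)}{|\xi|}|\widehat{V}(t,\xi)|^{2}\Big)=-2\frac{\tanh(\sqrt{\mu}|\xi|)}{|\xi|}\,\mathrm{Re}\,\Big(i\big(\overline{\widehat{\zeta}}\,\xi\cdot\widehat{V}+\overline{\xi\cdot\widehat{V}}\,\widehat{\zeta}\big)\Big)=0,\] the term between the inner parentheses being real (it is of the form \(z+\overline{z}\)). Multiplying by \((1+|\xi|^{2})^{r}\) and integrating in \(\xi\), one sees that \(S(t)\) preserves the norm \(\big(|\zeta|_{H^{r}}^{2}+\big|\big(\tfrac{\tanh(\sqrt{\mu}|\xi|)}{|\xi|}\big)^{1/2}V\big|_{H^{r}}^{2}\big)^{1/2}\), which is equivalent to \(|\cdot|_{Z^{r}}\).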
Using the fact that all the terms in the equation on \(W\) are gradients of scalar expressions, as is \(W_{|_{t=0}}\), it is possible to write \(w=(\widetilde{\zeta},\nabla\psi)^{T}\), where \((\widetilde{\zeta},\psi)\) solves \[\begin{cases}\partial_{t}\widetilde{\zeta}-\mathcal{T}_{\mu}\nabla\psi+\epsilon f^{1}(\widetilde{\zeta},\nabla\psi)=\epsilon^{2}k_{\epsilon}^{1},\\ \partial_{t}\psi+\widetilde{\zeta}+\epsilon f^{2}(\widetilde{\zeta},\nabla\psi)=\epsilon^{2}k_{\epsilon}^{2},\end{cases} \tag{6.12}\] with initial condition \((\widetilde{\zeta},\psi)_{|_{t=0}}=(\zeta^{0},\psi^{0})\). Remarking now that \(\mathcal{G}[0]\psi=\sqrt{\mu}\mathcal{T}_{\mu}\nabla\psi\) and writing \(U=(\widetilde{\zeta},\psi)\), one can check that (6.12) can be written \[\partial_{t}U+\mathcal{L}U+\epsilon\mathcal{A}^{(1)}[U]=\epsilon^{2}(k_{\epsilon}^{1},k_{\epsilon}^{2})^{T}, \tag{6.13}\] where \(\mathcal{A}^{(1)}[U]\) is given by the same formula (4.2) as \(\mathcal{A}[U]\), but with the Dirichlet-Neumann operator \(\mathcal{G}[\varepsilon\widetilde{\zeta}]\psi\) replaced by the first order expansion given in Proposition 3.9, and with the \(O(\epsilon^{2})\) terms neglected. One thus gets \[\partial_{t}U+\mathcal{L}U+\epsilon\mathcal{A}[U]=\epsilon^{2}H_{\epsilon},\] with \(H_{\epsilon}=(k_{\epsilon}^{1},k_{\epsilon}^{2})^{T}+\frac{1}{\epsilon}(\mathcal{A}^{(1)}[U]-\mathcal{A}[U])\). From (6.10), Proposition 3.9, and (6.11), one deduces that \((H_{\epsilon})_{\epsilon}\) is uniformly bounded in \(C([0,\frac{T}{\epsilon}],X^{s+P})\cap C^{1}([0,\frac{T}{\epsilon}],X^{s+P-5/2})\), and we can therefore conclude with Remark 5.4 and Proposition 5.1.

#### 6.4.2 A remark on a model used for numerical computations

The Dirichlet-Neumann operator is one of the main difficulties in the numerical computation of solutions to the water-waves equations (1.4), because it requires solving a \((d+1)\)-dimensional (\(d=1,2\) being the surface dimension) Laplace equation on a domain which changes at each time step. A common strategy is to replace the full Dirichlet-Neumann operator by an approximation which requires fewer computations. An efficient method, set forth in [15], consists in replacing the Dirichlet-Neumann operator by its \(n\)-th order expansion with respect to the surface elevation \(\zeta\). When \(n=1\), it turns out that the model thus obtained is exactly the same as the system (6.13) used in the proof of Theorem 6.5. We can therefore use this theorem to state that _the precision of the model used in the numerical computations of [15] is of the same order as the steepness of the wave_. One easily checks that when the \(n\)-th order expansion is used, the precision is of the same order as the \(n\)-th power of the steepness.

## Appendix A Nondimensionalization(s) of the equations

Depending on the value of \(\mu\), two distinct nondimensionalizations are commonly used in oceanography (see for instance [18]).
Namely, with dimensionless quantities denoted with a prime:

* Shallow water, i.e. \(\mu\ll 1\): one writes \[\begin{array}{l}x=\lambda x^{\prime},\,y=\frac{\lambda}{\gamma}y^{\prime},\qquad\quad z=dz^{\prime},\,t=\frac{\lambda}{\sqrt{gd}}t^{\prime},\\ \zeta=a\zeta^{\prime},\,\Phi=\frac{a}{d}\lambda\sqrt{gd}\Phi^{\prime},\,b=Bb^{\prime}.\end{array}\] (A.1)
* Deep water, i.e. \(\mu\gg 1\): one writes \[\begin{array}{l}x=\lambda x^{\prime},\,y=\frac{\lambda}{\gamma}y^{\prime},\qquad\ z=\lambda z^{\prime},\,t=\frac{\lambda}{\sqrt{g\lambda}}t^{\prime},\\ \zeta=a\zeta^{\prime},\;\Phi=a\sqrt{g\lambda}\Phi^{\prime},\,b=Bb^{\prime}.\end{array}\] (A.2)

Remark that when \(\mu\sim 1\), that is when \(\lambda\sim d\), both nondimensionalizations are equivalent. We therefore introduce the following general nondimensionalization, which is valid for all \(\mu>0\): \[\begin{array}{l}x=\lambda x^{\prime},\,y=\frac{\lambda}{\gamma}y^{\prime},\qquad\quad z=d\nu z^{\prime},\,t=\frac{\lambda}{\sqrt{gd\nu}}t^{\prime},\\ \zeta=a\zeta^{\prime},\;\Phi=\frac{a}{d}\lambda\sqrt{\frac{gd}{\nu}}\Phi^{\prime},\,b=Bb^{\prime},\end{array}\] where \(\nu\) is a smooth function of \(\mu\) such that \(\nu\sim 1\) when \(\mu\ll 1\) and \(\nu\sim\mu^{-1/2}(=\lambda/d)\) when \(\mu\gg 1\) (say, \(\nu=(1+\sqrt{\mu})^{-1}\)). The equations of motion (1.1) then become (after dropping the primes for the sake of clarity): \[\left\{\begin{array}{l}\nu^{2}\mu\partial_{x}^{2}\Phi+\nu^{2}\gamma^{2}\mu\partial_{y}^{2}\Phi+\partial_{z}^{2}\Phi=0,\qquad\frac{1}{\nu}(-1+\beta b)\leq z\leq\frac{\varepsilon}{\nu}\zeta,\\ -\nu^{2}\mu\nabla^{\gamma}(\frac{\beta}{\nu}b)\cdot\nabla^{\gamma}\Phi+\partial_{z}\Phi=0,\qquad z=\frac{1}{\nu}(-1+\beta b),\\ \partial_{t}\zeta-\frac{1}{\mu\nu^{2}}\big(-\nu^{2}\mu\nabla^{\gamma}(\frac{\varepsilon}{\nu}\zeta)\cdot\nabla^{\gamma}\Phi+\partial_{z}\Phi\big)=0,\qquad z=\frac{\varepsilon}{\nu}\zeta,\\ \partial_{t}\Phi+\frac{1}{2}\big(\frac{\varepsilon}{\nu}|\nabla^{\gamma}\Phi|^{2}+\frac{\varepsilon}{\mu\nu^{3}}(\partial_{z}\Phi)^{2}\big)+\zeta=0,\qquad z=\frac{\varepsilon}{\nu}\zeta.\end{array}\right.\] (A.3) In order to reduce this set of equations to a system of two evolution equations, define the Dirichlet-Neumann operator \(\mathcal{G}^{\nu}_{\mu,\gamma}[\frac{\varepsilon}{\nu}\zeta,\beta b]\) as \[\mathcal{G}^{\nu}_{\mu,\gamma}[\tfrac{\varepsilon}{\nu}\zeta,\beta b]\psi=\sqrt{1+|\nabla(\tfrac{\varepsilon}{\nu}\zeta)|^{2}}\,\partial_{n}\Phi_{|_{z=\frac{\varepsilon}{\nu}\zeta}},\] with \(\Phi\) solving the boundary value problem \[\left\{\begin{array}{l}\nu^{2}\mu\partial_{x}^{2}\Phi+\nu^{2}\gamma^{2}\mu\partial_{y}^{2}\Phi+\partial_{z}^{2}\Phi=0,\qquad\frac{1}{\nu}(-1+\beta b)\leq z\leq\frac{\varepsilon}{\nu}\zeta,\\ \Phi_{|_{z=\frac{\varepsilon}{\nu}\zeta}}=\psi,\qquad\partial_{n}\Phi_{|_{z=\frac{1}{\nu}(-1+\beta b)}}=0,\end{array}\right.\] (as always in this paper, \(\partial_{n}\Phi\) stands for the upwards conormal derivative associated to the elliptic equation).
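In the flat case (\(\zeta=0\), \(b=0\)) this boundary value problem can be solved explicitly by a horizontal Fourier transform; we record the short computation, since it is the one behind the identity \(\mathcal{G}[0]\psi=\sqrt{\mu}\mathcal{T}_{\mu}\nabla\psi\) used in the proof of Theorem 6.5. Writing \(|\xi^{\gamma}|^{2}:=\xi_{1}^{2}+\gamma^{2}\xi_{2}^{2}\), the solution on the flat strip \(-\frac{1}{\nu}\leq z\leq 0\) with \(\Phi_{|_{z=0}}=\psi\) and \(\partial_{z}\Phi_{|_{z=-1/\nu}}=0\) is \[\widehat{\Phi}(\xi,z)=\frac{\cosh\big(\nu\sqrt{\mu}|\xi^{\gamma}|(z+\frac{1}{\nu})\big)}{\cosh(\sqrt{\mu}|\xi^{\gamma}|)}\,\widehat{\psi}(\xi),\qquad\text{so that}\qquad\widehat{\mathcal{G}^{\nu}_{\mu,\gamma}[0,0]\psi}(\xi)=\partial_{z}\widehat{\Phi}(\xi,0)=\nu\sqrt{\mu}|\xi^{\gamma}|\tanh(\sqrt{\mu}|\xi^{\gamma}|)\,\widehat{\psi}(\xi).\] With the operator \(\mathcal{G}_{\mu,\gamma}=\frac{1}{\nu}\mathcal{G}^{\nu}_{\mu,\gamma}\) introduced below, this gives \(\mathcal{G}_{\mu,\gamma}[0,0]\) the symbol \(\sqrt{\mu}|\xi^{\gamma}|\tanh(\sqrt{\mu}|\xi^{\gamma}|)\), which for \(\gamma=1\) is precisely the symbol of \(\sqrt{\mu}\mathcal{T}_{\mu}\nabla\).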
As remarked in [52, 17, 16], the equations (A.3) are equivalent to a set of two equations on the free surface parameterization \(\zeta\) and the trace of the velocity potential at the surface \(\psi=\Phi_{|_{z=\frac{\varepsilon}{\nu}\zeta}}\), involving the Dirichlet-Neumann operator. Namely, \[\left\{\begin{aligned} &\partial_{t}\zeta-\frac{1}{\mu\nu^{2}}\mathcal{G}^{\nu}_{\mu,\gamma}[\tfrac{\varepsilon}{\nu}\zeta,\beta b]\psi=0,\\ &\partial_{t}\psi+\zeta+\frac{\varepsilon}{2\nu}|\nabla^{\gamma}\psi|^{2}-\frac{\varepsilon\mu}{\nu^{3}}\frac{(\frac{1}{\mu}\mathcal{G}^{\nu}_{\mu,\gamma}[\frac{\varepsilon}{\nu}\zeta,\beta b]\psi+\nu\nabla^{\gamma}(\varepsilon\zeta)\cdot\nabla^{\gamma}\psi)^{2}}{2(1+\varepsilon^{2}\mu|\nabla^{\gamma}\zeta|^{2})}=0.\end{aligned}\right.\] (A.4) In order to derive the system (1.4), let \(\mathcal{G}_{\mu,\gamma}[\varepsilon\zeta,\beta b]\cdot\) be the Dirichlet-Neumann operator \(\mathcal{G}^{\nu}_{\mu,\gamma}[\frac{\varepsilon}{\nu}\zeta,\beta b]\cdot\) corresponding to the case \(\nu=1\). One will easily check that \[\forall\nu>0,\qquad\mathcal{G}_{\mu,\gamma}[\varepsilon\zeta,\beta b]=\frac{1}{\nu}\mathcal{G}^{\nu}_{\mu,\gamma}[\tfrac{\varepsilon}{\nu}\zeta,\beta b],\] so that plugging this relation into (A.4) yields \[\left\{\begin{aligned} &\partial_{t}\zeta-\frac{1}{\mu\nu}\mathcal{G}_{\mu,\gamma}[\varepsilon\zeta,\beta b]\psi=0,\\ &\partial_{t}\psi+\zeta+\frac{\varepsilon}{2\nu}|\nabla^{\gamma}\psi|^{2}-\frac{\varepsilon\mu}{\nu}\frac{(\frac{1}{\mu}\mathcal{G}_{\mu,\gamma}[\varepsilon\zeta,\beta b]\psi+\nabla^{\gamma}(\varepsilon\zeta)\cdot\nabla^{\gamma}\psi)^{2}}{2(1+\varepsilon^{2}\mu|\nabla^{\gamma}\zeta|^{2})}=0.\end{aligned}\right.\]

###### Acknowledgements.

This work was supported by the ACI Jeunes Chercheuses et Jeunes Chercheurs "Dispersion et nonlinéarité".

## References

* [1] Airy, G. B.: Tides and waves. Encyclopaedia metropolitana, vol. 5, pp. 241-396. London (1845)
* [2] Alvarez-Samaniego, B., Lannes, D.: A Nash-Moser theorem for singular evolution equations. Application to the Serre and Green-Naghdi equations. Indiana Univ. Math. J., to appear.
* [3] Ambrose, D., Masmoudi, N.: The zero surface tension limit of two-dimensional water waves. Commun. Pure Appl. Math. **58**, 1287-1315 (2005)
* [4] Ben Youssef, W., Lannes, D.: The long wave limit for a general class of 2D quasilinear hyperbolic problems. Commun. Partial Differ. Equations **27**, 979-1020 (2002)
* [5] Bona, J. L., Colin, T., Lannes, D.: Long wave approximations for water waves. Arch. Ration. Mech. Anal. **178**, 373-410 (2005)
* [6] Bona, J. L., Chen, M., Saut, J.-C.: Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media. I: Derivation and linear theory. J. Nonlinear Sci. **12**, 283-318 (2002)
* [7] Bona, J. L., Smith, R.: A model for the two-way propagation of water waves in a channel. Math. Proc. Camb. Philos. Soc. **79**, 167-182 (1976)
* [8] Boussinesq, M. J.: Théorie de l'intumescence liquide appelée onde solitaire ou de translation se propageant dans un canal rectangulaire. C.R. Acad. Sci. Paris Sér. A-B **72**, 755-759 (1871)
* [9] Chazel, F.: Influence of topography on water waves. M2AN, to appear.
* [10] Chen, M.: Equations for bi-directional waves over an uneven bottom. Math. Comput. Simul.
**62**, 3-9 (2003)
* [11] Choi, W.: Nonlinear evolution equations for two-dimensional surface waves in a fluid of finite depth. J. Fluid Mech. **295**, 381-394 (1995)
* [12] Coutand, D., Shkoller, S.: Well-posedness of the free-surface incompressible Euler equations with or without surface tension. J. Amer. Math. Soc. **20**, 829-930 (2007)
* [13] Craig, W.: An existence theory for water waves and the Boussinesq and Korteweg-de Vries scaling limits. Commun. Partial Differ. Equations **10**, 787-1003 (1985)
* [14] Craig, W.: Nonstrictly hyperbolic nonlinear systems. Math. Ann. **277**, 213-232 (1987)
* [15] Craig, W., Guyenne, P., Hammack, J., Henderson, D., Sulem, C.: Solitary water wave interactions. Phys. Fluids **18**, no. 5, 057106, 25 pp. (2006)
* [16] Craig, W., Schanz, U., Sulem, C.: The modulational regime of three-dimensional water waves and the Davey-Stewartson system. Ann. Inst. Henri Poincaré, Anal. Non Linéaire **14**, 615-667 (1997)
* [17] Craig, W., Sulem, C., Sulem, P.-L.: Nonlinear modulation of gravity waves: a rigorous approach. Nonlinearity **5**, 497-522 (1992)
* [18] Dingemans, M. W.: Water wave propagation over uneven bottoms. Part 2: Non-linear wave propagation. Advanced Series on Ocean Engineering **13**, World Scientific, Singapore (1997)
* [19] Friedrichs, K. O.: On the derivation of the shallow water theory, Appendix to: The formation of breakers and bores by J. J. Stoker, in Commun. Pure Appl. Math. **1**, 1-87 (1948)
* [20] Gallay, T., Schneider, G.: KP description of unidirectional long waves. The model case. Proc. R. Soc. Edinb., Sect. A **131**, 885-898 (2001)
* [21] Green, A. E., Laws, N., Naghdi, P. M.: On the theory of water waves. Proc. R. Soc. Lond., Ser. A **338**, 43-55 (1974)
* [22] Green, A. E., Naghdi, P. M.: A derivation of equations for wave propagation in water of variable depth. J. Fluid Mech. **78**, 237-246 (1976)
* [23] Iguchi, T.: A long wave approximation for capillary-gravity waves and an effect of the bottom. Comm. Partial Differential Equations **32**, 37-85 (2007)
* [24] Iguchi, T.: A shallow water approximation for water waves. Preprint
* [25] Kadomtsev, B. B., Petviashvili, V. I.: On the stability of solitary waves in weakly dispersing media. Sov. Phys., Dokl. **15**, 539-541 (1970)
* [26] Kano, T.: L'équation de Kadomtsev-Petviashvili approchant les ondes longues de surface de l'eau en écoulement trois-dimensionnel. Patterns and waves. Qualitative analysis of nonlinear differential equations, Stud. Math. Appl. **18**, 431-444. North-Holland, Amsterdam (1986)
* [27] Kano, T., Nishida, T.: Sur les ondes de surface de l'eau avec une justification mathématique des équations des ondes en eau peu profonde. J. Math. Kyoto Univ. **19**, 335-370 (1979)
* [28] Kano, T., Nishida, T.: A mathematical justification for Korteweg-de Vries equation and Boussinesq equation of water surface waves. Osaka J. Math. **23**, 389-413 (1986)
* [29] Lannes, D.: Well-posedness of the water-waves equations. J. Am. Math. Soc. **18**, 605-654 (2005)
* [30] Lannes, D.: Sharp estimates for pseudo-differential operators with symbols of limited smoothness and commutators. J. Funct. Anal. **232**, 495-539 (2006)
* [31] Lannes, D.: Justifying Asymptotics for 3D Water-Waves. Instability in Models Connected with Fluid Flows. I. Edited by Claude Bardos and Andrei Fursikov, International Mathematical Series **6**, Springer (2007)
* [32] Lannes, D., Saut, J.-C.: Weakly transverse Boussinesq systems and the KP approximation.
Nonlinearity **19**, 2853-2875 (2006)
* [33] Li, Y. A.: A shallow-water approximation to the full water wave problem. Commun. Pure Appl. Math. **59**, 1225-1285 (2006)
* [34] Lindblad, H.: Well-posedness for the linearized motion of an incompressible liquid with free surface boundary. Commun. Pure Appl. Math. **56**, 153-197 (2003)
* [35] Lindblad, H.: Well-posedness for the motion of an incompressible liquid with free surface boundary. Ann. Math. (2) **162**, 109-194 (2005)
* [36] Matsuno, Y.: Nonlinear evolution of surface gravity waves on fluid of finite depth. Phys. Rev. Lett. **69**, 609-611 (1992)
* [37] Matsuno, Y.: Nonlinear evolution of surface gravity waves over an uneven bottom. J. Fluid Mech. **249**, 121-133 (1993)
* [38] Nalimov, V. I.: The Cauchy-Poisson problem. (Russian) Dinamika Splosn. Sredy Vyp. 18 Dinamika Zidkost. so Svobod. Granicami, 254, 104-210 (1974)
* [39] Nicholls, D., Reitich, F.: A new approach to analyticity of Dirichlet-Neumann operators. Proc. R. Soc. Edinb., Sect. A **131**, 1411-1433 (2001)
* [40] Ovsjannikov, L. V.: To the shallow water theory foundation. Arch. Mech. **26**, 407-422 (1974)
* [41] Ovsjannikov, L. V.: Cauchy problem in a scale of Banach spaces and its application to the shallow water theory justification. Appl. Methods Funct. Anal. Probl. Mech., IUTAM/IMU-Symp. Marseille 1975, Lect. Notes Math. 503, 426-437 (1976)
* [42] Paumond, L.: A rigorous link between KP and a Benney-Luke equation. Differ. Integral Equ. **16**, 1039-1064 (2003)
* [43] Schneider, G., Wayne, C. E.: The long-wave limit for the water wave problem. I: The case of zero surface tension. Commun. Pure Appl. Math. **53**, 1475-1535 (2000)
* [44] Serre, F.: Contribution à l'étude des écoulements permanents et variables dans les canaux. La Houille Blanche **3**, 374-388 (1953)
* [45] Shatah, J., Zeng, C.: Geometry and a priori estimates for free boundary problems of the Euler's equation. Preprint (http://arxiv.org/abs/math.AP/0608428)
* [46] Su, C.-H., Gardner, C. S.: Korteweg-de Vries equation and generalizations. III: Derivation of the Korteweg-de Vries equation and Burgers' equation. J. Math. Phys. **10**, 536-539 (1969)
* [47] Wright, J. D.: Corrections to the KdV approximation for water waves. SIAM J. Math. Anal. **37**, 1161-1206 (2005)
* [48] Wu, S.: Well-posedness in Sobolev spaces of the full water wave problem in 2-D. Invent. Math. **130**, 39-72 (1997)
* [49] Wu, S.: Well-posedness in Sobolev spaces of the full water wave problem in 3-D. J. Am. Math. Soc. **12**, 445-495 (1999)
* [50] Yosihara, H.: Gravity waves on the free surface of an incompressible perfect fluid of finite depth. Publ. Res. Inst. Math. Sci. **18**, 49-96 (1982)
* [51] Yosihara, H.: Capillary-gravity waves for an incompressible ideal fluid. J. Math. Kyoto Univ. **23**, 649-694 (1983)
* [52] Zakharov, V. E.: Stability of periodic waves of finite amplitude on the surface of a deep fluid. J. Appl. Mech. Tech. Phys. **2**, 190-194 (1968)